Yet Another Free Extension for Blue Iris Adding AI Object Detection/Reduction in False Alarms/Enhanced Notification of Activity: On Guard

Version 1.5.2 is now on GitHub as a "Pre-Release"/Beta here: Release On Guard Security Assistant Version 1.5.2 · Ken98045/On-Guard

Release notes:

1. The whole interface for identifying objects you wish to track has been changed. Along with this, you can now track any object DeepStack currently recognizes (yes, including pizza). Unfortunately, the list of objects is a little disorganized right now.
2. There are 2 "special" objects. The first is "* Any Vehicle", which means what it says. The second is "* Any Mammal", which includes all animals except people.
3. You can now add a delay before sending any URL/HTTP notification (see the sketch after this list).
4. You can flag any picture initiating an event as "Flagged" and/or "Confirmed" for Blue Iris. You can also pass the Blue Iris "Reset" flag to undo flags/confirmations.
5. You must re-define your areas/cameras because all settings are now stored in the Windows Registry. This should enable future updates that (hopefully) won't require you to do that again, except for major changes.
6. This release is a "pre-release"/Beta because of the numerous internal changes. I've spent a couple of days testing, but it is easy to miss something.
7. This release is much more complete/stable than the previous Beta (1.5.1) mentioned a few posts back.
8. The manual has been updated (briefly) to reflect the changes. However, more updates are needed. If you have questions, please ask.
9. Mike Farrington has begun contributing to the project. His help is very much appreciated. I hope that he continues to contribute in the future!
10. Ideas for enhancement have been incorporated into the project, and additional ideas are appreciated. I do have other enhancements I'm working on, so your requests may need to wait depending on priorities. If you do have a request, please provide details on exactly how you'd like it to work.
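For anyone curious what the delayed URL/HTTP notification in item 3 amounts to in practice, here is a minimal sketch. The trigger URL and the 5-second delay are hypothetical placeholders for illustration, not On Guard's actual internals.

```python
import threading
import urllib.request

def send_notification(url: str, delay_seconds: float) -> None:
    """Fire an HTTP GET after a delay, without blocking the caller."""
    def _fire():
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                print(f"Notification sent, HTTP {resp.status}")
        except Exception as exc:
            print(f"Notification failed: {exc}")
    # One-shot timer: runs _fire on a background thread after the delay.
    threading.Timer(delay_seconds, _fire).start()

# Hypothetical Blue Iris-style trigger URL; substitute your own.
send_notification("http://127.0.0.1:81/admin?trigger&camera=cam1", 5.0)
```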
 
Dunno if possible... but since OnGuard relies on DeepStack, is it possible to have a red/green indicator in OnGuard to show the DeepStack server status? Or even the seconds to process a DeepStack image in an OnGuard text box?
I gotta find out how to do masking. It keeps thinking a bush is a person. And that's a problem.
 
I will have to test this out.
I am using 6 instances of the Windows GPU version of DeepStack with the Vorlon version of AI Tools, and that has been working very well, but I'm always keen to try something new.
The Windows GPU version, like the updated CPU version, just starts from the command line, so it's easy to automate the start-up.
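Since the Windows builds start from the command line, spinning up several instances on different ports is easy to script. A rough sketch, assuming the `--VISION-DETECTION` and `--PORT` flags from DeepStack's documentation and a `deepstack` executable on PATH; the port list is a placeholder:

```python
import subprocess

# Ports for six DeepStack instances; adjust to taste.
PORTS = [5001, 5002, 5003, 5004, 5005, 5006]

processes = []
for port in PORTS:
    # Each instance listens on its own port with object detection enabled.
    proc = subprocess.Popen(
        ["deepstack", "--VISION-DETECTION", "True", "--PORT", str(port)]
    )
    processes.append(proc)

print(f"Started {len(processes)} DeepStack instances on ports {PORTS}")
```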
 
Put an "Ignore" area around the bush. I have had similar problems. Sometimes it depends entirely on lighting. One of the pictures in the manual has a "pig planter" in it. Sometimes it hits the animal definition, sometimes not. Deepstack isn't perfect, but it will probably get better over time.

Yes, I can put some timing data on the screen and the connection status. In order to keep the connection status I'll need to hit it with a picture every minute or so. That shouldn't be a lot of work. DeepStack timing is from the time a picture hits the queue to the time it it processed. So, your last picture (of several close together) may show 5 seconds even though the "actual" time it takes it to process an individual image may be 0.75 seconds. So, I think 5 seconds may be misleading, but so is 0.75 seconds. Probably the 5 seconds figure is more helpful?
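A sketch of the kind of keep-alive probe described above: post a small image to DeepStack's documented /v1/vision/detection endpoint once a minute and time the round trip. The host/port and the test image path are assumptions for illustration.

```python
import time
import requests  # pip install requests

DEEPSTACK_URL = "http://127.0.0.1:5000/v1/vision/detection"  # adjust host/port

def probe(image_path):
    """Return round-trip seconds if DeepStack answered, else None."""
    try:
        start = time.monotonic()
        with open(image_path, "rb") as f:
            resp = requests.post(DEEPSTACK_URL, files={"image": f}, timeout=15)
        if resp.ok and resp.json().get("success"):
            return time.monotonic() - start
    except requests.RequestException:
        pass
    return None

while True:
    rtt = probe("tiny_test.jpg")  # any small JPEG on disk
    if rtt is None:
        print("DeepStack RED (no response)")
    else:
        print("DeepStack GREEN, %.2fs round trip" % rtt)
    time.sleep(60)  # one probe a minute, as suggested above
```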
 
Yes, with the GPU version it is helpful to use multiple instances of DeepStack. However, DeepStack in GPU mode isn't entirely CPU neutral. In addition, if you are running Blue Iris on the same computer, the BI processing load also goes up.

I've had another request for supporting multiple DeepStack instances, so I'll work on that. However, the real solution is for DeepStack to improve their simultaneous processing for GPU. That should actually be quite easy given the complexity of the rest of their application. Having 6 DeepStack instances running on one machine has got to add more to memory use than their just fixing that would.

For the CPU version, just sending 3 or 4 pictures close together would max out the CPU (unless you could process on multiple remote computers), so there wasn't a huge need for multiple AI server settings for most people.

I should have support for multiple instances in a week or so, unless I get sidetracked by bugs and other more immediate work items.
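Until DeepStack handles concurrent GPU requests better, multi-instance support in practice usually means dispatching requests round-robin across the ports. A minimal sketch of that idea; the instance list and the image file are placeholders, not how On Guard will necessarily implement it.

```python
import itertools
import requests  # pip install requests

# One URL per running DeepStack instance; the ports are placeholders.
INSTANCES = [f"http://127.0.0.1:{port}/v1/vision/detection"
             for port in (5001, 5002, 5003)]
_next_instance = itertools.cycle(INSTANCES)

def detect(jpeg_bytes):
    """Send one image to the next instance in rotation."""
    url = next(_next_instance)
    resp = requests.post(url, files={"image": jpeg_bytes}, timeout=15)
    resp.raise_for_status()
    return resp.json().get("predictions", [])

with open("snapshot.jpg", "rb") as f:
    for obj in detect(f.read()):
        print(obj["label"], obj["confidence"])
```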
 
Thank you for the reply, although it does not align with my experience.

I am running BI on quite an old i7 with multiple 4K H.265 cams and Intel Beta hardware decoding. It's headless, with a dongle on the on-board HDMI and the BIOS set to use integrated graphics.
All images are sent to DeepStack in 4K, every 2 seconds per camera (when using the CPU version I had BI downsize them, but now I don't downsize them).

CPU usage of each Windows PowerShell instance peaks at 0.6% (0.3% Python, 0.3% redis-server; deepstack itself seems to sit at zero), so I would have to say that in my deployment DeepStack is effectively using less than 1% CPU.
My GPU is a very old GTX 745 (not overclocked) and is averaging 60 ms per image; it was 100 ms but has dropped further with the latest DeepStack update.

Note:
The GPU version in the documentation and on GitHub is using the CPU. If you follow the bugs, they link to the following latest compiled version for GPU.
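Given the note above about the GPU build silently falling back to the CPU, one quick sanity check is to watch GPU utilisation while a detection request is in flight. A sketch under some assumptions: nvidia-smi is on PATH, DeepStack is at the URL shown, and a test image exists on disk.

```python
import subprocess
import threading
import time
import requests  # pip install requests

def gpu_utilisation():
    """Read current GPU utilisation (%) from nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"])
    return int(out.decode().strip().splitlines()[0])

def send_detection():
    with open("snapshot.jpg", "rb") as f:  # any test image
        requests.post("http://127.0.0.1:5000/v1/vision/detection",
                      files={"image": f}, timeout=30)

t = threading.Thread(target=send_detection)
t.start()
while t.is_alive():
    # A CPU-only build will show little or no movement here during inference.
    print("GPU utilisation: %d%%" % gpu_utilisation())
    time.sleep(0.2)
t.join()
```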

 
Yes, I am using the latest GPU Windows Beta, but I haven't really had the time to look in any detail at the CPU use of DeepStack with that. I do see a barely noticeable GPU bump when a picture is processed. For me, with 4 cameras (2 high resolution, 2 low resolution, at 15 fps), I see BI at about 35% CPU. I'll get the multiple DeepStack instance support in relatively soon.

If I knew Python better I might just try to improve the DeepStack thread use, since it is now open source. Since I don't, I guess I'll just work around it.
 
Was gonna try to give this a shot tonight, but when I try to set notifications, I set autofill, reduce the cooldown, and set the confirmed flag for Blue Iris, then hit OK. But if I go back into that one, the only thing that saved is the autofill. The cooldown doesn't save, and the flags don't save.
 
By itself, CPU usage is a terrible metric, as it doesn't take into account clock speed; as we know, all modern CPUs are designed to run at higher utilisation but lower core speed to save energy.
With my setup (6 x 4K cameras, 2 x 2K cameras and 2 x 1080p wireless), Blue Iris uses 6-12% CPU and 25-27% of GPU 0 (Hardware Decode = Intel Beta). This did not change as a result of running AI Tool and DeepStack on the same server as BI. Nor could I see any increase in overall CPU running AI Tool with the Beta version from Digital Ocean. As I said earlier, the Beta on GitHub and in the doco still has the bug.

Like you, I do see a quick spike on GPU 1 (the NVIDIA card) during DeepStack processing, which is why I run multiple instances: to better utilise the GPU and for resiliency.

My CPU is quite old, an i7-6700, with Performance Monitor reporting the speed at 3.41 GHz (base spec is 3.4 GHz with a boost of 4 GHz), so it's really just sipping power. I am using dual streams with 24x7 recording.
It's the combination of dual streams and hardware decoding in BI that keeps my CPU usage so low, even on such an old device.

It's a long way of saying that, in my experience, I disagree that running DeepStack and Blue Iris on the same device increases the CPU usage of Blue Iris.
In fact, I would argue it's more efficient to run them on the same device, but there is no right or wrong. If you intended to say that running DeepStack on the same device as BI will increase overall CPU usage, then absolutely, especially if the GPU version is not working as intended, and this may push some setups over. It can also be easier to get DeepStack GPU working on a dedicated instance without conflicting with or messing up QuickSync hardware decoding. So really, it's whatever works.
 
Put an "Ignore" area around the bush. I have had similar problems. Sometimes it depends entirely on lighting. One of the pictures in the manual has a "pig planter" in it. Sometimes it hits the animal definition, sometimes not. Deepstack isn't perfect, but it will probably get better over time.

Yes, I can put some timing data on the screen and the connection status. In order to keep the connection status I'll need to hit it with a picture every minute or so. That shouldn't be a lot of work. DeepStack timing is from the time a picture hits the queue to the time it it processed. So, your last picture (of several close together) may show 5 seconds even though the "actual" time it takes it to process an individual image may be 0.75 seconds. So, I think 5 seconds may be misleading, but so is 0.75 seconds. Probably the 5 seconds figure is more helpful?
How about that; I was unaware of this "Ignore" area.
Any kind of visual notification that DeepStack has not crashed would be great.
 
I do not use autofill. I actually use the entire trigger URL link.
 
Any way I can manually test MQTT? I have had multiple cars drive through an area of interest where I am looking specifically for cars, and Blue Iris is recording and marking confirmed as required, but no MQTT messages are being sent. MQTT notify is green in that area's settings and the server is set up correctly in the app settings.
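One way to test MQTT manually, outside On Guard: subscribe to a topic and publish a test message to the same broker, proving the broker path works end to end. A sketch written against the paho-mqtt 1.x API; the broker address and topic are placeholders.

```python
import time
import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER, TOPIC = "192.168.1.10", "onguard/test"  # placeholders

def on_message(client, userdata, msg):
    print(f"Received on {msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_start()          # background network loop

client.publish(TOPIC, "hello from the test script")

time.sleep(2)                # give the round trip a moment
client.loop_stop()
```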
 
Disregard. It appears I got it working. The only thing I can't seem to figure out is the image part. I am decoding what the MQTT server is receiving for an image payload, and it is only about the top 5-10% of the image. I believe MQTT payloads are limited to roughly 256 MB, so I don't think image size is the issue.
 
I have to admit that I didn't test the image completely. I'll take a look soon.
 
I could also be doing something very wrong. I am trying to ingest it into Home Assistant for alerts, which isn't working at all. But if I pull the payload out of MQTT Explorer and try to decode it with a tool, it's only 5% of the image.

Overall, I'm glad to see progress on this. It does basically everything I need it to do; I just need to figure out the best way to get push notifications working.
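To rule out the viewer, a subscriber that writes the raw payload straight to disk takes MQTT Explorer's display handling out of the equation (some GUI tools clip large payloads for display). Another paho-mqtt 1.x sketch; the broker, topic, and the assumption that the image arrives base64-encoded are all placeholders for illustration.

```python
import base64
import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER, TOPIC = "192.168.1.10", "onguard/camera/image"  # placeholders

def on_message(client, userdata, msg):
    # Assumes a base64-encoded JPEG payload; adjust if it's raw bytes.
    jpeg = base64.b64decode(msg.payload)
    with open("received.jpg", "wb") as f:
        f.write(jpeg)
    print(f"Wrote {len(jpeg)} bytes; open received.jpg to check it's complete")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```

If the file on disk is the full image, the truncation is only in the viewer; if it's still the top 5-10%, the publisher is cutting the payload short.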