[tool] [tutorial] Free AI Person Detection for Blue Iris

So I have an NVIDIA GTX 970 card in my Blue Iris PC (4th-gen Intel). Should I run the GPU version of DeepStack? I want to try native DeepStack in BI...
 
A dealbreaker for me is whether it can run on a different computer. I think DeepStack on the same machine with lots of cameras will eat a lot of CPU...
You can use DeepStack on another machine with the Blue Iris integration if you need to, or you can run it on the same machine. You just put the IP address of the second machine in BI. BI does seem to use more CPU time at this stage than AITools, but I am sure that will improve over time.
I run DeepStack in an Ubuntu VM using VirtualBox on the same machine BI is running on, as the DeepStack Docker image seems to run more smoothly with less resource usage than the Windows version.
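For reference, I start the Docker image with something like this (image name and the VISION-DETECTION flag are from DeepStack's docs; the host port is whatever you point BI at):

```
# CPU image: enable object detection, expose the API on host port 80
docker run -d -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack

# GPU build (needs the NVIDIA container toolkit on the host)
docker run -d --gpus all -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack:gpu
```

You then enter the VM's IP and port in BI's AI settings, the same as for any remote DeepStack server.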
 
So I have an NVIDIA GTX 970 card in my Blue Iris PC (4th-gen Intel). Should I run the GPU version of DeepStack? I want to try native DeepStack in BI...
You can try the GPU version of DeepStack. It depends on how many cameras you have and how much RAM.
Which 4th-gen Intel is it? Is it a Core i5 or i7?
If you have at least 16GB of RAM in your system, then DeepStack on the same machine is worth a try, though if you have more than about 3 cameras and a couple may trigger at once, you will find a Core i5 will definitely lag out.
My primary system runs DS in Docker inside a VM, but I have a test system with 3 x 5MP cameras and DeepStack installed locally using the BI integration. It has 16GB of RAM and a Core i7-7700 CPU; when BI is processing AI with the CPU version of DeepStack it will regularly hit 80% CPU usage, and at idle it sits at 3 to 4%. It is BI using the CPU time, not DeepStack, as I monitor which programs are using the CPU. This test system does not have an NVIDIA GPU, so the DeepStack GPU version won't run on it properly. I plan on obtaining an NVIDIA GPU of some kind for it so I can test the GPU version of DeepStack and see whether there are any differences in object detection accuracy.
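If you want to check on your own system which process is using the CPU, a quick Python sketch with psutil will do it (the process names you'll see, e.g. BlueIris.exe or deepstack, depend on your install):

```
import time
import psutil

# First call primes the per-process CPU counters; sample over one second.
for p in psutil.process_iter():
    try:
        p.cpu_percent(None)
    except psutil.Error:
        pass
time.sleep(1.0)

# Print the ten heaviest processes since the last sample.
samples = []
for p in psutil.process_iter(['name']):
    try:
        samples.append((p.cpu_percent(None), p.info['name'] or '?'))
    except psutil.Error:
        pass
for cpu, name in sorted(samples, reverse=True)[:10]:
    print(f"{cpu:6.1f}%  {name}")
```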
 
I still use AITool as it gives a lot more granular control over things like the percentage size of a detected object and the percentage of confidence per object (in the newer versions).
I have cameras up on the second story due to the way the place is laid out, and AITools can be set to pick up objects that the native BI integration misses.

Even if development on AITools stopped right now and didn't proceed any further, you would still have a lot more granular control than the native integration offers, and it would continue to be usable for a long time.

In addition to VorlonCD's AITool being substantially more granular than the current BI implementation of the DeepStack integration, his AITool can also use other AI engines such as DOODS, Sighthound and the big gun, AWS Rekognition. VorlonCD has been very responsive to bug fixes and enhancement requests as well.
 
It would be great if there were a "face" function where you could upload faces to DeepStack, or more importantly, save unknown faces.
 
I run DeepStack with lots of cameras on the same machine with no problem. I do use the GPU version. I'd still use AITool though.
I still use AI Tools for LPR, but have settled on BI native. It starts DeepStack as a service, frees up a ton of memory, and ignoring stationary objects works much better for me than AI Tools' dynamic masking. To be clear, this wasn't the fault of AI Tools, but a fault in the way Blue Iris sometimes sends a low-quality and then a high-quality image (despite constant recording and an ample pre-record buffer).
It would be great if there were a "face" function where you could upload faces to DeepStack, or more importantly, save unknown faces.
If you use the Blue Iris native integration it's built in, including face detection.
All my custom use cases are now met by BI native. I still have 1 cam on AI Tools out of loyalty.
 
I still use AI Tools for LPR, but have settled on BI native. It starts DeepStack as a service, frees up a ton of memory, and ignoring stationary objects works much better for me than AI Tools' dynamic masking. To be clear, this wasn't the fault of AI Tools, but a fault in the way Blue Iris sometimes sends a low-quality and then a high-quality image (despite constant recording and an ample pre-record buffer).

If you use the Blue Iris native integration it's built in, including face detection.
All my custom use cases are now met by BI native. I still have 1 cam on AI Tools out of loyalty.
It's not really built in. You still have to run DeepStack separately. (Just being picky with terminology :) ) That said, the BI integration does meet most needs.
If you have specific needs, the AI Tool is still very good. It's nothing to do with loyalty, really. It's to do with what meets your needs best.
For me, the BI integration doesn't have the granularity I need given where my cameras are and what I need to detect.
I don't use Telegram or any of that stuff, as my external triggers are handled by MQTT messages published by Blue Iris (see the sketch below).
The BI integration tends to miss events that AI Tools will grab, but as I said, that is really down to my situation and the placement of my cameras. I think the BI integration will grab 98% of events for most home users.
AITools kicks butt with LPR though. The Plate Recognizer integration in BI is really hit and miss.
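For the curious, a bare-bones MQTT listener looks something like this (the broker address, topic and payload here are placeholders for whatever you configure in BI's alert actions):

```
import paho.mqtt.client as mqtt  # paho-mqtt 1.x style API

BROKER = "192.168.1.5"        # assumed broker address
TOPIC = "blueiris/alerts/#"   # assumed topic set in BI's MQTT alert action

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # The payload is whatever template you define in BI, e.g. "&CAM person"
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
```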

I never had memory issues with AITools. I never found it to use much memory on its own, and I limited my JPEG snapshot folder to keeping images for up to 3 hours to save disk space.

The one thing I really hope the BI DeepStack integration will include in the future is the ability to use more than one DS server. I find that one DS server struggles to keep up with several cameras in a busy environment. The multiple-server and refinement-server options in AITools are a godsend for me; otherwise the lag/delay in processing would trigger the cameras at the wrong times.
 
I actually said the "integration" was native, not the AI/DeepStack functionality... :)
But I agree with what you have said; it comes down to use cases. For instance, I am comfortable with BI and the direct DeepStack integration as I use constant record, but as you point out, AI Tools will flag events at a higher rate (BI will deliberately miss two events that are close together). That isn't a problem for me with constant recording, but I would hate to be someone recording on alert only; you need a really long trigger timeout to ensure you get all the footage.
BI can now send motion alerts to mobiles only when people are detected while you are away from home, while still flagging other relevant objects. That was my key use, as well as daily summaries and third-party integrations/web calls, but BI can do all that now.
I also have BI sending "Critical" iOS alerts when a person is detected between 11pm and 5am. Support for the iOS critical alert function is new to me.
Another gotcha: you need to select high-quality alert images in BI to get quality similar to AI Tools.
Funny you mention AI Tools, as the cam I still have on AI Tools is LPR, but using JPEGs created on alert only. Honestly, though, both have similar detection rates. My cams are 4K; I run a script to trim (not downscale) the image so it meets the plate analyzer's sizing, then upload it.
Uploading via a script provides more customization around which API to use, i.e. one when away, one when home. But I was actually able to configure the same with BI (i.e. still run the same script on alert and then upload).
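Roughly, the script does something like this (the crop box and file names are just placeholders, not my exact values; the endpoint and token header follow Plate Recognizer's documented API):

```
import requests
from PIL import Image

SNAPSHOT = "alert.jpg"                    # hypothetical BI alert JPEG
API_TOKEN = "YOUR_PLATERECOGNIZER_TOKEN"  # from your Plate Recognizer account

# Trim (crop, not downscale) the 4K frame to the region where plates
# appear, keeping the upload within the analyzer's size limits.
img = Image.open(SNAPSHOT)
plate_region = img.crop((800, 1200, 2400, 2000))  # left, top, right, bottom
plate_region.save("plate_crop.jpg", quality=90)

with open("plate_crop.jpg", "rb") as fp:
    resp = requests.post(
        "https://api.platerecognizer.com/v1/plate-reader/",
        headers={"Authorization": f"Token {API_TOKEN}"},
        files={"upload": fp},
    )
for result in resp.json().get("results", []):
    print(result["plate"], result.get("score"))
```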
Perhaps it was just the number of cams, but I did find the dynamic masking consuming a fair amount of CPU, and BI switching stills between the main and sub streams played havoc with dynamic masking, and masking in general. Get it working and then, bam, a BI update and it's unreliable again.
The other strange thing is that with AI Tools I had to run 6 separate GPU instances of DeepStack, whereas with BI I can just use 1 GPU instance. I think it's because BI is sending the lower-res alert images rather than full 4K JPEGs, or the DeepStack GPU version has just improved since I first set it up.
What I am doing, though, is finding uses for AI Tool outside of Blue Iris.
 
@spammenotinoz 100% agreed. Sending high-res images to DeepStack, especially if using the face recognition API, takes a fair chunk of CPU time, even if you use the GPU version of DeepStack.

I don't send the full high-res image to DeepStack using AITools. My cameras all have three feeds: low res, medium res and high res. I use the medium-res feed as the "sub stream", and this is the one Blue Iris sends to DeepStack. That said, I did not notice a huge difference in object detection between low res, medium res and high res during experimentation. Where it makes a big difference is facial recognition, which I only do on 3 of my cameras.
I settled on sending the medium-res feed to AI Tools by making it the substream; when Blue Iris takes a snapshot, it uses the substream. I don't use the high-res JPEG option except for the facial recognition :).
 
You can try the GPU version of DeepStack. It depends on how many cameras you have and how much RAM.
Which 4th-gen Intel is it? Is it a Core i5 or i7?
Thanks for the reply; sorry, I should have mentioned it is an i7 with 32GB of RAM. So I ended up installing the GPU version and I am up and running. Right now I have DeepStack on my Hik doorbell cam to test it out. It is working; I am getting detections...

Appreciate your input...
 
I don't send the full high-res image to DeepStack using AITools. My cameras all have three feeds: low res, medium res and high res. I use the medium-res feed as the "sub stream", and this is the one Blue Iris sends to DeepStack. That said, I did not notice a huge difference in object detection between low res, medium res and high res during experimentation. Where it makes a big difference is facial recognition, which I only do on 3 of my cameras.
I settled on sending the medium-res feed to AI Tools by making it the substream; when Blue Iris takes a snapshot, it uses the substream. I don't use the high-res JPEG option except for the facial recognition :).
How did you get BI to create snapshots via the substream? I couldn't find that setting; the only way I could do it was by setting up clones which then used the substream as the main feed.
 
It would be great if there were a "face" function where you could upload faces to DeepStack, or more importantly, save unknown faces.
There is in BI.
[Attached screenshots of the relevant Blue Iris DeepStack face settings]
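If you'd rather talk to DeepStack directly instead of through BI, the face endpoints cover both registering known faces and spotting unknowns. A minimal sketch, assuming a DeepStack server on localhost port 80 started with VISION-FACE=True (endpoints per DeepStack's docs):

```
import requests

DS = "http://localhost:80"  # assumed DeepStack address, VISION-FACE enabled

# Register a known face under a user id.
with open("john_1.jpg", "rb") as fp:
    r = requests.post(f"{DS}/v1/vision/face/register",
                      files={"image": fp}, data={"userid": "John"})
print(r.json())

# Recognize faces in a snapshot; unregistered faces come back as "unknown".
with open("doorbell.jpg", "rb") as fp:
    r = requests.post(f"{DS}/v1/vision/face/recognize",
                      files={"image": fp})
for face in r.json().get("predictions", []):
    print(face["userid"], face["confidence"])
```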
 
The one thing I really hope the BI DeepStack integration will include in the future is the ability to use more than one DS server. I find that one DS server struggles to keep up with several cameras in a busy environment. The multiple-server and refinement-server options in AITools are a godsend for me; otherwise the lag/delay in processing would trigger the cameras at the wrong times.

You can use a load balancer for that.
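For example, something like this nginx fragment (the instance addresses are hypothetical) will round-robin BI's requests across two DS instances; you then point BI at the proxy instead of a single server:

```
# Hypothetical nginx.conf fragment: round-robin two DeepStack instances
upstream deepstack_pool {
    server 192.168.1.10:5000;   # DS instance 1
    server 192.168.1.11:5000;   # DS instance 2
}
server {
    listen 8080;
    location / {
        proxy_pass http://deepstack_pool;
        client_max_body_size 20M;   # allow large snapshot uploads
    }
}
```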
 
@austwhite, I am still running AI Tool 1.67 from the 1st post in this thread. Are you running a newer version from GitHub? Is there a minimum NVIDIA GPU required for DS to work? I have a GTX 670 that I can put into my BI server. Thanks.
 
@austwhite, I am still running AI Tool 1.67 from the 1st post in this thread. Are you running a newer version from GitHub? Is there a minimum NVIDIA GPU required for DS to work? I have a GTX 670 that I can put into my BI server. Thanks.
I am running a GTX 970 in my BI box with the DS GPU version, no problems, but I am just testing one cam right now. My card has 4GB of memory, which I am hoping will be enough... How much memory does your 670 have? I had a GTX 760 card once that had 4GB even though 2GB was the norm; I paid more for it since I was using it as a gaming card back then.
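If you're not sure, nvidia-smi (installed with the NVIDIA driver) will tell you:

```
nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv
```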
 
Is it possible to trigger a Telegram message with a face detection only?

