[tool] [tutorial] Free AI Person Detection for Blue Iris

Tinbum

Pulling my weight
Joined
Sep 5, 2017
Messages
446
Reaction score
126
Location
UK
From appearances, and as a long-time lurker, this topic/thread has slowed down drastically since BI added native Deepstack support. Personally, I have not looked into the native support's capabilities, as BI runs on one system and the AI / Deepstack on another.

Is continued development of the AI Tool being planned in light of the above?
I'm pretty sure it is, as it offers so much more.
 

austwhite

Getting the hang of it
Joined
Jun 7, 2020
Messages
93
Reaction score
92
Location
USA
From appearances, and as a long-time lurker, this topic/thread has slowed down drastically since BI added native Deepstack support. Personally, I have not looked into the native support's capabilities, as BI runs on one system and the AI / Deepstack on another.

Is continued development of the AI Tool being planned in light of the above?
I still use AITool as there is a lot more granular control over things like the percentage size of a detected object and the percentage of confidence per object (in the newer versions).
I have cameras up on the second story due to the way the place is laid out, and AITools can be set to pick up objects that the native BI integration misses.

Even if development on AITools stopped right now and didn't proceed further than where it currently is, you would still have a lot more granular control than the native integration, and it would continue to be usable for a long time.
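The kind of per-object filtering described above can be sketched roughly like this, using the field names from DeepStack's detection response; the thresholds are illustrative, not AI Tool's actual defaults:

```python
def keep_detection(pred, frame_w, frame_h,
                   min_confidence=0.6, min_area_pct=1.0, max_area_pct=50.0):
    """Reject a detection whose confidence or relative size (as a % of the
    frame) falls outside the configured bounds. `pred` uses DeepStack's
    detection fields: label, confidence, x_min, y_min, x_max, y_max."""
    if pred["confidence"] < min_confidence:
        return False
    area = (pred["x_max"] - pred["x_min"]) * (pred["y_max"] - pred["y_min"])
    area_pct = 100.0 * area / (frame_w * frame_h)
    return min_area_pct <= area_pct <= max_area_pct
```

A 160x120 person box in a 640x480 frame covers 6.25% of the image, so it passes the size check, while a tiny 4x4 blob would be dropped.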
 

Firenor

n3wb
Joined
Dec 10, 2020
Messages
15
Reaction score
3
Location
Sweden
I still use AITool as there is a lot more granular control over things like the percentage size of a detected object and the percentage of confidence per object (in the newer versions).
I have cameras up on the second story due to the way the place is laid out, and AITools can be set to pick up objects that the native BI integration misses.

Even if development on AITools stopped right now and didn't proceed further than where it currently is, you would still have a lot more granular control than the native integration, and it would continue to be usable for a long time.
The dealbreaker for me is the ability to run it on a different computer. I think Deepstack on the same machine with lots of cameras will eat a lot of CPU...
 

Tinbum

Pulling my weight
Joined
Sep 5, 2017
Messages
446
Reaction score
126
Location
UK
The dealbreaker for me is the ability to run it on a different computer. I think Deepstack on the same machine with lots of cameras will eat a lot of CPU...
I run Deepstack with lots of cameras on the same machine with no problem. I do use the GPU version. I'd still use AITool, though.
 

Firenor

n3wb
Joined
Dec 10, 2020
Messages
15
Reaction score
3
Location
Sweden
I run Deepstack with lots of cameras on the same machine with no problem. I do use the GPU version. I'd still use AITool, though.
Well, it all depends on the machine, I guess... It's much easier to get away with cheaper machines if I split it up, which means I can get better cameras ;)
 

David L

IPCT Contributor
Joined
Aug 2, 2019
Messages
7,932
Reaction score
20,757
Location
USA
So I have an Nvidia 970 card in my Blue Iris PC (4th Gen Intel); should I run the GPU version of Deepstack? I want to try native Deepstack in BI...
 

austwhite

Getting the hang of it
Joined
Jun 7, 2020
Messages
93
Reaction score
92
Location
USA
The dealbreaker for me is the ability to run it on a different computer. I think Deepstack on the same machine with lots of cameras will eat a lot of CPU...
You can use Deepstack on another machine using Blue Iris integration if you need to, or you can use it on same machine. You just put the IP address of the second machine in BI. BI does seem to use more CPU time at this stage than AITools, but I am sure that will improve over time.
I run Deepstack in a Ubuntu VM using VirtualBox on the same machine BI is running on as the Deepstack Docker image seems to run smoother with less resource usage than the Windows version.
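Whether Deepstack runs in a VM, in Docker, or on a second box, a quick reachability check before pointing BI at its address can save some head-scratching. A minimal sketch; the host address in the usage note is a placeholder, and port 5000 is Deepstack's usual default (yours depends on your Docker port mapping):

```python
import socket

def deepstack_reachable(host: str, port: int = 5000, timeout: float = 2.0) -> bool:
    """Return True if something is accepting TCP connections at host:port,
    e.g. a DeepStack instance running on another machine."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, run `deepstack_reachable("192.168.1.50")` before saving that address in BI's Deepstack settings.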
 

austwhite

Getting the hang of it
Joined
Jun 7, 2020
Messages
93
Reaction score
92
Location
USA
So I have an Nvidia 970 card in my Blue Iris PC (4th Gen Intel); should I run the GPU version of Deepstack? I want to try native Deepstack in BI...
You can try the GPU version of DeepStack. It depends how many cameras you have and how much RAM you have.
Which 4th Gen Intel is it? Is it a Core i5 or i7?
If you have at least 16GB of RAM in your system, then DeepStack on the same machine is worth a try, though if you have more than about 3 cameras and a couple may trigger at once, you will find a Core i5 will definitely lag out.
My primary system runs DS in a VM on Docker, but I have a test system with 3 x 5MP cameras and Deepstack installed locally using the BI integration. It has 16GB of RAM and a Core i7-7700 CPU; when BI is processing AI it will regularly hit 80% CPU usage with the CPU version of DeepStack, and at idle it sits at 3 to 4% CPU usage. It is BI using the CPU time, not Deepstack, as I monitor which programs are using the CPU time. This test system does not have an NVIDIA GPU, so the Deepstack GPU version won't run on it properly. I plan on obtaining an NVIDIA GPU of some kind for it, to test the GPU version of Deepstack and see if there is any difference in object detection accuracy.
 

aralos1999

n3wb
Joined
Dec 6, 2015
Messages
21
Reaction score
19
I still use AITool as there is a lot more granular control over things like the percentage size of a detected object and the percentage of confidence per object (in the newer versions).
I have cameras up on the second story due to the way the place is laid out, and AITools can be set to pick up objects that the native BI integration misses.

Even if development on AITools stopped right now and didn't proceed further than where it currently is, you would still have a lot more granular control than the native integration, and it would continue to be usable for a long time.
In addition to VolronCD's AITool being substantially more granular than the current BI implementation of the Deepstack integration, his AITool can also use other AI engines such as DOODS, Sighthound, and the big gun, AWS Rekognition. VolronCD has also been very responsive to bug fixes and enhancement requests.
 

ChrisX

Getting the hang of it
Joined
Nov 18, 2016
Messages
130
Reaction score
4
It would be great if there were a "face" function where you could upload faces to Deepstack or, more importantly, save unknown faces.
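For what it's worth, Deepstack itself already exposes face endpoints (register and recognize), and faces that match no registered person come back with userid "unknown", so the "saving unknown faces" part could be post-processed from a recognize response. A sketch; the response field names follow Deepstack's documentation, and everything else is illustrative:

```python
def unknown_face_boxes(predictions):
    """From a DeepStack /v1/vision/face/recognize response, return the
    bounding boxes of faces that matched no registered person (DeepStack
    labels these with userid 'unknown'), so the caller can crop and save
    them for later registration."""
    return [(p["x_min"], p["y_min"], p["x_max"], p["y_max"])
            for p in predictions
            if p.get("userid") == "unknown"]
```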
 

spammenotinoz

Getting comfortable
Joined
Apr 4, 2019
Messages
345
Reaction score
274
Location
Sydney
I run Deepstack with lots of cameras on the same machine with no problem. I do use the GPU version. I'd still use AITool, though.
I still use AI Tools for LPR, but have settled on BI native: it starts Deepstack as a service, frees up a ton of memory, and its "ignore stationary objects" works much better for me compared to AI Tool's dynamic masking. To be clear, this wasn't the fault of AI Tools, but a fault in the way Blue Iris sometimes sends a low-quality then a high-quality image (despite constant recording and an ample pre-record buffer).
It would be great if there were a "face" function where you could upload faces to Deepstack or, more importantly, save unknown faces.
If you use the Blue Iris native integration, it's built in, including face detection.
All my custom use cases are now met by BI native. I still have 1 cam on AI Tools out of loyalty.
 

ChrisX

Getting the hang of it
Joined
Nov 18, 2016
Messages
130
Reaction score
4
I think the AI Tool is better. It has much better settings/options, and also Telegram...
 

austwhite

Getting the hang of it
Joined
Jun 7, 2020
Messages
93
Reaction score
92
Location
USA
I still use AI Tools for LPR, but have settled on BI native: it starts Deepstack as a service, frees up a ton of memory, and its "ignore stationary objects" works much better for me compared to AI Tool's dynamic masking. To be clear, this wasn't the fault of AI Tools, but a fault in the way Blue Iris sometimes sends a low-quality then a high-quality image (despite constant recording and an ample pre-record buffer).

If you use the Blue Iris native integration, it's built in, including face detection.
All my custom use cases are now met by BI native. I still have 1 cam on AI Tools out of loyalty.
It's not really built in. You still have to run Deepstack separately. (Just being picky with terminology :) ) That said, BI integration does meet most needs.
If you have specific needs, the AI Tool is still very good. It's nothing to do with loyalty really. It's to do with what meets your needs best.
For me, the BI integration doesn't have the granularity I need, due to where my cameras are and what I need to detect.
I don't use Telegram or any of that stuff, as my external triggers are handled by MQTT from Blue Iris.
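As a sketch of that MQTT side: if BI's alert action is configured to publish a JSON payload built from its substitution macros (&CAM, &TYPE and &MEMO are real BI macros; the exact JSON shape here is just one possible choice), the receiving end only needs to parse it:

```python
import json

def parse_bi_alert(payload: bytes):
    """Parse a Blue Iris MQTT alert payload, assuming the alert action's
    payload field was set to something like:
        {"camera":"&CAM","type":"&TYPE","memo":"&MEMO"}
    Returns (camera short name, alert memo, e.g. 'person:91%')."""
    msg = json.loads(payload.decode("utf-8"))
    return msg["camera"], msg.get("memo", "")
```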
BI integration tends to miss events that AI Tools will grab, but as I said that is really due to my situation and placement of cameras. I think the BI Integration will grab 98% of events for most home users.
AITools kicks butt with LPR though. Plate Recognizer integration in BI is really hit and miss.

I never had memory issues with AITools. I never found it to use much memory on its own, and I limited my JPEG snap folder to keeping images for up to 3 hours to save disk space.

The one thing I really hope the BI Deepstack integration will include in the future is the ability to use more than one DS server. I find that one DS server struggles to keep up with several cameras in a busy environment. The multiple-server and refinement-server options in AITools are a godsend for me; otherwise the lag/delay in processing would trigger the cameras at the wrong times.
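Spreading requests over multiple servers can be as simple as round-robin rotation. A sketch of the idea with placeholder URLs (this is not AI Tool's actual implementation, which also layers refinement servers on top):

```python
from itertools import cycle

class DeepstackPool:
    """Rotate through several DeepStack base URLs so no single instance
    has to absorb every camera's snapshots."""

    def __init__(self, urls):
        self._cycle = cycle(urls)

    def next_url(self):
        return next(self._cycle)

# Placeholder addresses for two DeepStack instances
pool = DeepstackPool(["http://192.168.1.50:5000", "http://192.168.1.51:5000"])
```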
 

spammenotinoz

Getting comfortable
Joined
Apr 4, 2019
Messages
345
Reaction score
274
Location
Sydney
I actually said the "integration" was native, not the AI / Deepstack functionality... :)
But I agree with what you have said; it comes down to use cases. For instance, I am comfortable with BI and direct Deepstack integration as I use constant record, but as you point out, AI Tools will flag a higher rate of events (BI will deliberately skip two events that occur close together). That isn't a problem for me with constant recording, but I would hate to be someone recording on alert only. You need a really long trigger timeout to ensure you get all the footage.
BI can now send motion alerts to mobiles only when people are detected and you are away from home, while still flagging other relevant objects. That was my key use case, as well as daily summaries and third-party integrations/web calls, but BI can do all that now.
I also have BI sending "Critical" iOS alerts when a person is detected between 11pm and 5am. Supporting the iOS critical alert function is new to me.
Another gotcha: you need to select High-Quality Alert Images in BI to get image quality similar to AI Tools.
Funny you mention AI Tools, as the cam I still have on AI_Tools is LPR, but using JPEGs created on alert only. Honestly, though, both have similar detection rates. My cams are 4K; I run a script to trim (not downscale) the image so it meets the plate analyzer's sizing, then upload.
Uploading via a script provides more customization around which API to use, i.e. one when away, one when home. But I was actually able to configure the same with BI (i.e. still run the same script on alert and then upload).
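The trim-not-downscale step boils down to computing a crop rectangle that fits the plate API's size limit while keeping the original pixel density. A sketch using a centered crop; the 4K and limit dimensions are illustrative, and in practice you would crop around the detected plate rather than the centre:

```python
def crop_box(width, height, max_w, max_h):
    """Centered crop rectangle (left, top, right, bottom) that trims a
    frame to at most max_w x max_h without rescaling any pixels."""
    w, h = min(width, max_w), min(height, max_h)
    left = (width - w) // 2
    top = (height - h) // 2
    return (left, top, left + w, top + h)
```

For a 3840x2160 frame and a 1920x1080 limit this yields the central quarter of the image; with Pillow it would be applied as `img.crop(crop_box(*img.size, 1920, 1080))`.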
Perhaps it was just the number of cams, but I did find the dynamic masking consumed a fair amount of CPU, and BI switching stills between the main and sub streams played havoc with dynamic masking and masking in general. Get it working and then, bam, a BI update and it's unreliable again.
The other strange thing is that with AI_Tools I had to run 6 separate GPU instances of Deepstack, whereas with BI I can just use 1 GPU instance. I think it's because BI is sending the lower-res alert images and not full 4K JPEGs, or the Deepstack GPU version has just improved since I first set it up.
What I am doing, though, is finding uses for AI_Tool outside of Blue Iris.
 

austwhite

Getting the hang of it
Joined
Jun 7, 2020
Messages
93
Reaction score
92
Location
USA
@spammenotinoz 100% agreed. Sending high res images to deepstack, especially if using the face recognition API, takes a fair chunk of CPU time, even if you use the GPU version of Deepstack.

I don't send the full high-res image to Deepstack using AITools. My cameras all have three feeds: low res, medium res, and high res. I use the medium-res feed as the "sub stream", and this is the one Blue Iris sends to Deepstack. That said, I did not notice a huge difference in object detection between low res, med res, and high res during experimentation. Where it makes a big difference is facial recognition, which I only do on 3 of my cameras.
I settled on sending the medium-res feed to AI Tools by using it as the substream; Blue Iris, when it takes a snapshot, uses the sub-stream. I don't use the high-res JPG option, except for facial recognition :).
 

David L

IPCT Contributor
Joined
Aug 2, 2019
Messages
7,932
Reaction score
20,757
Location
USA
You can try the GPU version of DeepStack. It depends how many cameras you have and how much RAM you have.
Which 4th Gen Intel is it? Is it a Core i5 or i7?
If you have at least 16GB of RAM in your system, then DeepStack on the same machine is worth a try, though if you have more than about 3 cameras and a couple may trigger at once, you will find a Core i5 will definitely lag out.
My primary system runs DS in a VM on Docker, but I have a test system with 3 x 5MP cameras and Deepstack installed locally using the BI integration. It has 16GB of RAM and a Core i7-7700 CPU; when BI is processing AI it will regularly hit 80% CPU usage with the CPU version of DeepStack, and at idle it sits at 3 to 4% CPU usage. It is BI using the CPU time, not Deepstack, as I monitor which programs are using the CPU time. This test system does not have an NVIDIA GPU, so the Deepstack GPU version won't run on it properly. I plan on obtaining an NVIDIA GPU of some kind for it, to test the GPU version of Deepstack and see if there is any difference in object detection accuracy.
Thanks for the reply. Sorry, I should have mentioned it is an i7 with 32GB of RAM. So I ended up installing the GPU version and I am up and running; right now I have Deepstack on my Hik doorbell cam to test it out. It is working, and I am getting detections...

Appreciate your input...
 

spammenotinoz

Getting comfortable
Joined
Apr 4, 2019
Messages
345
Reaction score
274
Location
Sydney
@spammenotinoz 100% agreed. Sending high res images to deepstack, especially if using the face recognition API, takes a fair chunk of CPU time, even if you use the GPU version of Deepstack.

I don't send the full high-res image to Deepstack using AITools. My cameras all have three feeds: low res, medium res, and high res. I use the medium-res feed as the "sub stream", and this is the one Blue Iris sends to Deepstack. That said, I did not notice a huge difference in object detection between low res, med res, and high res during experimentation. Where it makes a big difference is facial recognition, which I only do on 3 of my cameras.
I settled on sending the medium-res feed to AI Tools by using it as the substream; Blue Iris, when it takes a snapshot, uses the sub-stream. I don't use the high-res JPG option, except for facial recognition :).
How did you get BI to create snapshots via the sub-stream? I couldn't find that setting; the only way I could do it was by setting up clones, which then used the sub-stream as the main feed.
 

austwhite

Getting the hang of it
Joined
Jun 7, 2020
Messages
93
Reaction score
92
Location
USA
How did you get BI to create snapshots via the sub-stream? I couldn't find that setting; the only way I could do it was by setting up clones, which then used the sub-stream as the main feed.
Sorry, bad wording on my part. I do use a clone camera for "sub stream" image captures.
 