[tool] [tutorial] Free AI Person Detection for Blue Iris

That's not even been close to my experience, but like I said, your mileage may vary!
I guess it is somewhat dependent upon the resources available.

The machine I had it on before was a little short on RAM and CPU; I now have BI running on a 4th-gen i7 with more RAM. I'll try installing the Docker version and see how it does. With the additional resources the load might be minimal. Thanks for the advice!
 
I'm running on a 4th-gen i5 with 8 GB RAM and 4 cams. My BI idles between 3-7% CPU and spikes momentarily during DeepStack detections, but that spike was greatly reduced with the latest DeepStack upgrade. Definitely give it another go.
 
Yeah, working on it now. The one thing I noticed with the Windows version of DeepStack is that it pegs the machine so hard during detection that it affects the actual recorded video. Hoping the Docker version will be better.
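For anyone else going the Docker route, the commonly documented way to start the CPU-only DeepStack container looks something like this (the port mapping and datastore volume below are just the usual example values, adjust them to your setup):

docker run -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack

AI Tool is then pointed at the Docker host's IP and the mapped port (80 in this example).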
 
Guys... I'm testing the Amazon Rekognition AI. It does a really nice job with analysis of various objects. I'm hopeful AI Tool will alert on those different things by configuring "Additional Relevant Objects". I have, however, noticed that my trigger is delayed a bit and I'm sometimes missing the object in my alert clip. I'll need to increase my pre-trigger buffer if I stick with Rekognition instead of DeepStack as my AI engine. If anyone else is testing this, what are your results?
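For anyone wondering what the Rekognition side of this looks like, label detection is a single API call. A minimal Python sketch (the region, file name, and thresholds are just example values, and AWS credentials are assumed to already be configured for boto3):

import boto3

# assumes AWS credentials are already configured (e.g. via "aws configure")
client = boto3.client("rekognition", region_name="us-east-1")

with open("snapshot.jpg", "rb") as f:   # the JPEG that BI / AI Tool saved
    response = client.detect_labels(
        Image={"Bytes": f.read()},
        MaxLabels=20,
        MinConfidence=60,
    )

# each label has a Name and a Confidence percentage, e.g. Person, Car, Deer
for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))

Unlike a local DeepStack install, Rekognition is a paid cloud API, so per-image costs are worth keeping in mind.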
 
The Docker version is looking good so far; it's definitely eating fewer resources than beta 3 was.

 
How are people temporarily pausing AI detections? I made a simple exe in Python and PyInstaller that closes AI Tool, starts a counter, then reopens it when the time runs out. It works, but it feels clunky: I have to open RDP on my phone and run the exe on my camera server. I tried running the exe remotely over SSH, but I run into permission issues that I can't get around; Windows just won't let me open a non-service program in another session. My goal is a one-tap shortcut on my home screen: "Pause AI detections for 30 minutes".

I apologize if this has been addressed. Believe it or not, I have read through all 131 pages here, but I simply cannot remember everything that was discussed. I'm running the VorlonCD fork 11/30 build that Village Guy posted recently.
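For reference, the exe really just amounts to something like this minimal sketch (the AI Tool path, process name, and pause length are placeholders):

import subprocess
import time

AITOOL_EXE = r"C:\AITool\AITool.exe"   # assumed install path
PAUSE_MINUTES = 30

# stop the running AI Tool instance (process name is an assumption)
subprocess.run(["taskkill", "/IM", "AITool.exe", "/F"], check=False)

# wait out the pause window
time.sleep(PAUSE_MINUTES * 60)

# relaunch AI Tool so detections resume
subprocess.Popen([AITOOL_EXE])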
 
Set up a profile in BI that disables detections, then send a command to BI from your phone to switch profiles to activate or deactivate it.
I run Tasker on my phone (Android) and have it programmed to turn detection on when my phone is out of range of my WiFi signal and off again when I'm home.
A bit like geofencing, except based on my WiFi signal.
If you simply want to do it manually it is even easier: just send command lines the same way you have AITool send commands to BI.
Needless to say, you will need to open a port on your router to pass through your commands from outside your network.
 
I'm a newbie, so please be patient. I have recently purchased BI and a new Windows PC to run this excellent setup on. I've tried following both of the different YouTube install methods, by "The Hook Up" (using high-def and low-def streams) and "FamilyTechExec" (using two streams on one camera). I can't seem to get either working. In each case I get loads of unwanted clips; I assume my alerts should contain ONLY clips that have been processed by AI and passed as identified movement. The AI seems to work and successfully processes the events, but they don't seem to be going through to BI. I have tested the URL by copying it into the browser and it gives back the correct response.
I have spent maybe ten hours on this already and wondered if there's a guide anywhere else based on the latest AI. I have run it all under Windows but am not averse to setting up Docker/VM (again, I'd need to follow a guide). Any help or advice is appreciated.
Thanks for bearing with me.
 
Most, if not all, of what you need to know is within this forum thread. I understand that it's a hell of a lot to read through, but to give you a starting point, please read my posts over the last few weeks and then we can try to fill in the blanks ;)
 
@Chris Dodge I'm trying out Amazon Rekognition tonight. I have it all set up and working. Is there a way for AI Tool to support the various labels? I'd like to add "Deer" so I get alerts when deer are detected. Is that what "Additional Relevant Objects" is for? Can I just add Deer there and have it trigger recording on that camera? Thanks!

I too would like to know about the above question: can I use my own labels / my own trained model with this? Sorry, I hadn't heard about BI until today.
 
The "Additional Relevant Objects" field in AITool can definitely be used to tell AITool to report objects that DeepStack can detect but that aren't available as one of AITool's checkboxes (DeepStack can detect around 80 object classes, but AITool only has checkboxes for 15 of them). I do not know if/how the entries in this field generalize to other image-analytics software.
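To make that concrete, here is a minimal Python sketch of querying DeepStack's detection endpoint and filtering for labels beyond the built-in checkboxes; the host, port, image name, labels, and confidence threshold are just example values:

import requests

DEEPSTACK_URL = "http://localhost:80/v1/vision/detection"   # example host/port
EXTRA_LABELS = {"horse", "cow"}   # labels DeepStack knows but AITool has no checkbox for

with open("snapshot.jpg", "rb") as f:   # the image AI Tool would send
    r = requests.post(DEEPSTACK_URL, files={"image": f})

# DeepStack returns a list of predictions, each with a label and a confidence (0-1)
for p in r.json().get("predictions", []):
    if p["label"].lower() in EXTRA_LABELS and p["confidence"] > 0.6:
        print("relevant object:", p["label"], p["confidence"])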
 
I just want to follow up on the question about temporarily pausing AI detections, in case somebody else wants to do this.

I made two additional profiles, one for 30 minutes and one for 3 hours, and set the temp time for each to 30 and 180 minutes respectively. The only change in each of these profiles is that for each cam I unchecked the box to dump JPEGs when triggered (there are other ways to do it as well). I then installed an app on my phone called "HTTP Shortcuts" and created 3 shortcuts on my home screen: one for each of the new profiles and one for the default profile in case I want to "cancel" the detection pause request. Note that I don't otherwise use profiles or schedules; I just have profile 1 scheduled all the time, as is the default. You might have to tweak this method further if you already have a schedule that changes profiles periodically.

The http request for profile change is:

http://192.168.0.xx:xx/admin?user=user&pw=password&profile=x (replace with your own IP, port, user, password, and profile number)

No need for port forwarding as long as my phone is connected to home wifi or my VPN server.

It works exactly as desired, one single tap on my home screen and AI detections are paused until time runs out or I manually cancel the pause.
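If you would rather script the same request than use the HTTP Shortcuts app (from Tasker, a scheduled task, etc.), it is only a couple of lines of Python; the address, credentials, and profile number below are placeholders:

import requests

BI_HOST = "http://192.168.0.xx:xx"   # your Blue Iris web server address and port
PROFILE = 2                          # the temporary "paused" profile number

requests.get(BI_HOST + "/admin",
             params={"user": "user", "pw": "password", "profile": PROFILE})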
 
You're welcome!
 
Has anybody else noticed that the x5-beta version of DeepStack, while faster, is not as good as the x3 version when it comes to the quality of detections? I'm getting more phantom people detected in my images than I used to. I also notice that the people who are detected come with much lower confidence percentages: a person clearly standing in view looking at the camera might still only come back as "65%", whereas on x3-beta it would definitely be 99% or 100%.

It hasn't grossly affected the function of my system yet, but I do get more false positives here and there.
 
I'm seeing this also. I had a potted plant detected as a human, and confidence percentages on humans are definitely lower.
 
Funny, I'm actually seeing the exact same false label. x3 always correctly labelled a "Potted Plant" on my doorstep; x5 detects the same thing as a "Person". Frustrating, because it's not in an area I can mask without literally masking half my doorstep.

I might try changing the mode to "High" when I get a chance, although I'm not sure how to do that with the Docker version of DeepStack.
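For the Docker build, DeepStack's documented performance modes are set with an environment variable when the container is started, so as far as I understand it the change would look something like:

docker run -e VISION-DETECTION=True -e MODE=High -v localstorage:/datastore -p 80:5000 deepquestai/deepstack

In other words, you recreate the container with MODE=High rather than changing it on a running instance.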
 
Exact same thing: x3 always identified the plant correctly; x5 sometimes gets it right, sometimes doesn't.
 
I encourage everyone to try AWS Rekognition for image analysis. It's working very well for me.
 
Can you elaborate on your setup? Are you using AI Tool to send images to Rekognition for analysis?

