[tool] [tutorial] Free AI Person Detection for Blue Iris

Been watching this thread for a good while now. Thanks for all of the great help that's been provided. On to my problem...

I started with 1.65 and last week switched to 2.0.6x (upgraded to 2.0.759 today). Got it all up and running but I have a weird issue that didn't seem to pop up until after the upgrade, although I don't see how AITool is the culprit. I'm using Blue Iris and have it save photos for AITool to examine in an AIInput folder (as is common I believe). I have BI set to purge files in that folder older than 7 days. Despite that, the images are deleted every 3 to 4 minutes or so. AITool still works, but I have no visual history for AITool. I've attached a screenshot of the settings for that folder in BI. Anyone have any suggestions on something I'm missing or seen something similar? Thank you.
View attachment 83791

I forgot to mention: check the folder properties to see whether the folder has hit the 1 GB size limit you set in BI.

To limit wear on my SSD, I have a 3 GB RAM drive configured. It's faster than the surveillance drive and an option if you have plenty of RAM in your system. I use Radeon RAMDisk, which is free for RAM drives up to 4 GB. My AiInput folder is on the RAM drive. On a reboot, the RAM drive image is saved to the surveillance drive and then reloaded into RAM as the OS comes back up. With the RAM drive, check the "DO NOT monitor free space (for some NAS)" box, because BI will otherwise keep warning you that the free space is less than what you have allocated.
 
I'll keep the RAMDisk option in mind. Not a bad idea. I checked the folder size to see if it hit the 1 GB limit and it was only 100 MB. Despite that, I changed the limit to 14 GB just for kicks and then forgot about it. I came back an hour later and, sure enough, there were tons of images still there (about 1.25 GB). I then dropped the cap down to 2 GB in BI, and the folder trimmed itself down to just under 1 GB of images. Not sure what's causing the discrepancy, but it's doing the job for now. Thanks for your help!
 
Most recent VorlonCD AITool build as of 3/2/21. It's running fine from my compile of his recent updates.

View attachment 83849
View attachment 83851
This version allows you to set the detection threshold for each object. You can enter the same object multiple times to use different thresholds for day and night, or to detect an object only during certain hours.
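The per-object rules described above can be sketched in code. This is only an illustration of the idea, not AITool's internals: all names, the rule values, and the time-window handling are my own assumptions about how per-object thresholds with day/night hours could work against a DeepStack-style detection list.

```python
# Sketch only (not AITool's implementation): per-object confidence
# thresholds, each active only during a time window, applied to a
# DeepStack-style list of detections.
from datetime import time

# (object label, min confidence, active-from, active-until) - e.g. a lower
# "person" threshold at night, and "car" only during the day. Example values.
RULES = [
    ("person", 0.60, time(6, 0), time(22, 0)),
    ("person", 0.45, time(22, 0), time(6, 0)),
    ("car", 0.70, time(8, 0), time(18, 0)),
]

def in_window(now, start, end):
    """True if `now` falls in [start, end), handling overnight windows."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end  # window wraps past midnight

def relevant(detections, now):
    """Keep detections that satisfy at least one currently active rule."""
    keep = []
    for d in detections:
        for label, conf, start, end in RULES:
            if d["label"] == label and d["confidence"] >= conf and in_window(now, start, end):
                keep.append(d)
                break
    return keep
```

With the example rules, a 50%-confidence person is ignored at noon (daytime threshold 60%) but kept at 11 PM (night threshold 45%).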

 
Is it better to use the Windows version of DeepStack for version 2, or the Docker version? I can't get the GPU version of Docker working anymore. Also, is there no GPU version for Windows?
 
I'm running two DeepStack instances (Windows, and a Docker container on my QNAP) along with AWS Rekognition. AITool load-balances between them, and it works well for me.
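The load-balancing idea can be sketched as a simple rotation across endpoints. This is only an illustration of the concept (AITool's real scheduler may work differently), and the endpoint URLs are placeholders:

```python
# Sketch of the idea only: rotate requests across several DeepStack
# endpoints so no single instance becomes a bottleneck.
from itertools import cycle

class RoundRobin:
    def __init__(self, endpoints):
        self._cycle = cycle(endpoints)

    def next_endpoint(self):
        """Return the endpoint the next image should be sent to."""
        return next(self._cycle)

# Placeholder URLs for a Windows instance and a Docker instance on a NAS.
servers = RoundRobin([
    "http://127.0.0.1:8090/v1/vision/detection",
    "http://192.168.1.20:8091/v1/vision/detection",
])
```

Each incoming snapshot would then be POSTed to `servers.next_endpoint()`, cycling through the instances in turn.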

 
Does deepquestai/deepstack:cpu-2021.02.1 work on an AMD Ryzen 9, or do you still have to use deepquestai/deepstack:noavx? And if so, how do you get the latest 2021.02.1 version with noavx?
 
So for CPU, you can now create multiple DeepStack instances in Docker, for example, and run them in parallel on a fast multi-core CPU?

I run 5 instances with four 4 MP cameras on an i7-6700. The max queued images AITool shows is 2. Processor utilization gets up to about 45% if I trigger all 4 cameras at once; at idle, CPU usage is 11%. You just have to change the host port and container name to run more Docker containers. Run this command from PowerShell to create each container:

docker run --restart=always -e MODE=High -e VISION-DETECTION=True -v localstorage:/datastore -p 8090:5000 --name deepstack0 deepquestai/deepstack:latest

docker run --restart=always -e MODE=High -e VISION-DETECTION=True -v localstorage:/datastore -p 8091:5000 --name deepstack1 deepquestai/deepstack:latest
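Since only the host port and container name change between instances, the commands can be generated rather than typed out. This little helper is my own sketch (not part of AITool); it just prints one `docker run` line per instance, mapping host port 8090 + i to the container's port 5000:

```python
# Sketch: print the "docker run" command for each of N parallel DeepStack
# CPU instances; only the host port and container name differ between them.
TEMPLATE = (
    "docker run --restart=always -e MODE=High -e VISION-DETECTION=True "
    "-v localstorage:/datastore -p {port}:5000 "
    "--name deepstack{i} deepquestai/deepstack:latest"
)

def instance_commands(n, base_port=8090):
    """Return the docker run command for each of n instances."""
    return [TEMPLATE.format(port=base_port + i, i=i) for i in range(n)]

for cmd in instance_commands(5):
    print(cmd)
```

You would then point AITool at ports 8090 through 8094 so it can queue images across all five containers.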
 
This feels like it would be a FAQ, but at least my searches came up short (maybe wrong keywords), so I'll ask now: is there a way to suppress the motion-capture JPG snapshots from showing up as alerts while still passing them off to AI Tool for processing?

That is, I followed the steps at the beginning of this thread and successfully got AI Tool 1.67 working with a Deepstack running in Docker. It is very good at determining people and so far I haven't had any false positives. All alert videos in my BI feed have been those triggered by AI Tool and they've all had people in them. Sweet!

But it's very difficult to even see those particular alerts since the entire feed is flooded with the raw motion capture JPGs. I'm seeing 1,135 clips with 323 triggers in just the past few days. They show up in the iPhone app like "Snapshot (390K) jpg DeepQuestAI".

I can (and do) just scroll down until I see a video, since those will always be from AI Tool... but I'd really rather not wade through scores of clips just to find the rare video (since 90% of triggers aren't people at all).

So back to my initial question -- is there a way to have BI still trigger on motion and push the screen captures to the aux directory for AI Tool but not display them as an alert at all, so all I see are the triggered videos?
 
@granroth There is an option on the Record tab in Blue Iris to include JPEGs in all clips; with that unchecked, I don't see any. There is also a setting on the Trigger tab, "Add to alerts list", which I have set to none.

I'm also using the "send MQTT image" option in AITool as a way to show the latest positive trigger in Home Assistant.

Set up an MQTT camera in HA:

camera:
  - platform: mqtt
    topic: ai/[cameraname]/image
    name: motion_detected_snapshot

Then, in the automation/notification message, just add the image:

data:
  image: /api/camera_proxy/camera.motion_detected_snapshot

Now all I need to do is fix the way the snapshot is displayed, as the top and bottom of the image get clipped off.
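For reference, the message AITool publishes for the setup above can be thought of as a (topic, payload) pair. This sketch only illustrates that shape: the topic layout matches the HA camera config ("ai/[cameraname]/image"), but the function name and the JPEG magic-byte check are my own additions, not AITool internals.

```python
# Illustrative only: build the per-camera MQTT topic and raw-JPEG payload
# for a positive-trigger snapshot, rejecting data that isn't a JPEG
# (JPEG files start with the bytes FF D8).
def mqtt_image_message(camera, jpeg_bytes):
    """Return the (topic, payload) pair for a positive-trigger snapshot."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):
        raise ValueError("payload does not look like a JPEG")
    return f"ai/{camera}/image", jpeg_bytes
```

The HA MQTT camera subscribed to that topic then treats each published payload as the latest still image.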
 
There is an easier way to do this. Set up the Blue Iris integration through HACS. I have a blueprint listed on the HA forums that works via an input boolean. When AITool gets a positive result, it sends an MQTT message. I have Node-RED take the message and turn on the input boolean. When the boolean is activated, HA takes a snapshot of the camera and sends a notification to the app with an image. If I tap the notification, it takes me directly to the camera feeds.

It's near instantaneous. The snapshot from HA is nearly identical to the Blue Iris/AITool snapshot. There's also a feature for a persistent notification: even if your phone is on silent, it will make a sound on your phone and watch. A good use for a doorbell or a camera location where movement probably requires review.
 
I did try to get that integration working a few days ago but couldn't make it work.

I've learned a lot about HA since then, so I'm giving it another go now.