[tool] [tutorial] Free AI Person Detection for Blue Iris

So from what I gather, my options are:

1) Run as-is, with BI on a standalone Win10 PC with QuickSync and Deepstack on a powerful but busy Unraid server
2) Run Deepstack in a Docker container on the same standalone Win10 PC as Blue Iris (I gather this is the best way to run it on Win10)
3) Run Deepstack on one of those AI sticks I saw people playing with earlier in the thread. This is the option I am least comfortable with, but if it's effective enough I could be convinced to make the switch...

I would recommend option 2 on the basis that everything is running locally. Personally I like to keep things as simple as possible. If your CPU becomes overloaded then that is a whole new ball game!

Good to know you got the Clone configuration working.
 
Thanks for the continued help @Village Guy !

Is there a best practice for getting Docker up and running on Windows 10? I remember from my read-through that people were having a LOT of issues setting it up. Any good tutorial out there you'd recommend?
 

Sorry, I can't point you to a specific tutorial, but I can recommend setting up to use the WSL 2 based engine, as it is more efficient.

Aside from that, I have noted below the commands that I personally use to download and start Deepstack; I suspect you will already be familiar with this information.

docker pull deepquestai/deepstack:latest

docker run --restart=always -e MODE=High -e VISION-DETECTION=True -v localstorage:/datastore -p 82:5000 deepquestai/deepstack:latest
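
If it helps, once the container is up you can sanity-check the detection endpoint before pointing AI Tool at it. A quick test, assuming the host port mapping above (82) and a test.jpg in the current folder; use curl.exe in PowerShell so it doesn't hit the Invoke-WebRequest alias:

curl.exe -X POST -F "image=@test.jpg" http://localhost:82/v1/vision/detection    # should return JSON listing detected objects with confidences

If that responds, AI Tool should have no trouble reaching the same URL.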
 
I recently installed Chris's fork over my pre-existing AI Tools. Other than some minor config tweaks, it was a smooth process and everything works just fine. Except for one thing: MQTT.

I've configured my cameras in AI Tools to send MQTT alerts, except it seems there's an issue communicating with my MQTT server. Here's the relevant log entry from when I hit the MQTT Test button for my Deck camera:



AI Tools is running on the same Windows box as BI, and BI has zero issue sending MQTT commands to exactly the same MQTT server. It's all on the same network (my Mosquitto container is on the same box as my Deepstack container). I'm using the usual port 1883 and do not require login credentials or TLS with my MQTT server. I installed MQTT Explorer on the same box as AI Tools / BI and it also has zero issue communicating with the MQTT server. So I am truly baffled, and the error message is detailed but not helpful, at least to me.

Can anyone offer some debug advice?
I had the same issue. I do not require a password either. I just populated the fields with bogus credentials and it worked.

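If anyone else runs into this, it may also be worth confirming from the AI Tool box that the broker really does accept anonymous connections, independent of AI Tool. A rough check with the Mosquitto command-line clients; the broker address and topic below are just placeholders:

mosquitto_sub -h 192.168.1.10 -p 1883 -t aitool/test    # leave this running in one window
mosquitto_pub -h 192.168.1.10 -p 1883 -t aitool/test -m hello    # run in another; "hello" should appear in the first

If it only works after adding credentials, check the broker config; recent Mosquitto (2.x) defaults to allow_anonymous false unless you set it otherwise.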
 
Sorry, I can't point you to a specific tutorial, but I can recommend setting up to use the WSL 2 based engine, as it is more efficient.

Aside from that, I have noted below the commands that I personally use to download and start Deepstack; I suspect you will already be familiar with this information.

docker pull deepquestai/deepstack:latest

docker run --restart=always -e MODE=High -e VISION-DETECTION=True -v localstorage:/datastore -p 82:5000 deepquestai/deepstack:latest

I agree with Village Guy about keeping things local. I am running Docker on Windows with WSL2 on an i7-6700, with three 4MP cameras and one 2MP camera recording 24/7. With heavy motion from all cameras my CPU gets to about 40%. I have 5 DeepStack instances in Docker, and like Village Guy I am using the CPU version of DeepStack.

I found a YouTube video for configuring WSL2 but can't find the specific one again. You don't need to install any Linux distribution after WSL2 is configured; the video I watched went on to install Linux after WSL2, so stop once WSL2 is set up. I then found directions for installing Docker Desktop for Windows. After that is installed, pull DeepStack from Docker and run the command that Village Guy posted from PowerShell.

My command is slightly different. Notice I have a name for each instance: the one below is deepstack0, the next is deepstack1, and so on. Each instance also needs a unique host port: the one below uses 8090, the next 8091, and so on.

docker run --restart=always -e MODE=High -e VISION-DETECTION=True -v localstorage:/datastore -p 8090:5000 --name deepstack0 deepquestai/deepstack:latest
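
If you end up running several instances, a small PowerShell loop saves some retyping. Just a sketch of the idea, using the same naming and port scheme as above (adjust the instance count to suit):

for ($i = 0; $i -lt 5; $i++) {
    $port = 8090 + $i
    # -d runs each container detached so the loop doesn't sit waiting on the first one
    docker run -d --restart=always -e MODE=High -e VISION-DETECTION=True -v localstorage:/datastore -p "${port}:5000" --name "deepstack$i" deepquestai/deepstack:latest
}

Then add each port as its own DeepStack URL in AiTool.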
 
I'm running two Deepstack instances (one on Windows and one in a Docker container on my QNAP) along with AWS Rekognition... AITool load balances between them and it works well for me.

Do you just add another docker container running Deepstack locally on the same machine as Blue Iris? Do you mind listing out the steps briefly so I can get an idea of how to set it up properly for load balancing? Thanks!

I'm running on an i7-7700K if that helps.
 
Has anyone experienced the issue where the Alerts pane gets filled up with multiple triggers from AI? I have Blue Iris set to send an HD image every 1 second. Say one event triggers 4 images in 4 seconds and the AI marks all 4 images as legitimate; I then see all 4 alerts in Blue Iris when the main camera is triggered.


Any help is appreciated. Thank you!
 
Okay, so I have found a new use for AI Tool. I already use it to:
a) Flag my BI clips (I record continuously), to quickly review footage
b) Send Telegram alerts when people are detected while I am away
c) On object detection, trigger a clone of my overview cam (this creates a daily view) for my playback
The new use, which I am still tinkering with, is as follows:
When AI Tool detects a car on my LPR cam (had to zoom out a little, but not much :(), it triggers a clone in BI configured with Plate Recognizer's free tier.
It actually seems to be working well; AI Tool's dynamic masking keeps the calls low enough that I can stay under the free limit. Without it, parked cars were sending me over the limit quickly.

I did try firing the LPR cam only when the overview cam detected a car (it worked fine, but I found I was making too many calls to Plate Recognizer).
There is probably a more efficient way, but I'm just having a tinker with LPR.

I'm using an older HDW5831R-ZE Pro Series configured in 4MP mode. This camera has served me well for many years, but it just can't compete with newer low-light models.
In the daytime it's brilliant. It is varifocal, and as the plates are quite reflective, I find that once dialed in it's actually quite decent for LPR.

Until I played with LPR I didn't realise how many cars either don't have plates or have unreadable ones (covered with dirt, blocked, or missing letters).
Just when you think it's dialed in at night, along come the cars with the ultra-bright LED trick strips. It's not like I live in the country.
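
For anyone wanting to try the same thing, the Plate Recognizer free tier is just an HTTP endpoint, so you can test it by hand before wiring up the clone cam. A rough example only; the token and snapshot filename below are placeholders:

curl.exe -F "upload=@snapshot.jpg" -H "Authorization: Token YOUR_API_TOKEN" https://api.platerecognizer.com/v1/plate-reader/    # returns JSON with plate text and confidence

The confidence score in the response is handy when deciding how hard to filter before burning through the monthly free calls.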
 
Has anyone experienced the issue where the Alerts pane gets filled up with multiple triggers from AI? I have Blue Iris set to send an HD image every 1 second. Say one event triggers 4 images in 4 seconds and the AI marks all 4 images as legitimate; I then see all 4 alerts in Blue Iris when the main camera is triggered.


Any help is appreciated. Thank you!
It has been a while since I set it up, but I would recommend:
a) Using the dynamic mask feature
b) Having a cooldown of at least 10 sec under AI Tool -> Camera -> Actions (10 s could be the default; perhaps you changed it)
Although I don't recall it being directly related, in BI -> Camera -> Triggers set a reasonable break time, such as 5-10 seconds, not too small.
 

Thank you, I have implemented the changes you suggested and it seems to be working well for now!
 
I have finally got DeepStack installed in Docker on Windows using WSL2, running 2 instances at the moment. I have been seeing some crazy CPU usage when they are both in use, anywhere from 20% all the way up to 99% while the DeepStack instances are working. Here is what I see in one of the instances:

[screenshot]

This is what happens when both DeepStack instances are running at the same time. 70.4% usage.
[screenshot]

Is this normal? Is it something I'm setting up incorrectly? Here is my machine's setup:

i7-7700K
16 GB RAM

Thanks in advance!
 
Okay, so I have found a new use for AI Tool. I already use it to:
a) Flag my BI clips (I record continuously), to quickly review footage
b) Send Telegram alerts when people are detected while I am away
c) On object detection, trigger a clone of my overview cam (this creates a daily view) for my playback
The new use, which I am still tinkering with, is as follows:
When AI Tool detects a car on my LPR cam (had to zoom out a little, but not much :(), it triggers a clone in BI configured with Plate Recognizer's free tier.
It actually seems to be working well; AI Tool's dynamic masking keeps the calls low enough that I can stay under the free limit. Without it, parked cars were sending me over the limit quickly.

I did try firing the LPR cam only when the overview cam detected a car (it worked fine, but I found I was making too many calls to Plate Recognizer).
There is probably a more efficient way, but I'm just having a tinker with LPR.

I'm using an older HDW5831R-ZE Pro Series configured in 4MP mode. This camera has served me well for many years, but it just can't compete with newer low-light models.
In the daytime it's brilliant. It is varifocal, and as the plates are quite reflective, I find that once dialed in it's actually quite decent for LPR.

Until I played with LPR I didn't realise how many cars either don't have plates or have unreadable ones (covered with dirt, blocked, or missing letters).
Just when you think it's dialed in at night, along come the cars with the ultra-bright LED trick strips. It's not like I live in the country.

Yeah, I am experimenting with something similar, but using Node Red running on Home Assistant to trigger (based on MQTT messages from AITool) and process the still images flagged as either "car" or "truck". That way I can really tune the number of API calls to LPR based on time between clips and overall confidence.

Still plugging away on this but I seem to be getting closer. Fun stuff!
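
If anyone wants to poke at the same idea, one simple way to see exactly what AITool is publishing before building the Node Red flow is to subscribe to the broker from the command line. The broker address and topic below are placeholders; use whatever you configured in AI Tool's MQTT action:

mosquitto_sub -h 192.168.1.10 -p 1883 -t "aitool/#" -v    # -v prints the topic along with each payload

Once you can see the car/truck messages arriving, the Node Red mqtt-in node just needs the same broker and topic.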
 
I have finally got DeepStack installed in Docker on Windows using WSL2, running 2 instances at the moment. I have been seeing some crazy CPU usage when they are both in use, anywhere from 20% all the way up to 99% while the DeepStack instances are working. Here is what I see in one of the instances:

[screenshot]

This is what happens when both DeepStack instances are running at the same time. 70.4% usage.
[screenshot]

Is this normal? Is it something I'm setting up incorrectly? Here is my machine's setup:

i7-7700K
16 GB RAM

Thanks in advance!

This was my first WSL2 and Docker setup, so I don't know whether something is misconfigured on your end. I would stop the DeepStack instances and see what the CPU usage drops to; that way you know whether it is Docker or DeepStack. If Docker is still high, you can probably find some good resources for troubleshooting high Docker CPU usage. It does look odd that Docker is reporting 241% CPU usage.

I have an i7-6700 with 32 GB of RAM. I have 5 instances of DeepStack running, and at idle I am using a lot less CPU than you show. Docker reports less CPU usage than Vmmem shows in Task Manager.

[screenshot]
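
If it helps narrow things down, docker stats shows per-container CPU and memory, so you can tell whether one instance or the WSL2 VM itself is the hog. A couple of commands worth trying (the container names here assume the deepstack0/deepstack1 naming from earlier, so adjust to yours):

docker stats --no-stream    # one-shot snapshot of CPU/memory per container
docker stop deepstack1    # stop one instance and watch whether Vmmem drops
docker update --cpus="2" deepstack0    # optionally cap how many cores a container may use

Keep in mind docker stats reports CPU relative to a single core, so figures over 100% are possible on a multi-core machine.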

Below is what it looks like when I trigger 4 cameras 2 times each. That would be 16 images generated in 8 seconds. BI image quality is set to 50% in the daytime. Image size is currently about 512KB for each of the 16 images.

[screenshot]

This shows AI Tool configured to use all 5 instances. The last-used times are 6 seconds apart.

[screenshot]

Below are my processing times using the CPU version of DeepStack in High mode.

[screenshot]
 
Hi all, I seem to be having a weird bug, or maybe it's by design...

I want to alert on cars coming down the alley, so I want to detect cars on that camera, but not in my carport.

I just can't seem to customise the detections per camera. If I disable cars on one camera it says it updated one camera, but all of them change; and the same in reverse, enabling it enables it on all of them.
AItool version 2.0.846.7731 built 3/3/21
 

There seems to be a bug that has crept in, which I have been discussing with VorlonCD.
 
Yep, that's better!

Next question... How do I use dynamic masking? I want to ignore the car parked in my garage, except when a person has been seen in the last 5 minutes. What's happening at the moment is that I walk into the garage and hop in my car, which triggers the camera to record, but once I'm in the car there are no more triggers, so I don't get the video of the car pulling out.