[tool] [tutorial] Free AI Person Detection for Blue Iris

Okay, so I had to redo my system with a fresh Windows install. I managed to get everything up and running again, and am getting notifications via Gmail. But two things aren't happening:

1. I'm not getting the alert image in BI. I.e., the shot below doesn't show my alerts after I turned on the AI cameras. But I am getting the emails.

[screenshot attachment]

2. No alerts come up in BI's android app either.


Settings are below for AI Cams and regular cams:

[screenshot attachments]

AI Camera:

[screenshot attachments]

I have not set up Alerts in the AI camera, since I believe AITools only triggers the normal FrontDoor camera, right?

[screenshot attachments]

Must have gotten something mixed up between the two cameras somewhere?
 
Then how do I get notified when a car pulls into my driveway?

What I've done is: in AITools I disable the car object; then on the main cam I also have a motion alert set up, but only in the area towards where the back/front of the car would be when it pulls in, so that motion then triggers the cam as well.
 
Is there a way to automatically start the DeepStack Windows version?
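One hedged option is a Windows scheduled task that launches the server at logon. This is only a sketch: the install path below is an assumption, so check where your DeepStack server executable actually lives before registering the task.

```
schtasks /Create /TN "DeepStack" /SC ONLOGON /RL HIGHEST ^
  /TR "C:\DeepStack\server\server.exe"
```

Running the task with highest privileges avoids a UAC prompt at logon; `schtasks /Delete /TN "DeepStack"` removes it again.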
 
@pbc What happens if, in the Trigger tab on the main cam, you enable capture image, and disable it on the clone AI cam?

Will try that. I've said it before, but this is a confusing passage:

2.5 Store alert images in 'Input Path'
Now go to Record, check 'JPEG snapshot each (mm:ss)', select the folder you created in step 2.1, check the box 'Only when triggered' and set the interval to e.g. 0:05.0 (one image every 5 seconds). Furthermore, you might want to disable 'Create Alert list images when triggered', because otherwise a lot of false-alarm images (remember we set the motion detection to be very sensitive) will be stored in your alerts folder.
Now go to 'Trigger', check 'Capture an alert list image' and set the break time 'End trigger unless retriggered' to e.g. 4s, so that a short alert only causes one image to analyze. If you think that the AI software might overlook an object "on first sight" because it's only partly visible (which is usually no problem at all for the AI software), you can also make the break time longer than the 5s interval. In this case, multiple images will be analyzed by the AI software.


Because there is no 'Create Alert list images when triggered' option in the Record tab to disable; there is only 'Capture an alert list image' in the Trigger section, which the above implies should stay on in the AI clone camera.

Edit: So if I check 'Capture an alert list image' only in the FrontDoor camera and not in the AIFrontDoor camera, no image gets processed by AITools. So it's definitely correct to check that in the AI camera.

Still getting no alerts in my BI Android app though. Also not getting any images showing in the Alerts section of BI.

But I do get the images emailed and they show up in AITools.
 
RePost: Can someone help me?

I have the following issue:

Starting DeepStack with VISION-DETECTION=True gives me the following error:
[02.07.2020, 17:25:57.546]: (1/6) Uploading image to DeepQuestAI Server
[02.07.2020, 17:25:58.043]: Cleaning cameras/history.csv if neccessary...
[02.07.2020, 17:25:59.807]: System.Net.Http.HttpRequestException | An error occurred while sending the request. (code: -2146233079 )
[02.07.2020, 17:25:59.816]: ERROR: Processing the following image 'D:\BlueIris\aiinput/IPC08.20200702_172557513.jpg' failed. Can't reach DeepQuestAI Server at
.

When I visit the web page, I can see DeepStack is running and activated.

When I start DeepStack with VISION-SCENE=True or VISION-FACE=True, I get this error:
[02.07.2020, 17:21:26.236]: (1/6) Uploading image to DeepQuestAI Server
[02.07.2020, 17:21:26.249]: (2/6) Waiting for results
[02.07.2020, 17:21:26.257]: (3/6) Processing results:
[02.07.2020, 17:21:26.265]: System.NullReferenceException | Object reference not set to an instance of an object. (code: -2147467261 )
[02.07.2020, 17:21:26.273]: ERROR: Processing the following image 'D:\BlueIris\aiinput/IPC08.20200702_172126168.jpg' failed. Failure in AI Tool processing the image.

This seems logical to me, since it needs VISION-DETECTION, but it shows that in this case DeepStack can be reached.

This is how I start DeepStack (Debian 10 Linux in a VM under Windows 10):
sudo docker run -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack

I already tried the Windows version of DeepStack on Windows 10: same problem after selecting the DETECTION checkbox. I also tried to install DeepStack in Docker on a Synology NAS; when entering the environment variable VISION-DETECTION=True, same problem. Without this variable the server can be reached, but I get the processing error.

Please Help.

PiTA
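For what it's worth, one way to separate "server down" from "endpoint not activated" is to hit the REST endpoint AITools calls directly. The `deepstack_url` helper below is hypothetical (my own shorthand, not part of any tool), but the `/v1/vision/*` paths are DeepStack's documented API:

```shell
# Build the DeepStack endpoint URL for a given mode (detection | scene | face).
deepstack_url() {
  local host=$1 port=$2 mode=$3
  echo "http://${host}:${port}/v1/vision/${mode}"
}

# With a server running, POST a test image straight to the endpoint:
#   curl -s -X POST -F image=@test.jpg "$(deepstack_url localhost 80 detection)"
# A working detection endpoint returns JSON with "success": true.
```

If the curl to `/v1/vision/detection` hangs or errors while the web page loads fine, the detection module itself is the problem rather than basic connectivity.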
 
I have also attempted ports 83 and 5000, on both Linux and the Windows executable. I opened all ports in iptables and added a Windows firewall rule. No success.
 
Have you tried the noavx version of the DeepStack Docker image? Your CPU may not support AVX. Try the following commands:

sudo docker pull deepquestai/deepstack:noavx

sudo docker run -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack:noavx
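The AVX guess can be checked before pulling anything. This sketch reads the CPU flags and picks the matching image tag; the `deepstack_tag` helper is my own shorthand, not part of Docker or DeepStack:

```shell
# Pick the DeepStack image tag for this CPU: the standard image needs AVX,
# older CPUs need the :noavx build.
deepstack_tag() {
  local cpuinfo=${1:-/proc/cpuinfo}
  if grep -qw avx "$cpuinfo"; then
    echo "deepquestai/deepstack"
  else
    echo "deepquestai/deepstack:noavx"
  fi
}

# On the Docker host:
#   sudo docker pull "$(deepstack_tag)"
```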
 
OMG that seemed to work in my VM running Debian 10. Thank you so much!
 
Before I put a lot of time into this I thought I would throw this out to the group and see if this has already been sorted or if there is a better suggestion.

The issue I'm having is that something is triggering my cameras, and because there are normally cars in two of my cameras' views, they always trigger. Now I could take the car trigger out of these cameras, but I want them to trigger when cars come down the road or up the driveway.

This got me thinking: Blue Iris can place a rectangle around the moving object that triggered the alert. If I read that picture, kept the rest of the picture transparent except for the movement, and passed that on to DeepStack (AITools), it should stop all these false positives.

Has anyone headed down this track before?
 
Others who had this problem usually set up another camera clone and mask the area where the cars are usually parked. That clone would be set to only trigger on vehicles, and the other clone would be unmasked and set to only trigger on people. That way you get alerts for cars outside the masked areas and alerts for people anywhere.
 
Went back a few pages, but I have this set up with a camera on the substream that takes the pictures and sends them over to AITools to process, which then triggers my main-stream camera. I saw that with the newest version of Blue Iris you can combine the main stream and substream into one camera. Is it possible, with the single-camera setup, to still use the substream to capture images for processing and have the main stream record when triggered?
 
That would work if you have a large or long driveway. I can't do this, as my front driveway is only slightly longer than my car, so masking the area wouldn't detect a car. That's why I have motion detection enabled on the main camera to detect movement where the front/back of the car ends up when parked.
 
Given that this is AI, will it become more accurate over time? Also, what controls the speed of triggering a camera? By the time the camera is triggered, the car or person is halfway across the camera's view.
 

The trigger speed really depends on the processing time it takes for AITools to get the image, pass it to DeepStack, and get a reply back. For me that is around 3.5 seconds, so in theory there is a minimum of 3.5 seconds before the cam gets triggered. I therefore have my pre-trigger recording time in BI set to 7 seconds, so that the cam, when triggered, also records the previous 7 seconds of video.
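The sizing rule above is simple arithmetic: the pre-trigger buffer should comfortably exceed the AI round-trip latency. The 2x factor here matches the poster's 3.5s-to-7s choice; it is a rule of thumb, not a fixed requirement:

```shell
# Pre-trigger buffer sizing: measured AI round trip times a 2x safety factor.
latency=3.5                          # seconds, AITools -> DeepStack -> reply
prebuffer=$(awk -v l="$latency" 'BEGIN{print l*2}')
echo "$prebuffer"                    # 7
```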
 
Thanks for building this product and tutorial. I got it all set up tonight, split between a Windows VM running BI and AITools and a Docker instance of the alternate version of DeepStack. I can reach the web server on port 90 from the BI machine or anywhere on my network, but sending the image to the server always results in a "can't reach the server" message in the log. I made sure to turn off firewalls and ensured routing was good between the machines... Can't figure out what it could be.

Will keep trying tomorrow.

UPDATE: SUCCESS! 4 AM, but I got it working. Most likely I had the environment variable for the Docker container set to VISION-SCENE=True instead of VISION-DETECTION=True.

In case it helps anyone, here is my docker-compose section for deepstack:

Code:
  deepstack:
    container_name: deepstack
    image: deepquestai/deepstack:noavx
    volumes:
      - [yourlocalpathhere]:/datastore
      - /etc/localtime:/etc/localtime:ro
    environment:
      - TZ=America/Los_Angeles
      - VISION-DETECTION=True
    ports:
      - 5000:5000
    restart: "unless-stopped"
 
@IAmATeaf Just wondering if you've managed to move all of your cameras to the new Docker/DeepStack instance running on Windows, and how your CPU utilization has been? The reason I ask is that I am considering running DeepStack in Docker on my BI machine, but I am running Server 2012 R2, which isn't supported by Docker for Windows. If the CPU savings are substantial I will reload my system with Server 2016, but before doing that I wanted to follow up on your results. I have 15 cameras, and when running DeepStack directly on Windows my CPU was being maxed out and alerts were being missed because of it. I know that you had success after switching from DeepStack on Windows to DeepStack in Docker on Windows. Just wondering how things are going for you, how many cameras you have, and what your CPU savings are?
 
@pmcross Yes, I have moved/configured all of my cams to make use of AITools. Initially I did have DeepStack for Windows installed, but that was maxing out my CPU and pegging it at 100%.

Since then I’ve moved over to Docker Desktop and have DQ running within it.

This makes the CPU spike less, but when multiple cams trigger I can see spikes of around 84%; the system then quite quickly settles back down to around 25-30%.

When I was using substreams my system idled at around 8%, but after cloning the cams I found that the images BI was saving were from the substream, and this caused DeepStack to miss some alerts. So I had to remove substreams from the AI cloned cams, which results in BI pulling individual streams, so the overall CPU usage goes up. I'm hoping that a future update will fix this and allow images to be saved at the main-stream resolution while still detecting motion on the substream for the AI cloned cams. If and when that comes, it should help bring the CPU usage down further. This has been reported to Ken, and I urge others to also report it and ask if it could be made an option to choose which stream is used for the image.

Apart from the above, I think I've already stated that I have had to enable motion detection on my main cams, as some events were being missed, so again this might be adding to the overall CPU usage?