[tool] [tutorial] Free AI Person Detection for Blue Iris

Right, I have DeepStack running within Docker and can access the DQAI webpage using localhost and the port number, but I'm getting the following errors in the AI Tool log file:

[22.06.2020, 15:46:56.117]: Starting analysis of C:\BlueIris\AI-Input/AI-Drive1_C.20200622_154656086.jpg
[22.06.2020, 15:46:56.123]: System.IO.IOException | The process cannot access the file 'C:\BlueIris\AI-Input\AI-Drive1_C.20200622_154656086.jpg' because it is being used by another process. (code: -2147024864 )
[22.06.2020, 15:46:56.129]: Could not access file - will retry after 10 ms delay
[22.06.2020, 15:46:56.157]: Retrying image processing - retry 1
[22.06.2020, 15:46:56.163]: (1/6) Uploading image to DeepQuestAI Server
[22.06.2020, 15:46:56.174]: (2/6) Waiting for results
[22.06.2020, 15:46:56.178]: (3/6) Processing results:
[22.06.2020, 15:46:56.184]: System.NullReferenceException | Object reference not set to an instance of an object. (code: -2147467261 )
[22.06.2020, 15:46:56.188]: ERROR: Processing the following image 'C:\BlueIris\AI-Input/AI-Drive1_C.20200622_154656086.jpg' failed. Failure in AI Tool processing the image.

How did you fix this error?
 
Did you find a solution for this yet? Someone has noted the commands for creating a DeepStack service with NSSM: Windows Auto Startup and Minimize

However, I'm trying to figure out how to add service dependencies with NSSM so that the services start in the right order.
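Something like this is what I'm after, assuming NSSM's DependOnService parameter does what I think it does (the service names here are just examples):
Code:
:: Make the AI Tool service wait for the DeepStack service to start first
nssm set aitool DependOnService deepstack

:: Or with the built-in sc tool (the space after depend= is required)
sc config aitool depend= deepstack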

I'm Boost on the other forum. I have made a post here with screenshots.

 
Nothing jumps out at me in the settings -- everything seems right. What if you move the sensitivity sliders all the way to the left?
 
Hello,
I recently set this all up and was super excited to reduce the number of false alerts I was getting. I am running Blue Iris on a Dell OptiPlex i7-6700 with 8GB of RAM, and it's been working great. I have seven Dahua IPC-HDW5231R-Z cameras and one Dahua SD49225T-HN. I followed the guide from this thread plus this video to get everything set up. I am running DeepStack on my Synology NAS in a Docker container. I have everything set up correctly as far as I know, and AI Tool is working as expected... but I have some serious issues and can't figure out why.

I have never had any issues getting cameras to trigger until now. All of the secondary cameras I added (the ones responsible for triggering and putting images into the directory to be analyzed, paired with the continuous-recording cameras) are all of a sudden not triggering. I can walk right in front of them waving my arms, or walk by slowly at a normal pace from 10 feet away, and all of them but one do not trigger. There is one camera that triggers as expected, but at the same time I still get a TON of false alerts from it. Based on what most everyone else's experience has been with this, I have to be doing something wrong or have something misconfigured (hopefully?).

If anyone has any suggestions on things I could try to make this experience a bit better before I give up, that would be awesome.

FYI: I am using a min object size of 292, a min contrast of 19, and a min duration of 0.5 to try to get them to over-trigger.

I'm guessing that if you followed the video, you are using "SD" camera feeds @ 640x480 but with your old motion detection settings from the full-resolution image. 292 pixels in a 640x480 image is very large. Try lowering it.
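For a rough sense of scale, assuming the threshold is measured in pixels of area: a 640x480 frame has 307,200 pixels versus 2,073,600 at 1920x1080, about 6.75x fewer, so a 292 px threshold at SD covers the same fraction of the frame as roughly 1,970 px would at 1080p.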
 
I think I'm getting there with this. Thanks to all the community hard work for creating guides and tools like this.

I was maxing out my ancient CPU (i7 960 from back in the day) trying to run Deepstack, with image processing times being multiple seconds. Not workable.

However, given the recent(ish) announcement of the CUDA toolkit being available in WSL 2, I now have a GPU-based DeepStack Docker container running. I can access it fine in a browser, and I've put its URL into AI Tool, but I get this in the error log:

Code:
[14.08.2020, 20:28:49.612]: Starting analysis of C:\aiinput\frontsd.20200814_202849597.jpg
[14.08.2020, 20:28:49.617]: (1/6) Uploading image to DeepQuestAI Server
[14.08.2020, 20:28:49.633]: (2/6) Waiting for results
[14.08.2020, 20:28:49.639]: (3/6) Processing results:
[14.08.2020, 20:28:49.640]: System.NullReferenceException | Object reference not set to an instance of an object. (code: -2147467261 )
[14.08.2020, 20:28:49.676]: ERROR: Processing the following image 'C:\aiinput\frontsd.20200814_202849597.jpg' failed. Failure in AI Tool processing the image.

Any ideas? I'm thinking that the image is getting to the docker container fine, and being processed (below is the Docker log), but then not getting passed back to AI Tool. However, I am finding my way by feel...
Code:
[GIN] 2020/08/14 - 19:19:40 | 403 |       237.6µs |      172.20.0.1 | POST     /v1/vision/detection
[GIN] 2020/08/14 - 19:28:49 | 403 |        38.5µs |      172.20.0.1 | POST     /v1/vision/detection

Any thoughts? I'm not sure if the 403 is an HTTP Forbidden error, which is why it's returning null? So it gets the request and understands it, but says no, you're not allowed to make that request?
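For anyone else chasing this, a quick way to see the raw response instead of AI Tool's NullReferenceException is to hit the endpoint directly with curl (port 83 here is just an example host mapping; use whatever port you exposed and any jpg to hand):
Code:
# POST a test image to the detection endpoint and print DeepStack's raw reply
curl -X POST -F image=@test.jpg http://localhost:83/v1/vision/detection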
 
Do you have the API key entered into Deep Stack? I believe when I was having problems with the key I was getting 403 errors. If you browse to your Deep Stack instance via a web browser, does it show the installation as activated with a key?
 
I'm running DeepStack as the latest Docker version (deepquestai/deepstack:gpu), which doesn't require a key. The URL in a browser gives me the activated message:
[screenshot: DeepStack "activated" page]

However, that's on a 172.20.9.x subnet, where the log was talking about 172.20.0.x addresses. Hmmm....
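Worth noting: the 172.20.0.1 in the GIN log is the address the request arrived from (typically the Docker bridge gateway), not the container itself. The container's own address can be checked with docker inspect (substitute your container name):
Code:
# Print the container's IP address on each attached network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' deepstackgpu2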
 
FYI - my Docker in WSL isn't the Docker Desktop version with WSL; it's the version of Docker inside my Ubuntu install, so I could get all the nvidia gubbins...

I tested using the GPU on Docker with this:
Code:
tim@WinServer:~$ docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
        -fullscreen       (run n-body simulation in fullscreen mode)
        -fp64             (use double precision floating point values for simulation)
        -hostmem          (stores simulation data in host memory)
        -benchmark        (run benchmark to measure performance)
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
        -device=<d>       (where d=0,1,2.... for the CUDA device to use)
        -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
        -compare          (compares simulation results running once on the default GPU and once on the CPU)
        -cpu              (run n-body simulation on the CPU)
        -tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "GeForce GT 1030" with compute capability 6.1

> Compute 6.1 CUDA device: [GeForce GT 1030]
3072 bodies, total time for 10 iterations: 2.987 ms
= 31.594 billion interactions per second
= 631.882 single-precision GFLOP/s at 20 flops per interaction

This is the Docker for Deepstack:
Code:
sudo docker run --restart=always --gpus all -e VISION-SCENE=True -v localstorage:/datastore -p 83:5000 --name deepstackgpu2 deepquestai/deepstack:gpu
 
Edge Vector is probably the issue. For Edge Vector to work, the slider settings need to be quite a way to the left. Change Edge Vector to Simple and see if that fixes the issue.
Dang, was hoping this would be it... but unfortunately, same result. Still makes no sense to me. I'm wondering if my ~80% RAM usage is the issue; maybe I need 16GB instead of 8GB to accommodate the break time, which as far as I know is stored in RAM?
 
Nothing jumps out at me in the settings -- everything seems right. What if you move the sensitivity sliders all the way to the left?
I moved two of my cameras all the way to the left just now and walked right by both of them, and not a single trigger/new image showed up to be processed.
 
I'm guessing that if you followed the video, you are using "SD" camera feeds @ 640x480 but with your old motion detection settings from the full-resolution image. 292 pixels in a 640x480 image is very large. Try lowering it.
I did follow the guide, but 640x480 was too crappy for me, so I switched back and used the full 1080p resolution.
 
Still trying different settings to optimize this, but I get randomly high detection times occasionally, sometimes for hours at a time. Not sure what is causing the super high "Thread Queue Time" results. Here are an "average" log entry and a high-queue-time entry from the AITool log. I am running the VorlonCD fork of AITool against a Docker install of DeepQuest.

Average Detection:

[14.08.2020, 17:48:04.944]: DetectObjects> Starting analysis of X:\BlueIris\AI_Input\AI_LR.20200814_174804887.jpg...
[14.08.2020, 17:48:04.945]: DetectObjects> (1/6) Uploading image to DeepQuestAI Server at
[14.08.2020, 17:48:06.785]: DetectObjects> (2/6) Posted in 1838ms, Received a 492 byte response.
[14.08.2020, 17:48:06.787]: DetectObjects> (3/6) Processing results...
[14.08.2020, 17:48:06.788]: DetectObjects> Detected objects:couch (66.71%), tv (99.66%), tv (92.99%), microwave (74.5%), refrigerator (46.53%),
[14.08.2020, 17:48:06.789]: DetectObjects> (4/6) Checking if detected object is relevant and within confidence limits:
[14.08.2020, 17:48:06.823]: DetectObjects> couch (66.71%) is irrelevant.
[14.08.2020, 17:48:06.850]: DetectObjects> tv (99.66%) is irrelevant.
[14.08.2020, 17:48:06.877]: DetectObjects> tv (92.99%) is irrelevant.
[14.08.2020, 17:48:06.903]: DetectObjects> microwave (74.5%) is irrelevant.
[14.08.2020, 17:48:06.929]: DetectObjects> refrigerator (46.53%) is irrelevant.
[14.08.2020, 17:48:06.937]: Save> Settings saved to C:\AI Tool 1.65\Release\aitool.Settings.json
[14.08.2020, 17:48:06.937]: DetectObjects> (6/6) Camera Living Room caused an irrelevant alert.
[14.08.2020, 17:48:06.940]: DetectObjects> 5x irrelevant, so it's an irrelevant alert.
[14.08.2020, 17:48:06.942]: DetectObjects> ...Object detection finished:
[14.08.2020, 17:48:06.943]: DetectObjects> Total Time: 2053ms (Count=9592, Min=1542ms, Max=147548ms, Avg=2982ms)
[14.08.2020, 17:48:06.945]: DetectObjects> DeepStack Time: 1838ms (Count=9592, Min=1480ms, Max=5388ms, Avg=1949ms)
[14.08.2020, 17:48:06.947]: DetectObjects> File lock Time: 56ms (Count=4018, Min=19ms, Max=213ms, Avg=59ms)
[14.08.2020, 17:48:06.950]: DetectObjects> Thread Queue Time: 0ms (Count=5606, Min=1ms, Max=144927ms, Avg=2184ms)
[14.08.2020, 17:48:08.942]: DetectObjects>

Long Detection:

[14.08.2020, 10:57:26.541]: DetectObjects>
[14.08.2020, 10:57:26.542]: DetectObjects> Starting analysis of X:\BlueIris\AI_Input\AI_FrtDrv.20200814_105550277.jpg...
[14.08.2020, 10:57:26.543]: DetectObjects> (1/6) Uploading image to DeepQuestAI Server at
[14.08.2020, 10:57:28.498]: DetectObjects> (2/6) Posted in 1953ms, Received a 304 byte response.
[14.08.2020, 10:57:28.500]: DetectObjects> (3/6) Processing results...
[14.08.2020, 10:57:28.501]: DetectObjects> Detected objects: person (43.22%), car (99.56%), car (64.41%),
[14.08.2020, 10:57:28.502]: DetectObjects> (4/6) Checking if detected object is relevant and within confidence limits:
[14.08.2020, 10:57:28.536]: DetectObjects> person (43.22%) is irrelevant.
[14.08.2020, 10:57:28.570]: DetectObjects> car (99.56%) is irrelevant.
[14.08.2020, 10:57:28.608]: DetectObjects> car (64.41%) is irrelevant.
[14.08.2020, 10:57:28.614]: Save> Settings saved to C:\AI Tool 1.65\Release\aitool.Settings.json
[14.08.2020, 10:57:28.614]: DetectObjects> (6/6) Camera Front Driveway caused an irrelevant alert.
[14.08.2020, 10:57:28.616]: DetectObjects> 1x not in confidence range; 2x irrelevant, so it's an irrelevant alert.
[14.08.2020, 10:57:28.617]: DetectObjects> ...Object detection finished:
[14.08.2020, 10:57:28.618]: DetectObjects> Total Time: 98271ms (Count=2331, Min=1542ms, Max=98944ms, Avg=42505ms)
[14.08.2020, 10:57:28.620]: DetectObjects> DeepStack Time: 1953ms (Count=2331, Min=1480ms, Max=4881ms, Avg=1874ms)
[14.08.2020, 10:57:28.621]: DetectObjects> File lock Time: 0ms (Count=1289, Min=20ms, Max=198ms, Avg=60ms)
[14.08.2020, 10:57:28.623]: DetectObjects> Thread Queue Time: 96197ms (Count=1050, Min=1ms, Max=96923ms, Avg=41315ms)
[14.08.2020, 10:57:28.625]: DetectObjects>

Any thoughts on what might cause this to vary so much? When there is an instance like this, it ties everything up and no more alerts come in until it settles back down. Thanks for any direction anyone might have!
 
This is the Docker for Deepstack:
Code:
sudo docker run --restart=always --gpus all -e VISION-SCENE=True -v localstorage:/datastore -p 83:5000 --name deepstackgpu2 deepquestai/deepstack:gpu
Issue 1 - I was using VISION-SCENE, not VISION-DETECTION. Numpty.
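For anyone following along, the corrected command just swaps that one environment variable (otherwise identical to my earlier post):
Code:
sudo docker run --restart=always --gpus all -e VISION-DETECTION=True -v localstorage:/datastore -p 83:5000 --name deepstackgpu2 deepquestai/deepstack:gpu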
 
Still trying different settings to optimize this, but I get randomly high detection times occasionally, sometimes for hours at a time. Not sure what is causing the super high "Thread Queue Time" results. Here are an "average" log entry and a high-queue-time entry from the AITool log. I am running the VorlonCD fork of AITool against a Docker install of DeepQuest.

[14.08.2020, 10:57:28.618]: DetectObjects> Total Time: 98271ms (Count=2331, Min=1542ms, Max=98944ms, Avg=42505ms)
[14.08.2020, 10:57:28.620]: DetectObjects> DeepStack Time: 1953ms (Count=2331, Min=1480ms, Max=4881ms, Avg=1874ms)
[14.08.2020, 10:57:28.621]: DetectObjects> File lock Time: 0ms (Count=1289, Min=20ms, Max=198ms, Avg=60ms)
[14.08.2020, 10:57:28.623]: DetectObjects> Thread Queue Time: 96197ms (Count=1050, Min=1ms, Max=96923ms, Avg=41315ms)
[14.08.2020, 10:57:28.625]: DetectObjects>

Any thoughts on what might cause this to vary so much? When there is an instance like this, it ties everything up and no more alerts come in until it settles back down. Thanks for any direction anyone might have!

Only one file can be processed at a time by DeepStack. I'm guessing too many images are being generated, so they all have to wait in line. The thread queue time is how long it took for the previous images to be processed. Are you sure BI is set to generate images "Only when triggered"? 2000ms for DeepStack is a bit long; faster hardware or a solid-state drive may help reduce that time.
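If the queue keeps backing up even with "Only when triggered", one option, assuming your AITool build can list more than one server (the VorlonCD fork does, as far as I know), is to run a second DeepStack container on another port so two images can be processed in parallel (names and port here are examples):
Code:
# Hypothetical second DeepStack instance on host port 84; add its URL alongside the first in AITool
sudo docker run --restart=always -e VISION-DETECTION=True -p 84:5000 --name deepstack2 deepquestai/deepstack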
 
Hi guys, I would like to run my cameras and DeepStack virtually on my ESXi server. Currently it runs two E5-2690 v4 CPUs and 500GB of RAM. The CPUs do not support Quick Sync (adding a single camera spikes my CPU usage on six cores to 50%), so I am hoping I can add a couple of GTX 1080s for Blue Iris and DeepStack AI and offload all of the compute to the GPUs. Has anyone tried this approach? If so, how did it go?
 
I use Blue Iris with the BI mobile app (for receiving alerts and playback) and tested this AI tool, and it's been working nicely in the daytime. However, there are three issues I came across:
  • When I get visitors, their cars get parked in my front yard, and if BI motion detection triggers randomly because of lightning or shadows (caused by weather), it automatically accepts those alerts as positive since the car is there. Naturally, the AI is doing its job, since it's telling you what you configured it to look for in the picture; it's BI's motion detection that is at fault for being inaccurate. (I believe not much can be done about this.)
  • AI Tool does not work at all if you have "highlight" selected under triggers in BI's motion detection. <-- this might be good to put in the tutorial.
  • Night time in black and white is.... not working at all for me. AI Tool cannot seem to trigger/intercept anything in a black-and-white image, so I have the AI set to work only in the daytime for now.
 