[tool] [tutorial] Free AI Person Detection for Blue Iris

robpur

Getting comfortable
Joined
Jul 31, 2014
Messages
278
Reaction score
1,350
Location
Washington State
I notice that you @robpur have 2020.12 at the end though. Is that just your container name or a newer version?
It's the latest version that was recently released, fetched with

sudo docker pull deepquestai/deepstack:jetpack-2020.12

I don't know how it compares with what you are running.
 

joshwah

Pulling my weight
Joined
Apr 25, 2019
Messages
298
Reaction score
146
Location
australia
I don't understand why my AI Tool randomly keeps storing images in the queue... it's so frustrating. I've tried rebooting, deleting all the images in the folder, etc., and it always does it randomly. At the moment I have 32 images in the 'queue' and everything shows it is up and running.

Edit: what makes it even weirder is that if I go to Settings --> AI Server URL(s) --> Edit and upload a test image, it works straight away... it does not go into the "queue"?
 

maximosm

Young grasshopper
Joined
Jan 8, 2015
Messages
95
Reaction score
6
Hi guys,

Here's an example to show you what I mean, with the same image used against the CPU and Jetson versions in the same mode.

You can see that the level of confidence on the person (97% vs 41%) is wildly different, and other objects, such as the bowl, were not picked up at all.

Jetson

View attachment 77606

CPU

View attachment 77607
Do you use both versions on a Jetson Nano device?
 

AskNoOne

n3wb
Joined
Dec 20, 2020
Messages
7
Reaction score
5
Location
UK
Do you use both versions on a Jetson Nano device?
Good question. No, the CPU version I am comparing it with is a Windows version that I downloaded off the DeepStack website 6 months or so ago and have been using with Blue Iris since. It's only an i3 NUC (ESXi VM), so the plan was to offload the intense image processing to a Jetson, as the NUC is already kept pretty busy with Blue Iris and Home Assistant.
 

maximosm

Young grasshopper
Joined
Jan 8, 2015
Messages
95
Reaction score
6
Good question. No, the CPU version I am comparing it with is a Windows version that I downloaded off the DeepStack website 6 months or so ago and have been using with Blue Iris since. It's only an i3 NUC (ESXi VM), so the plan was to offload the intense image processing to a Jetson, as the NUC is already kept pretty busy with Blue Iris and Home Assistant.
I am using a Linux i7 NUC as a test with DeepStack and I was thinking of going to a Jetson... but now I don't know if it is a good idea.
 

AskNoOne

n3wb
Joined
Dec 20, 2020
Messages
7
Reaction score
5
Location
UK
I am using a Linux i7 NUC as a test with DeepStack and I was thinking of going to a Jetson... but now I don't know if it is a good idea.
Hopefully one of the guys who's cleverer than me can figure out why they are different and knows how to build Docker images.

I don't understand why different models would be used for the different versions, to be honest. Perhaps it was done because the Nano only has 2 GB or 4 GB of RAM, but I would rather have (much) better recognition and just increase the swap file size.

It's probably worth getting one to play around with anyway if you can afford it, but I would personally hold off on using the Jetson for object detection until it improves.
 

robpur

Getting comfortable
Joined
Jul 31, 2014
Messages
278
Reaction score
1,350
Location
Washington State
Good question. No, the CPU version I am comparing it with is a Windows version that I downloaded off the DeepStack website 6 months or so ago and been using with Blue Iris since. It's only an i3 NUC (ESXi VM) so the plan was to offload the intense image processing to a Jetson as the NUC is already kept pretty busy with Blue Iris and Home Assistant.
Since you downloaded it six months ago, it's not the current version; the new beta versions were released this month, so the one you are running is rather old. The same goes for the Jetson version; there was a very recent new release. Perhaps you have been comparing apples to oranges.

 

AskNoOne

n3wb
Joined
Dec 20, 2020
Messages
7
Reaction score
5
Location
UK
Since you downloaded it six months ago, it's not the current version; the new beta versions were released this month, so the one you are running is rather old. The same goes for the Jetson version; there was a very recent new release. Perhaps you have been comparing apples to oranges.

I'm just a bit surprised that a 6-month-old apple is better than a 1-week-old orange :-(
 

robpur

Getting comfortable
Joined
Jul 31, 2014
Messages
278
Reaction score
1,350
Location
Washington State
I'm just a bit surprised that a 6-month-old apple is better than a 1-week-old orange :-(
Better can be relative. Perhaps, as someone previously mentioned, the thresholds have been changed, so the scales between the newer and older versions cannot be directly compared. It would be interesting to see how the latest Nano version compares to the latest Windows and generic Docker versions, and then how those compare to the older version. If it's just a matter of scale, and not the accuracy of the newer versions, then I think it can be compensated for in AI Tool with the confidence limits setting, but I'm not absolutely sure about that.
 

cjowers

Getting the hang of it
Joined
Jan 28, 2020
Messages
107
Reaction score
36
Location
AUS
Better can be relative. Perhaps, as someone previously mentioned, the thresholds have been changed, so the scales between the newer and older versions cannot be directly compared. It would be interesting to see how the latest Nano version compares to the latest Windows and generic Docker versions, and then how those compare to the older version. If it's just a matter of scale, and not the accuracy of the newer versions, then I think it can be compensated for in AI Tool with the confidence limits setting, but I'm not absolutely sure about that.
Yep, and it's likely possible to adjust the minimum confidence level of detection (0.45) in DeepStack - Object Detection | DeepStack v1.1.2 documentation | minimum confidence

Lots of other good info is being added to their documentation too.

With all the speed gains in their recent models, I wouldn't be surprised if there was some compromise on accuracy. The general models are getting better and better over time, but you can always create a custom model for your image environment, which could be much more accurate (if you do it right) since it is specific to your images.
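To make the threshold idea concrete, here is a minimal Python sketch of filtering DeepStack-style detection results by a confidence cutoff on the client side. The JSON shape follows the documented detection response; the helper name and the example confidences (borrowed from the 97% vs 41% comparison earlier in the thread) are illustrative, not AI Tool's actual code.

```python
def filter_predictions(response, min_confidence=0.45):
    """Keep only predictions at or above the confidence threshold.

    `response` follows the DeepStack detection JSON shape:
    {"success": true, "predictions": [{"label": ..., "confidence": ...}, ...]}
    """
    return [p for p in response.get("predictions", [])
            if p["confidence"] >= min_confidence]

# Illustrative response, echoing the person/bowl comparison above.
response = {
    "success": True,
    "predictions": [
        {"label": "person", "confidence": 0.41},
        {"label": "bowl", "confidence": 0.30},
    ],
}

# At the default 0.45 threshold both detections are dropped;
# lowering it to 0.40 keeps the person.
print(filter_predictions(response))                       # []
print(filter_predictions(response, min_confidence=0.40))  # person only
```

Whether you lower the threshold in DeepStack itself or raise/lower the confidence limits in AI Tool, the effect is the same kind of cutoff applied to the returned predictions.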
 

Tinbum

Pulling my weight
Joined
Sep 5, 2017
Messages
448
Reaction score
126
Location
UK
Better can be relative. Perhaps, as someone previously mentioned, the thresholds have been changed, so the scales between the newer and older versions cannot be directly compared. It would be interesting to see how the latest Nano version compares to the latest Windows and generic Docker versions, and then how those compare to the older version. If it's just a matter of scale, and not the accuracy of the newer versions, then I think it can be compensated for in AI Tool with the confidence limits setting, but I'm not absolutely sure about that.
I use the Docker CPU, Windows GPU, and Jetson versions and find the results very similar but the times different: the GPU best at 81 ms, the Docker CPU second at 220 ms, then the Jetson at 450 ms, all on high.
 

Tinbum

Pulling my weight
Joined
Sep 5, 2017
Messages
448
Reaction score
126
Location
UK
Is there a variation that load-shares between the GPU and CPU? Or is this a one-or-the-other thing?
No. I use the GPU as URL 1, the Jetson as URL 2, and the CPU as URL 3, and have Queued unchecked.

To quote Chris:

This is the way the queuing system works right now:

If only one image is in the queue and you have "Queued" checked in Settings, the URLs take turns:
1. Image 1 in queue gets sent to URL 1

2. Image 1 in queue gets sent to URL 2

3. Image 1 in queue gets sent to URL 3


If more than one image is in the queue and you have "Queued" checked in Settings:

1. Image 1 in queue gets sent to URL 1

2. Image 2 in queue gets sent to URL 2

3. Image 3 in queue gets sent to URL 3 (or waits until a URL is available and takes them in order)


If "Queued" is not checked, then only the first URL will be used UNLESS it is busy in which case the next URL in line will be used only if needed.
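The rules above can be sketched as a small selection function. This is just a hedged reading of Chris's description, not AI Tool's actual implementation; the function name, arguments, and example URLs are all illustrative.

```python
def pick_url(urls, busy, queued_mode, image_index):
    """Pick the server URL for the next image.

    urls        - ordered list of AI server URLs (URL 1 first)
    busy        - set of URLs currently processing an image
    queued_mode - True if "Queued" is checked in Settings
    image_index - running count of dispatched images (0-based)

    Returns the chosen URL, or None if every URL is in use.
    """
    if queued_mode:
        # URLs take turns: image N starts at URL (N mod len(urls)),
        # falling back to the next URL in rotation if it is busy.
        start = image_index % len(urls)
        order = urls[start:] + urls[:start]
    else:
        # Only the first URL is used unless it is busy, in which
        # case the next URL in line is tried only if needed.
        order = urls
    for url in order:
        if url not in busy:
            return url
    return None  # all in use; the image waits in the queue

urls = ["http://gpu:82", "http://jetson:83", "http://cpu:84"]
print(pick_url(urls, set(), queued_mode=True, image_index=1))               # http://jetson:83
print(pick_url(urls, {"http://gpu:82"}, queued_mode=False, image_index=0))  # http://jetson:83
```

With Queued unchecked, as in the post above, nearly all images go to URL 1 and the later URLs only absorb overflow.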
 

robpur

Getting comfortable
Joined
Jul 31, 2014
Messages
278
Reaction score
1,350
Location
Washington State
To quote Chris:
This is the way the queuing system works right now:
This is good to know. I prefer to throw most of the work at the Nano, use my desktop next, and keep as much of the load off of my BI machine as possible, so I'm unchecking the queued box.
 

robpur

Getting comfortable
Joined
Jul 31, 2014
Messages
278
Reaction score
1,350
Location
Washington State
I use the Docker CPU, Windows GPU, and Jetson versions and find the results very similar but the times different: the GPU best at 81 ms, the Docker CPU second at 220 ms, then the Jetson at 450 ms, all on high.
Based on your report, it appears that the difference in confidence level that AskNoOne saw between the Nano and CPU versions has more to do with the version of DS being used than with the platform. This is assuming that you are using the latest DS on all your platforms.

I've only been running the new versions for a few days, but I just went through all of my motion alerts and didn't find any alert that should have been flagged by AI that was not flagged. So, at least for now in my situation, the new version of DS has not missed any objects. I did have a false alert today when a wasp was crawling on a lens, but the older Windows version did the same thing.
 

SyconsciousAu

Getting comfortable
Joined
Sep 13, 2015
Messages
872
Reaction score
825
No. I use the GPU as URL 1, the Jetson as URL 2, and the CPU as URL 3, and have Queued unchecked.

To quote Chris:

This is the way the queuing system works right now:

If only one image is in the queue and you have "Queued" checked in Settings, the URLs take turns:
1. Image 1 in queue gets sent to URL 1

2. Image 1 in queue gets sent to URL 2

3. Image 1 in queue gets sent to URL 3


If more than one image is in the queue and you have "Queued" checked in Settings:

1. Image 1 in queue gets sent to URL 1

2. Image 2 in queue gets sent to URL 2

3. Image 3 in queue gets sent to URL 3 (or waits until a URL is available and takes them in order)


If "Queued" is not checked, then only the first URL will be used UNLESS it is busy in which case the next URL in line will be used only if needed.
Have you tested whether that actually improves processing times? I ran a 2,700-image stress test with BI running against both the CPU version in Docker and the Windows GPU version, and it didn't process them any faster: about 12 minutes (16:13:43 - 16:25:36) to do all 2,700 images, about the same result I got from the CPU only.

Give me another 12 minutes and I'll run a GPU-only test.
 

joshwah

Pulling my weight
Joined
Apr 25, 2019
Messages
298
Reaction score
146
Location
australia
Furthermore, regarding my earlier errors with all images going into the queue...

---- All URL's are in use, disabled, camera name doesnt match or time range was not met. (1 inuse, 0 disabled, 0 wrong camera, 0 not in time range, 0 at max per month limit) Waiting...

See screenshot: Screenshot

Any ideas?
 

SyconsciousAu

Getting comfortable
Joined
Sep 13, 2015
Messages
872
Reaction score
825
Furthermore, regarding my earlier errors with all images going into the queue...

---- All URL's are in use, disabled, camera name doesnt match or time range was not met. (1 inuse, 0 disabled, 0 wrong camera, 0 not in time range, 0 at max per month limit) Waiting...

See screenshot: Screenshot

Any ideas?
I just had that error. I thought the beta of DeepStack GPU that I'm running had crashed.
 

SyconsciousAu

Getting comfortable
Joined
Sep 13, 2015
Messages
872
Reaction score
825
Have you tested whether that actually improves processing times? I ran a 2,700-image stress test with BI running against both the CPU version in Docker and the Windows GPU version, and it didn't process them any faster: about 12 minutes (16:13:43 - 16:25:36) to do all 2,700 images, about the same result I got from the CPU only.

Give me another 12 minutes and I'll run a GPU-only test.
GPU (Windows beta) + CPU (Docker): 11m53s

So the first observation is that GPU only queues more images than both combined, but not that many more: a max queue size of 1076 for GPU vs 1006 for GPU and CPU together.

Time-wise: 17:16:54 - 17:30:31, 13m37s.

I thought I would run CPU only again just to make sure.

CPU only queued fewer images than both GPU only and the combined CPU and GPU, at 902.

Time-wise it started at 17:38:20 and finished at 17:50:05: 11m45s.

There appears to be zero benefit to running the Windows GPU beta version in addition to the CPU Docker version. Strangely, the GPU version reports slightly faster processing times than the CPU version, so it has to be something to do with the way the Windows GPU version is passing the requests to the card and returning them.

This test was run on an i7-10750H with a GTX 1650. YMMV.
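For comparison, the three runs work out to very similar throughputs. A quick Python sketch of the arithmetic, assuming the 16:13:43 - 16:25:36 timestamps from the earlier post correspond to the combined GPU+CPU run:

```python
from datetime import datetime

IMAGES = 2700  # size of the stress test

# Start/end timestamps as reported in the posts above.
runs = {
    "GPU+CPU":  ("16:13:43", "16:25:36"),
    "GPU only": ("17:16:54", "17:30:31"),
    "CPU only": ("17:38:20", "17:50:05"),
}

def throughput(start, end, images=IMAGES):
    """Images per second for a run given its start/end times."""
    fmt = "%H:%M:%S"
    seconds = (datetime.strptime(end, fmt)
               - datetime.strptime(start, fmt)).total_seconds()
    return images / seconds

for name, (start, end) in runs.items():
    print(f"{name}: {throughput(start, end):.2f} images/s")
# Roughly 3.79, 3.30, and 3.83 images/s respectively, which matches
# the observation that adding the GPU gave no overall speedup.
```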
 