[tool] [tutorial] Free AI Person Detection for Blue Iris

So I'm confused about how to use Docker Desktop. I have it installed and can see the little whale icon, but am I supposed to install Linux or anything along those lines? I don't see any terminal into which I can enter the command lines.

As mayop said. To offer a specific example, I used the Windows program PowerShell (opened as administrator) to type those commands in, which runs the Docker container (and displays a log). I believe Docker Desktop needs to be running in the background already. You can open the Docker Desktop dashboard to manage containers and get a command-line interface, but that seems more complicated.
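
If you want to confirm that PowerShell can actually reach the Docker engine before pulling anything, a quick sanity check (assuming Docker Desktop is already running) is:

Code:
docker version
docker info

Both should print engine details rather than an error about the daemon not being reachable.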
 
Use the Windows terminal:

Example for what I used before I switched to the beta version:

Download DeepStack:
docker pull deepquestai/deepstack:noavx

Set up the container:
docker run --restart=always -e MODE=Low -e VISION-DETECTION=True -e API-KEY="" -v localstorage:/datastore -p 5000:5000 --name deepstack deepquestai/deepstack:noavx

"-e MODE=Low" is optional (Default is medium). It's not as accurate but gives a faster result. I have noticed a few false results so far but not enough to switch back to medium.
"--restart=always" will keep it running if it stops or if the PC is restarted.
In the above example it runs on port 5000.
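
Once the container is up, you can sanity-check it from the same PowerShell window. A minimal sketch, assuming the container above is listening on port 5000 and you have some test.jpg in the current folder (use curl.exe, the real curl bundled with recent Windows 10, not the PowerShell curl alias):

Code:
docker ps --filter "name=deepstack"
curl.exe -X POST -F image=@test.jpg http://localhost:5000/v1/vision/detection

If DeepStack is running, the second command returns a JSON response listing the detected objects and their confidence scores.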

I'm using the beta version of DeepStack now, which doesn't require activation.


I've also modified AI Tool to have it mark up and save the positive images to a subfolder, so I don't need to open AI Tool to see what it detected. I used to have it send that image to Telegram when I was debugging, but now I have it save the images on my NAS server (example image attached).
I also added a [prefix] variable to mine so I can use the following trigger URL, which works with the "Input file begins with" camera setting.

http://localhost:80/admin?camera=[prefix]&flagalert=1&memo=[summary]&user={user}&pw={password}
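
For illustration, with hypothetical values substituted (a camera whose snapshot files begin with Drive, a "person" detection summary, and made-up credentials), the call would end up looking something like:

Code:
http://localhost:80/admin?camera=Drive&flagalert=1&memo=person%20(92%25)&user=admin&pw=secret

Note that the memo value needs URL-encoding if it contains spaces or percent signs.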

Nice writeup, I'll copy some information over to the setup post.

Just one thing:
I think no parent would like other people to have photos of their children, maybe even multiple photos every day on their way to school and back. The image quality may not suffice for identification or the like, but that's not the point. Surveilling and analyzing public areas is dangerous and corrupts every citizen's right to freedom. I'm very happy not to have to live in a real-life 1984 country like China. On top of that, most cams aren't (and partly can't be) sufficiently secured, so potentially anybody can spy on the schoolchildren and other people. Therefore: in the future, the setup guide will kindly ask every user to do me a favor in return and not use AI Tool to analyze public spaces.
 
Yes I can add a switch to turn off sending the image along.

Regarding the Raspberry Pi and the Compute Stick: what are the processing times for each?
Awesome! I'll be keeping an eye out for that then :)

I guess training DeepStack is not possible, right? For example: it is pretty convinced that there is a "broccoli" in my driveway, while it is only a round-shaped tree in the pavement :)

The processing times are around 0.6 seconds (if I interpret the log below correctly: from the start of the analysis at :31.877 to the processed results at :32.416 is about 0.54 s)

Code:
[31.07.2020, 07:46:31.877]: Starting analysis of D:\Kamera\aiinput\gartenhd-ss.20200731_074630566.jpg
[31.07.2020, 07:46:31.895]: (1/6) Uploading image to DeepQuestAI Server
[31.07.2020, 07:46:32.402]: (2/6) Waiting for results
[31.07.2020, 07:46:32.416]: (3/6) Processing results:
[31.07.2020, 07:46:32.430]:    Detected objects:potted plant (57,96%),
[31.07.2020, 07:46:32.446]: (4/6) Checking if detected object is relevant and within confidence limits:
[31.07.2020, 07:46:32.462]:    potted plant (57,96%):
[31.07.2020, 07:46:32.514]:    potted plant (57,96%) is irrelevant.
[31.07.2020, 07:46:32.552]: (6/6) Camera Garten2 caused an irrelevant alert.
[31.07.2020, 07:46:32.586]: Adding irrelevant detection to history list.
[31.07.2020, 07:46:32.606]: 1x irrelevant, so it's an irrelevant alert.
 

Whoa, 0.6 s, that's impressive for such a tiny device; I think I'll get one for testing as well.
That would probably be a monstrous broccoli :D - no, training sadly is not possible, which annoys me too.
1.67 does not include the Telegram switch yet, but it's coming soon.
 
Under Record, is it not possible to set the size of the JPEG? I want to trigger snapshots for AI Tool, but I don't want 4K resolution for that. I know I can use substreams, but they are a bit too low-res. 720p would be ideal, but I can only set the quality of the JPEG, not the dimensions.
 
For myself, BI is using the sub-stream for my JPEGs and the main stream for direct-to-disc recording. The snapshots are 1080p, not 4K, so BI is using the sub-stream. I am using sub-stream 2, not 1, on my assorted cameras, which are set to 1080p at 3 fps. Mine have limits on sub-stream 1, such as D1/VGA/CIF; on sub-stream 2 I can choose between CIF, 720p and 1080p. Perhaps check your cams to see if you have a sub-stream 2 that offers the 720p you are after.
 

Yeah, plus I'm running my ioBroker on that Raspberry Pi and also connected a 170° 5 MP camera to it with IR-cut, so I'd say I'm making the best of that budget :D
 

Thanks, so if I ran this in PowerShell, "docker pull deepquestai/deepstack:noavx", I assume that isn't the beta version, and therefore I need to input my API key between the quotation marks on the next line?

In this line: docker run --restart=always -e MODE=Low -e VISION-DETECTION=True -e API-KEY="" -v localstorage:/datastore -p 5000:5000 --name deepstack deepquestai/deepstack:noavx

Where it shows -p 5000:5000, if I use port 81, would it be 8081:8081?

Some folks have entered the timezone, is that not necessary?

Oh, and also, the "noavx" tag: if I'm running an i7-6700, do I still use the noavx image? I thought that was for older processors.
 
@GentlePumpkin, I don't seem to be getting a notification now on alerts; are these the right settings? I.e., if I check "Motion zones", I end up with the usual array of false alerts, since an alert is sent every time motion is triggered.

But if I turn off Motion zones, I get zero alerts to my email and phone, even though AI Tool is showing pictures and I'm getting a "Flagged" notification in BI when it detects a human. But it's not triggering an alert.

(screenshot of the Blue Iris camera alert settings)
 
I have Hikvision 4K cameras but I think they do not support higher resolution substreams, even at lower FPS.

That's correct, unfortunately they only support up to 640x480.
 
The 5000:5000 refers to the external (host) and internal (container) ports that DeepStack will respond to and use internally.

So if you want to access DeepStack using port 81 you would use 81:5000; if you want to access it using 8081, then use 8081:5000.
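
To make that concrete, here is the earlier run command with only the host side of -p changed, so DeepStack is reachable on port 8081 while still listening on 5000 inside the container:

Code:
docker run --restart=always -e MODE=Low -e VISION-DETECTION=True -e API-KEY="" -v localstorage:/datastore -p 8081:5000 --name deepstack deepquestai/deepstack:noavx

AI Tool would then point at http://localhost:8081 instead of http://localhost:5000.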
 
Dahua definitely do.
My 4MP Hikvisions also do; in addition to the sub-stream they have third, fourth and custom streams. On mine these all support 1080p. Perhaps the 4K Hikvisions can just do 720p.

I can't even get 720p in a substream. I have these:

Hikvision DS-2CD2185FWD-I (4k)
Hikvision DS-2CD2045FWD-I (2K)

I only see a main stream and a sub-stream in the settings; where can I find the third stream?
 
Both those models support a 720p third stream.
If it's not on the video page, it is typically under "System > Maintenance > System Service > Software"; check the box labelled "Enable Third Stream". The camera will need to reboot. In BI, from memory, use 103 instead of 101/102 to access the third stream, but I can't recall 100% and I am away.
 
Most Hikvisions will support 720p as a third stream (despite the sub-stream being a measly 640 × 480).
It is typically under "System > Maintenance > System Service > Software"; check the box labelled "Enable Third Stream". It can vary by model, and the camera will need to reboot. In BI, from memory, use 103 instead of 101/102 to access the third stream.
If you check your camera's specs it will list a third stream (most do); some Hikvisions even support 5 streams. For this use you can keep the fps low, but the third stream can go to 25-30 fps on most.
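
For reference, those stream numbers map onto Hikvision's default RTSP paths; a sketch with a made-up IP and credentials:

Code:
rtsp://admin:password@192.168.1.64:554/Streaming/Channels/101  (main stream)
rtsp://admin:password@192.168.1.64:554/Streaming/Channels/102  (sub-stream)
rtsp://admin:password@192.168.1.64:554/Streaming/Channels/103  (third stream)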
 
Thanks. It will disable H264+ and H265+, though. It would just work much better if Blue Iris implemented a way to downscale the JPEG captures from the 4K main stream: just a simple option to set not only the quality but also the resolution of the JPEG.
 
Agree. I was never able to get H264+ or H265+ working reliably with H/W decoding and Blue Iris, so I just use H265. Has H265+ always worked for you? Perhaps I should try it again.