[tool] [tutorial] Free AI Person Detection for Blue Iris

pbc

Getting comfortable
Joined
Jul 11, 2014
Messages
1,024
Reaction score
156
Hmmm... trying out the new 1.67 using the instructions. I haven't even got to setting up the cameras in AI Tool yet, and am already getting notifications left and right due to the wind blowing my front door plants around... I must not have something configured properly in the camera itself, as it is doing exactly what I thought it would do!
 

pbc

Getting comfortable
Joined
Jul 11, 2014
Messages
1,024
Reaction score
156
Ah... I need to uncheck Trigger Sources, IIRC. What about "Only when triggered" on the Record page?

Weird... something is still off. DeepStack is constantly analysing now, taking snapshots every 3 seconds due to the wind (and the CPU running up to 80% every other second), which wasn't happening before with the clones.

(screenshot attached)

Wow, I had to turn off these two cameras and DeepStack, as my CPU usage was shooting to 100% the entire time.

@GentlePumpkin Any clue why that would happen? The motion triggers were otherwise the same as I had on the AI camera clones with 1.65, but I didn't have the issue of the camera taking a snapshot every 3 seconds just because of wind. At least, I think they were the same; I stupidly deleted the AI clone cameras without backing up first.
 

pbc

Getting comfortable
Joined
Jul 11, 2014
Messages
1,024
Reaction score
156
If you look back, I posted the command line that I used to do the pull, which also starts it.
So I'm confused about how to use Docker Desktop. I have it installed and see the little whale, but am I supposed to install Linux or anything along those lines? I don't see any terminal into which I can enter the command lines.
 

mayop

n3wb
Joined
Jul 20, 2020
Messages
29
Reaction score
22
Location
Canada
So I'm confused about how to use Docker Desktop. I have it installed and see the little whale, but am I supposed to install Linux or anything along those lines? I don't see any terminal into which I can enter the command lines.
Use the Windows terminal:

Example for what I used before I switched to the beta version:

Download DeepStack:
docker pull deepquestai/deepstack

Set up the container:
docker run --restart=always -e MODE=Low -e VISION-DETECTION=True -v localstorage:/datastore -p 5000:5000 --name deepstack deepquestai/deepstack

"-e MODE=Low" is optional (Default is medium). It's not as accurate but gives a faster result. I have noticed a few false results so far but not enough to switch back to medium.
"--restart=always" will keep it running if it stops or if the PC is restarted.
In the above example it runs on port 5000.

I'm using the beta version of DeepStack now, which doesn't require activation.
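Once the container is up, you can sanity-check it from the same terminal before touching AI Tool. A minimal sketch, assuming the default port 5000 and a test image named test.jpg in the current folder (both are just placeholders):

Code:
# confirm the container is running
docker ps --filter name=deepstack

# send a test image to DeepStack's detection endpoint
# (curl.exe ships with recent Windows 10 builds)
curl.exe -X POST -F image=@test.jpg http://localhost:5000/v1/vision/detection

If DeepStack is healthy, it returns JSON listing any detected objects with confidence scores.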


I've also modified AI Tool to have it mark up and save the positive images to a subfolder, so I don't need to use AI Tool to see what it detected. I used to have it send that image to Telegram when I was debugging, but now I have it save the images on my NAS server. (Example image attached.)
I also added a [prefix] variable to mine so I can use the following trigger URL, which will use the "Input file begins with" camera setting.

http://localhost:80/admin?camera=[prefix]&flagalert=1&memo=[summary]&user={user}&pw={password}
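If you want to test that trigger URL by hand before wiring it into AI Tool, you can call it from the same terminal. A rough sketch, with MyCam, admin and pass as placeholders for your camera's short name and your BI credentials (port 80 as in the URL above):

Code:
curl.exe "http://localhost:80/admin?camera=MyCam&flagalert=1&memo=person&user=admin&pw=pass"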
 

cjowers

Getting the hang of it
Joined
Jan 28, 2020
Messages
107
Reaction score
36
Location
AUS
So I'm confused about how to use Docker Desktop. I have it installed and see the little whale, but am I supposed to install Linux or anything along those lines? I don't see any terminal into which I can enter the command lines.
As mayop said. To offer a specific example, I used the Windows program PowerShell (opened as administrator) to type those commands in, which runs the Docker container (and displays a log). I believe Docker Desktop needs to be running in the background already. You can open the Docker Desktop dashboard to manage containers and get a command line interface, but that seems more complicated.
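To confirm Docker Desktop is actually up before pasting the commands in, a quick check from PowerShell (nothing DeepStack-specific here):

Code:
# fails with an error if the Docker engine isn't running yet
docker version

# lists running containers once the engine is up
docker ps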
 

GentlePumpkin

IPCT Contributor
Joined
Sep 4, 2017
Messages
193
Reaction score
321
Use the Windows terminal:

Example for what I used before I switched to the beta version:

Download DeepStack:
docker pull deepquestai/deepstack:noavx

Set up the container:
docker run --restart=always -e MODE=Low -e VISION-DETECTION=True -e API-KEY="" -v localstorage:/datastore -p 5000:5000 --name deepstack deepquestai/deepstack:noavx

"-e MODE=Low" is optional (Default is medium). It's not as accurate but gives a faster result. I have noticed a few false results so far but not enough to switch back to medium.
"--restart=always" will keep it running if it stops or if the PC is restarted.
In the above example it runs on port 5000.

I'm using the beta version of DeepStack now, which doesn't require activation.


I've also modified AI Tool to have it mark up and save the positive images to a subfolder, so I don't need to use AI Tool to see what it detected. I used to have it send that image to Telegram when I was debugging, but now I have it save the images on my NAS server. (Example image attached.)
I also added a [prefix] variable to mine so I can use the following trigger URL, which will use the "Input file begins with" camera setting.

http://localhost:80/admin?camera=[prefix]&flagalert=1&memo=[summary]&user={user}&pw={password}
Nice writeup, I'll copy some information over to the setup post.

Just one thing:
I think no parent would like other people to have photos of their children, maybe even multiple photos every day on their way to school and back. The image quality maybe won't suffice for identification or something, but that's not the point. Surveilling and analyzing public areas is dangerous and corrupts every citizen's right to freedom. I'm very happy about not having to live in a real-life 1984 country like China. On top of that, most cams aren't (and partly can't be) sufficiently secured, so that potentially anybody can spy on the schoolchildren and other people. Therefore: in the future, the setup guide will kindly ask every user to do me a favor in return and not use AI Tool to analyze public spaces.
 

surfer90

n3wb
Joined
Jul 3, 2020
Messages
3
Reaction score
0
Yes, I can add a switch to turn off sending the image along.

Regarding the Raspberry Pi and the Compute Stick: what are the processing times for each?
Awesome! I'll be keeping an eye out for that then :)

I guess training DeepStack is not possible, right? For example, it is pretty convinced that there is a "broccoli" in my driveway, while it is only a round-shaped tree in the pavement :)

The processing times are around 0.6 seconds (if I interpret the timestamps below correctly: about half a second between starting the upload and getting the results)

Code:
[31.07.2020, 07:46:31.877]: Starting analysis of D:\Kamera\aiinput\gartenhd-ss.20200731_074630566.jpg
[31.07.2020, 07:46:31.895]: (1/6) Uploading image to DeepQuestAI Server
[31.07.2020, 07:46:32.402]: (2/6) Waiting for results
[31.07.2020, 07:46:32.416]: (3/6) Processing results:
[31.07.2020, 07:46:32.430]:    Detected objects:potted plant (57,96%),
[31.07.2020, 07:46:32.446]: (4/6) Checking if detected object is relevant and within confidence limits:
[31.07.2020, 07:46:32.462]:    potted plant (57,96%):
[31.07.2020, 07:46:32.514]:    potted plant (57,96%) is irrelevant.
[31.07.2020, 07:46:32.552]: (6/6) Camera Garten2 caused an irrelevant alert.
[31.07.2020, 07:46:32.586]: Adding irrelevant detection to history list.
[31.07.2020, 07:46:32.606]: 1x irrelevant, so it's an irrelevant alert.
 

GentlePumpkin

IPCT Contributor
Joined
Sep 4, 2017
Messages
193
Reaction score
321
Awesome! I'll be keeping an eye out for that then :)

I guess training DeepStack is not possible, right? For example, it is pretty convinced that there is a "broccoli" in my driveway, while it is only a round-shaped tree in the pavement :)

The processing times are around 0.6 seconds (if I interpret the timestamps below correctly: about half a second between starting the upload and getting the results)

Code:
[31.07.2020, 07:46:31.877]: Starting analysis of D:\Kamera\aiinput\gartenhd-ss.20200731_074630566.jpg
[31.07.2020, 07:46:31.895]: (1/6) Uploading image to DeepQuestAI Server
[31.07.2020, 07:46:32.402]: (2/6) Waiting for results
[31.07.2020, 07:46:32.416]: (3/6) Processing results:
[31.07.2020, 07:46:32.430]:    Detected objects:potted plant (57,96%),
[31.07.2020, 07:46:32.446]: (4/6) Checking if detected object is relevant and within confidence limits:
[31.07.2020, 07:46:32.462]:    potted plant (57,96%):
[31.07.2020, 07:46:32.514]:    potted plant (57,96%) is irrelevant.
[31.07.2020, 07:46:32.552]: (6/6) Camera Garten2 caused an irrelevant alert.
[31.07.2020, 07:46:32.586]: Adding irrelevant detection to history list.
[31.07.2020, 07:46:32.606]: 1x irrelevant, so it's an irrelevant alert.
Whoa, 0.6s is impressive for such a tiny device; I think I'll get one for testing as well.
That would probably be a monstrous broccoli :D No, training is sadly not possible; this angers me too.
1.67 does not include the Telegram switch yet, but it's coming soon.
 

OccultMonk

Young grasshopper
Joined
Jul 25, 2020
Messages
72
Reaction score
13
Location
A Mountain hilltop
Under Record, is it not possible to set the size of the JPEG? I want to trigger snapshots for AI Tool, but I don't want 4K resolution for that. I know I can use substreams, but they are a bit too low-res. 720p would be ideal, but I can only set the quality of the JPEG, not the dimensions.
 

spammenotinoz

Getting comfortable
Joined
Apr 4, 2019
Messages
345
Reaction score
276
Location
Sydney
Under Record, is it not possible to set the size of the JPEG? I want to trigger snapshots for AI Tool, but I don't want 4K resolution for that. I know I can use substreams, but they are a bit too low-res. 720p would be ideal, but I can only set the quality of the JPEG, not the dimensions.
For myself, BI is using the sub-stream for my JPEGs and the main stream for direct-to-disk recording. The snapshots are 1080p, not 4K, so BI is using the sub-stream. I am using sub-stream 2, not 1, on my assorted cameras, which are set to 1080p, 3 fps. Mine have limits on sub-stream 1, such as D1/VGA/CIF. On sub-stream 2 I can choose between CIF, 720p and 1080p. Perhaps check your cams to see if you have a sub-stream 2, and whether it has the 720p you are after.
 

surfer90

n3wb
Joined
Jul 3, 2020
Messages
3
Reaction score
0
Whoa, 0.6s is impressive for such a tiny device; I think I'll get one for testing as well.
That would probably be a monstrous broccoli :D No, training is sadly not possible; this angers me too.
1.67 does not include the Telegram switch yet, but it's coming soon.
Yeah, plus I'm running my ioBroker on that Raspberry Pi and have also connected a 170° 5 MP camera with IR-cut to it, so I'd say I'm making the best of that budget :D
 

pbc

Getting comfortable
Joined
Jul 11, 2014
Messages
1,024
Reaction score
156
Use the Windows terminal:

Example for what I used before I switched to the beta version:

Download DeepStack:
docker pull deepquestai/deepstack:noavx

Set up the container:
docker run --restart=always -e MODE=Low -e VISION-DETECTION=True -e API-KEY="" -v localstorage:/datastore -p 5000:5000 --name deepstack deepquestai/deepstack:noavx

"-e MODE=Low" is optional (Default is medium). It's not as accurate but gives a faster result. I have noticed a few false results so far but not enough to switch back to medium.
"--restart=always" will keep it running if it stops or if the PC is restarted.
In the above example it runs on port 5000.

I'm using the beta version of DeepStack now, which doesn't require activation.


I've also modified AI Tool to have it mark up and save the positive images to a subfolder, so I don't need to use AI Tool to see what it detected. I used to have it send that image to Telegram when I was debugging, but now I have it save the images on my NAS server. (Example image attached.)
I also added a [prefix] variable to mine so I can use the following trigger URL, which will use the "Input file begins with" camera setting.

http://localhost:80/admin?camera=[prefix]&flagalert=1&memo=[summary]&user={user}&pw={password}
Thanks. So if I ran "docker pull deepquestai/deepstack:noavx" in PowerShell, I assume that isn't the beta version, and therefore I need to input my API key between the quotation marks on the next line?

In this line: docker run --restart=always -e MODE=Low -e VISION-DETECTION=True -e API-KEY="" -v localstorage:/datastore -p 5000:5000 --name deepstack deepquestai/deepstack:noavx

Where it shows -p 5000:5000, if I use port 81, would it be 8081:8081?

Some folks have entered the timezone; is that not necessary?

Oh, and also, regarding the "noavx" tag: if I'm running an i7-6700, do I still use noavx? I thought that was for older processors.
 

pbc

Getting comfortable
Joined
Jul 11, 2014
Messages
1,024
Reaction score
156
@GentlePumpkin, I don't seem to be getting a notification on alerts now; are these the right settings? I.e., if I check "Motion zones", I end up with the usual array of false alerts, since every time motion is triggered it sends an alert.

But if I turn off "Motion zones", I get zero alerts to my email and phone, even though AI Tool is showing pictures and I'm getting a "Flagged" notification in BI when it detects a human. It's just not triggering an alert.

(screenshot attached)
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,308
Reaction score
3,293
Location
United Kingdom
The 5000:5000 refers to the external and internal ports that DQ will respond to and use internally.

So if you want to access DQ using port 81 you would use 81:5000; if you want to access DQ using 8081, then use 8081:5000.
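Concretely, only the left-hand number of the mapping changes. A sketch assuming you want DeepStack reachable on port 81, using the noavx image from earlier in the thread:

Code:
# remove any old container first: docker rm -f deepstack
docker run --restart=always -e VISION-DETECTION=True -v localstorage:/datastore -p 81:5000 --name deepstack deepquestai/deepstack:noavx

AI Tool (or your browser) would then point at http://localhost:81, while DeepStack still listens on 5000 inside the container.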
 

spammenotinoz

Getting comfortable
Joined
Apr 4, 2019
Messages
345
Reaction score
276
Location
Sydney
What 4K PoE cameras support higher substreams?
Dahua 8 MP definitely do.
My 4 MP Hikvisions also do; in addition to the sub-stream they have 3rd, 4th and custom streams. On mine these all support 1080p. Perhaps the 4K Hikvisions can just do 720p.
 