[tool] [tutorial] Free AI Person Detection for Blue Iris

barnyard

n3wb
Joined
Aug 9, 2020
Messages
24
Reaction score
5
Location
United States
Ah, ok. I see it now, just thought it was a white hydrant.

Sent from my Pixel 3 using Tapatalk
 

pmcross

Pulling my weight
Joined
Jan 16, 2017
Messages
371
Reaction score
185
Location
Pennsylvania
Yes, it's a small fire hydrant on the right-hand side, painted white. Here's the actual mask image -->
Is this the mask image from AI Tool? The reason I ask is that the masked area should show as black, not white. I believe you have the mask inverted. When you open the actual .png image of the mask that AI Tool uses, you should only see the masked area and not the image/background.
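If the mask really is inverted (white where it should be black), one quick way to flip it, assuming ImageMagick is installed and the mask is a plain black/white PNG, is:

convert mask.png -negate mask_fixed.png
# -negate swaps black and white; mask.png / mask_fixed.png are placeholder names for this example
# rename the result to your camera name before dropping it in the AI Tool\camera directory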

Here is the image (attached) with the mask; you name it after your camera and put it in the AI Tool\camera directory:


Here is the camera in AI Tool with the mask option enabled:
 


Tinbum

Pulling my weight
Joined
Sep 5, 2017
Messages
446
Reaction score
126
Location
UK
@Tinbum - I would prop the Pi up a little, let it see outside the window. Poor little guy probably feels neglected sitting over there in the corner, half covered in the National Geographic magazines. Or is that just mine? :) (only used as a 'PiHole')

Hmm, do you have the BI resize image option enabled? If so, try disabling it to see if anything changes.

The DPI scaling on your monitor could be a factor. See if it happens when you set it to 100% DPI. Or maybe play with AITOOL shortcut > Compatibility tab > DPI settings.

Are you VNC'd or remote desktopped in to view the image? If so, see if that is a factor.
No resizing in BI, and I can't understand it, as it's fine when analyzed by DeepStack on a desktop but not by the Pi.

[screenshots attached]
 


Chris Dodge

Pulling my weight
Joined
Aug 9, 2019
Messages
90
Reaction score
114
Location
massachusetts
@Tinbum - A bug in the Pi version? Are you using the beta? See what happens when you run it in a different mode. I think the switch is something like -MODE=High or medium or low. I seem to recall, from looking into the Python code, that those switches may actually resize the image to a different resolution on the DeepStack side of things.
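For what it's worth, on the Docker builds of DeepStack that mode is normally set through a MODE environment variable (High / Medium / Low, with Medium the default); whether the Pi/NCS2 alpha accepts the same switch is only an assumption here:

docker run -e VISION-DETECTION=True -e MODE=Medium -v localstorage:/datastore -p 80:5000 deepquestai/deepstack
# per the speculation above, these modes appear to change the resolution DeepStack works at internally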
 

Tinbum

Pulling my weight
Joined
Sep 5, 2017
Messages
446
Reaction score
126
Location
UK
@Tinbum - A bug in the pi version? Are using beta? See what happens when you run in a different mode. I think the switch is something like -MODE=High or medium or low. I seem to recall looking into the python code, that those switches may actually resize the image to a different resolution on the deepstack side of things
I was wondering if it's a bug in the Pi version, but no one else on here has mentioned it. I guess it doesn't really matter that much when running the 1.67 AITool, but when you start running it in your version the dynamic masks will be in two different positions for the same object, depending on whether it's a desktop or a Pi analysing the image.

The Pi, I think, has only the one version of the DeepStack software, which is actually an alpha. I will probably try emailing the developers, as I can't yet post on the forum.
 

Eatoff

n3wb
Joined
Aug 28, 2020
Messages
19
Reaction score
3
Location
Australia
Tried out my Pi4 2GB RAM with the Intel NCS2 and I have to say I'm pretty impressed. Unfortunately there is no easy way to get the service to auto start on boot or anything just yet, but the processing times are very quick.

Image processing times are between 66ms (640x352 image from my Reolink substream) and 160ms (1920x1080 image from my EZVIZ spotlight camera). Why are my times so much faster than others? I have it plugged into the USB3 port, if others haven't been.
 


Tinbum

Pulling my weight
Joined
Sep 5, 2017
Messages
446
Reaction score
126
Location
UK
Tried out my Pi4 2GB RAM with Intel NCS2 and i have to say i'm pretty impressed. Unfortunately there is no easy way to get the service to auto start on boot or anything just yet, but the processing times are very quick.
Are your detected images shown in the correct place?

I used the USB 3 port. I couldn't get it to install on a Pi 4, so I copied an SD card from my 3B+.
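(If anyone wants to clone a card the same way, a minimal sketch from a Linux box, where /dev/sdX and /dev/sdY are placeholders for the actual card-reader devices:

sudo dd if=/dev/sdX of=pi3-backup.img bs=4M status=progress
# image the working 3B+ card to a file
sudo dd if=pi3-backup.img of=/dev/sdY bs=4M status=progress
# write that image to the card destined for the Pi 4
)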



To add a startup script at boot:

sudo crontab -e

Select nano if you are prompted to choose an editor.
Add a line at the end of the file that reads like this:
@reboot sudo deepstack start "VISION-DETECTION=True"

Save and exit. In nano, you do that by hitting CTRL + X, answering Y and hitting Enter when prompted.
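To double-check it after a reboot (assuming the default Pi install is listening on port 80; adjust the port if yours differs, and snapshot.jpg is just any local test image):

sudo crontab -l
# confirm the @reboot line is still in the root crontab
curl -s -X POST -F image=@snapshot.jpg http://localhost:80/v1/vision/detection
# a JSON reply with "success":true and a predictions list means DeepStack came up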
 

tadoan

n3wb
Joined
Jul 2, 2020
Messages
2
Reaction score
0
Hi y'all! My AITOOL worked great for about a week, but now I'm finding I have to restart the program daily at random times due to inactivity. It doesn't crash, it just doesn't deliver images to DQAI anymore, despite there being triggers from BI into my input folder. I checked the log, and around the time it quits working there's lots of this:
[08.09.2020, 15:03:30.675]: Loading object rectangles...
[08.09.2020, 15:03:30.681]: 0 - 378, 84, 391, 103
[08.09.2020, 15:03:30.685]: Done.
[08.09.2020, 15:03:30.909]: Loading object rectangles...
[08.09.2020, 15:03:30.913]: 0 - 379, 84, 390, 103
[08.09.2020, 15:03:30.917]: Done.
[08.09.2020, 15:03:31.106]: Loading object rectangles...
[08.09.2020, 15:03:31.111]: 0 - 379, 84, 390, 103
[08.09.2020, 15:03:31.116]: Done.
[08.09.2020, 15:03:31.288]: Loading object rectangles...
[08.09.2020, 15:03:31.295]: 0 - 379, 84, 390, 103
[08.09.2020, 15:03:31.299]: Done.
[08.09.2020, 15:03:31.410]: Loading object rectangles...
[08.09.2020, 15:03:31.414]: 0 - 379, 84, 390, 103
[08.09.2020, 15:03:31.418]: Done.
[09.09.2020, 08:17:55.508]: Loading object rectangles...
[09.09.2020, 08:17:55.514]: 0 - 379, 84, 390, 103
[09.09.2020, 08:17:55.521]: Done.
[09.09.2020, 08:20:15.434]: Loading object rectangles...
[09.09.2020, 08:20:15.439]: 0 - 379, 84, 390, 103
[09.09.2020, 08:20:15.443]: Done.
[09.09.2020, 08:20:18.252]: Loading history list from cameras/history.csv ...
[09.09.2020, 08:20:20.641]: Loading object rectangles...

Deepstack is accessible and working.
machine: Win10 (Intel4770k, 32gb RAM)

Any help would be appreciated. The only way I can get it to work again is to restart the program. It doesn't crash and obviously is doing something, just not its main task.
 

pmcross

Pulling my weight
Joined
Jan 16, 2017
Messages
371
Reaction score
185
Location
Pennsylvania
Tried out my Pi4 2GB RAM with Intel NCS2 and i have to say i'm pretty impressed. Unfortunately there is no easy way to get the service to auto start on boot or anything just yet, but the processing times are very quick.

Image processing times are between 66ms ( 640x352 image from my reolink substream) and 160ms (1920x1080 image from my EZVIZ spotlight camera). Why are my times so much faster than others? I have it plugged into the USB3 port if others havent been.
When sending images to the Pi with the NCS2, are these times under heavy load, i.e. how many cameras/images are you sending to be analyzed? The reason I ask is that I have 11 outdoor cameras, of which about 3-6 overlap at any given time, so AI Tool can throw multiple images at DeepStack. Just wondering if the Pi and NCS2 are up to this task? I do have my images pulling from the sub stream with a resolution of 680x420 or 1280x720 depending on the cam.
 

Eatoff

n3wb
Joined
Aug 28, 2020
Messages
19
Reaction score
3
Location
Australia
Are your detected images shown in the correct place?

I used usb 3 port. I couldn't get it to install on a 4 so copied an sd card from my 3B+.



To add start up script at boot

sudo crontab -e

Select nano if you are prompted to ask for an editor.
Add a line at the end of the file that reads like this:
@reboot sudo deepstack start "VISION-DETECTION=True"

Save and exit. In nano, you do that by hitting CTRL + X, answering Y and hitting Enter when prompted.
Legend, that startup script worked. The boxes are in the correct places on the images when reviewing them in the AI Tool.

When sending images to the PI with the NCS2, are these times under heavy load, i.e. how many cameras/images are you sending to be analyzed? The reason that I ask is because I have 11 outdoor cameras which about 3-6 that overlap at any given time so AI Tool can throw multiple images to Deepstack. Just wondering if the Pi and NCS2 is up to this task? I do have my images pulling from sub stream with a resolution of 680x420 or 1280X720 depending on the cam.
Those times were me manually triggering the streams, so no, not a heavy load. Something seems to be triggering multiple images from the 1080p camera though, and I'm getting one every second during motion, but they are getting processed faster than they are being generated.
 

pmcross

Pulling my weight
Joined
Jan 16, 2017
Messages
371
Reaction score
185
Location
Pennsylvania
Legend, that startup script worked. The boxes are in the correct places on the images when reviewing them in the AI Tool.



Those times were me manually triggering the streams, so no, not a heavy load. Something seems to be triggering multiple images from the 1080 camera though, and i'm getting one every second during motion, but they are getting processed faster than they are being generated.
Thanks for this info. Can you report back after testing under a heavy load? I'm still deciding between buying another machine to run DeepStack on or buying a Pi with an NCS2. For most of my cameras I have BI taking a snapshot every 3-5 seconds depending on camera location. My response times are anywhere from 600 ms to 1.5 s. This is of course dependent on how many cameras are triggered, etc.
 

Tinbum

Pulling my weight
Joined
Sep 5, 2017
Messages
446
Reaction score
126
Location
UK
Legend, that startup script worked. The boxes are in the correct places on the images when reviewing them in the AI Tool.
Would you mind going through how you installed it on the Pi 4, as I couldn't get it installed even from a totally new install?
 

Eatoff

n3wb
Joined
Aug 28, 2020
Messages
19
Reaction score
3
Location
Australia
Would you mind going through how you installed it on the Pi 4 as I couldn't get it installed even from a totally new install..
Fresh install of Buster Lite (with desktop). Then did the update and upgrade process.

I followed the instructions here - Using DeepStack on Raspberry PI - ALPHA

Via SSH I ran:
wget
and then I had to bash install-deepstack.sh and it worked.

Started it up, entered my license key, plugged in the NCS2. That's it.
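For anyone following along, the rough shape of that install (the script URL was trimmed out of the post above, so grab it from the "Using DeepStack on Raspberry PI - ALPHA" page; the placeholder below stands in for it):

sudo apt update && sudo apt upgrade -y
# the update/upgrade step mentioned above
wget <install-script-url> -O install-deepstack.sh
# <install-script-url> is a placeholder; the real link is on the DeepStack Pi page
bash install-deepstack.sh
sudo deepstack start "VISION-DETECTION=True"
# then start detection, same command as in the crontab tip earlier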
 

Tinbum

Pulling my weight
Joined
Sep 5, 2017
Messages
446
Reaction score
126
Location
UK

bat1939

n3wb
Joined
Sep 9, 2020
Messages
11
Reaction score
2
Location
United States
So I am new to this forum. I set up my config using The Hookup's YouTube video, however I want to see how to make this system faster and better. DeepStack is running on my Home Assistant box, a Dell 7010 (i5-3470S, 8GB RAM) running Ubuntu. My Blue Iris Windows machine is an i7-3770 with 32GB RAM and an Nvidia 1050 Ti. When I tried to migrate DeepStack over to the Windows machine, I noticed that the speed was slower. The speed I am seeing on the Home Assistant box is a total time of 1256 ms. I have migrated to VorlonCD's mod of the AI Tool and I am loving the extra data. I do have a few questions. Normally I run the AI Tool as a service, but with the updated mod I love seeing the speed data; however, when I launch the exe it won't show the data due to another instance being run. Makes sense, so how is everyone auto-running the program? I also do not want to have to log in to get the program to run.

Another question: is anyone using multiple DeepStack instances? How does it perform? Any benefit? Trying to make the processing of the images faster.

Another question: has anyone got DeepStack GPU working on a Windows machine?
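Not an answer from the devs, but one generic way to auto-run a Windows program at boot without logging in is to wrap the exe as a service with NSSM. The service name and path below are made up for the example, and note that a service instance and a manually launched exe generally can't run side by side, which matches the behaviour described above:

nssm install AITool "C:\AITOOL\AITOOL.exe"
nssm set AITool AppDirectory "C:\AITOOL"
nssm start AITool
# remove it later with: nssm remove AITool confirm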
 

Tinbum

Pulling my weight
Joined
Sep 5, 2017
Messages
446
Reaction score
126
Location
UK
Fresh install of buster lite (with desktop). Then did the update and upgrade process.

i followed the instructions here - Using DeepStack on Raspberry PI - ALPHA

via SSH i ran:
wget
and then i had to bash install-deepstack.sh and it worked.

started it up, entered my license key. plugged in NCS2. Thats it.
Thanks, that worked; the only difference from what I had been doing was the Lite version.
 