[tool] [tutorial] Free AI Person Detection for Blue Iris

@Tinbum - A bug in the Pi version? Are you using the beta? See what happens when you run in a different mode. I think the switch is something like -MODE=High, Medium, or Low. I seem to recall from looking into the Python code that those switches may actually resize the image to a different resolution on the DeepStack side of things.
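
If that's the case, starting detection with the mode set explicitly might look something like this, following the Pi CLI syntax used later in this thread. I haven't verified the flag on the Pi build, so treat the MODE name and the quoting as assumptions:

sudo deepstack start "VISION-DETECTION=True" "MODE=Medium"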

I was wondering if it's a bug in the Pi version, but no one else on here has mentioned it. I guess it doesn't really matter that much when running the 1.67 AITool, but once you start running it in your version, the dynamic masks will be in two different positions for the same object depending on whether a desktop or a Pi analysed the image.

The Pi, I think, has only the one version of the DeepStack software, which is actually an alpha. I will probably try emailing the developers, as I can't yet post on the forum.
 
Tried out my Pi4 2GB RAM with Intel NCS2 and I have to say I'm pretty impressed. Unfortunately there is no easy way to get the service to auto-start on boot or anything just yet, but the processing times are very quick.

Image processing times are between 66ms (640x352 image from my Reolink substream) and 160ms (1920x1080 image from my EZVIZ spotlight camera). Why are my times so much faster than others'? I have it plugged into the USB 3 port, if others haven't been.
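
If anyone wants to compare numbers directly, you can time a single round trip to DeepStack's detection endpoint with curl. A sketch: raspberrypi.local, port 80, and test.jpg are placeholders for your own host, port, and snapshot:

# time one detection request end to end
curl -s -o /dev/null -w "total: %{time_total}s\n" \
  -F image=@test.jpg \
  http://raspberrypi.local:80/v1/vision/detection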
 

Attachments

  • Intel NCS2.png
Tried out my Pi4 2GB RAM with Intel NCS2 and I have to say I'm pretty impressed. Unfortunately there is no easy way to get the service to auto-start on boot or anything just yet, but the processing times are very quick.

Are your detected images shown in the correct place?

I used the USB 3 port. I couldn't get it to install on a Pi 4, so I copied an SD card from my 3B+.



To add a startup script at boot:

sudo crontab -e

Select nano if you are prompted to choose an editor.
Add a line at the end of the file that reads like this:
@reboot sudo deepstack start "VISION-DETECTION=True"

Save and exit. In nano, you do that by hitting CTRL + X, answering Y and hitting Enter when prompted.
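
To confirm the entry took, list the root crontab afterwards:

sudo crontab -l

The @reboot line should appear at the end of the output.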
 
Hi y'all! My AITool worked great for about a week, but now I'm finding I have to restart the program daily, at random times, due to inactivity. It doesn't crash; it just doesn't deliver any more images to DQAI, despite there being triggers from BI into my input folder. I checked the log, and around the time it quits working there's lots of this:
[08.09.2020, 15:03:30.675]: Loading object rectangles...
[08.09.2020, 15:03:30.681]: 0 - 378, 84, 391, 103
[08.09.2020, 15:03:30.685]: Done.
[08.09.2020, 15:03:30.909]: Loading object rectangles...
[08.09.2020, 15:03:30.913]: 0 - 379, 84, 390, 103
[08.09.2020, 15:03:30.917]: Done.
[08.09.2020, 15:03:31.106]: Loading object rectangles...
[08.09.2020, 15:03:31.111]: 0 - 379, 84, 390, 103
[08.09.2020, 15:03:31.116]: Done.
[08.09.2020, 15:03:31.288]: Loading object rectangles...
[08.09.2020, 15:03:31.295]: 0 - 379, 84, 390, 103
[08.09.2020, 15:03:31.299]: Done.
[08.09.2020, 15:03:31.410]: Loading object rectangles...
[08.09.2020, 15:03:31.414]: 0 - 379, 84, 390, 103
[08.09.2020, 15:03:31.418]: Done.
[09.09.2020, 08:17:55.508]: Loading object rectangles...
[09.09.2020, 08:17:55.514]: 0 - 379, 84, 390, 103
[09.09.2020, 08:17:55.521]: Done.
[09.09.2020, 08:20:15.434]: Loading object rectangles...
[09.09.2020, 08:20:15.439]: 0 - 379, 84, 390, 103
[09.09.2020, 08:20:15.443]: Done.
[09.09.2020, 08:20:18.252]: Loading history list from cameras/history.csv ...
[09.09.2020, 08:20:20.641]: Loading object rectangles...

DeepStack is accessible and working.
Machine: Win10 (Intel 4770K, 32 GB RAM)

Any help would be appreciated. The only way I can get it to work again is to restart the program. It doesn't crash, and obviously it is doing something, just not its main task.
 
Tried out my Pi4 2GB RAM with Intel NCS2 and I have to say I'm pretty impressed. Unfortunately there is no easy way to get the service to auto-start on boot or anything just yet, but the processing times are very quick.

Image processing times are between 66ms (640x352 image from my Reolink substream) and 160ms (1920x1080 image from my EZVIZ spotlight camera). Why are my times so much faster than others'? I have it plugged into the USB 3 port, if others haven't been.
When sending images to the Pi with the NCS2, are these times under heavy load? I.e., how many cameras/images are you sending to be analyzed? The reason I ask is that I have 11 outdoor cameras, of which about 3-6 overlap at any given time, so AI Tool can throw multiple images at DeepStack. Just wondering if the Pi and NCS2 are up to this task? I do have my images pulling from the sub stream with a resolution of 680x420 or 1280x720, depending on the cam.
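
One way to gauge that before buying is to fire several overlapping requests at DeepStack and watch how the times hold up. A sketch, with the host, port, and snap.jpg as placeholders for your own setup:

# send 6 detection requests in parallel, roughly simulating overlapping cameras
for i in $(seq 1 6); do
  curl -s -o /dev/null -w "request $i: %{time_total}s\n" \
    -F image=@snap.jpg \
    http://raspberrypi.local:80/v1/vision/detection &
done
wait   # block until all background requests finish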
 
Are your detected images shown in the correct place?

I used the USB 3 port. I couldn't get it to install on a Pi 4, so I copied an SD card from my 3B+.



To add a startup script at boot:

sudo crontab -e

Select nano if you are prompted to choose an editor.
Add a line at the end of the file that reads like this:
@reboot sudo deepstack start "VISION-DETECTION=True"

Save and exit. In nano, you do that by hitting CTRL + X, answering Y and hitting Enter when prompted.

Legend, that startup script worked. The boxes are in the correct places on the images when reviewing them in the AI Tool.

When sending images to the Pi with the NCS2, are these times under heavy load? I.e., how many cameras/images are you sending to be analyzed? The reason I ask is that I have 11 outdoor cameras, of which about 3-6 overlap at any given time, so AI Tool can throw multiple images at DeepStack. Just wondering if the Pi and NCS2 are up to this task? I do have my images pulling from the sub stream with a resolution of 680x420 or 1280x720, depending on the cam.

Those times were me manually triggering the streams, so no, not a heavy load. Something seems to be triggering multiple images from the 1080p camera though, and I'm getting one every second during motion, but they are getting processed faster than they are being generated.
 
Legend, that startup script worked. The boxes are in the correct places on the images when reviewing them in the AI Tool.



Those times were me manually triggering the streams, so no, not a heavy load. Something seems to be triggering multiple images from the 1080p camera though, and I'm getting one every second during motion, but they are getting processed faster than they are being generated.
Thanks for this info. Can you report back after testing under a heavy load? I'm still deciding whether to buy another machine to run DeepStack on or a Pi with an NCS2. For most of my cameras I have BI taking a snapshot every 3-5 seconds, depending on camera location. My response times are anywhere from 600 ms to 1.5 s. This is of course dependent on how many cameras are triggered, etc.
 
Legend, that startup script worked. The boxes are in the correct places on the images when reviewing them in the AI Tool.

Would you mind going through how you installed it on the Pi 4? I couldn't get it installed even from a totally fresh install.
 
Would you mind going through how you installed it on the Pi 4? I couldn't get it installed even from a totally fresh install.

Fresh install of Buster Lite (with desktop). Then I did the update and upgrade process.

I followed the instructions here: Using DeepStack on Raspberry PI - ALPHA

Via SSH I ran:
wget
and then I had to run bash install-deepstack.sh, and it worked.

Started it up, entered my license key, plugged in the NCS2. That's it.
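
For anyone following along, the whole sequence is roughly the following. The installer URL is whatever the DeepStack Pi ALPHA thread links to; <installer-url> below is a placeholder, not the real address:

sudo apt update && sudo apt full-upgrade -y    # the update and upgrade step
wget <installer-url> -O install-deepstack.sh   # placeholder: use the link from the ALPHA thread
bash install-deepstack.sh                      # run the installer
sudo deepstack start "VISION-DETECTION=True"   # then start detection as above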
 
So I am new to this forum. I set up my config using The Hookup's YouTube video; however, I want to see how to make this system faster and better. DeepStack is running on my Home Assistant box, a Dell 7010 (i5-3470S, 8GB RAM) running Ubuntu. My Blue Iris Windows machine is an i7-3770 with 32GB RAM and an Nvidia 1050Ti. When I tried to migrate DeepStack over to the Windows machine, I noticed that the speed was slower. The speed I am seeing on the Home Assistant box is a total time of 1256ms.

I have migrated to VorlonCD's mod of the AI Tool and I am loving the extra data. I do have a few questions. Normally I run the AI Tool as a service, but with the updated mod I love seeing the speed data; however, when I launch the exe it won't show the data due to another instance being run. Makes sense, so how is everyone auto-running the program? I also do not want to have to log in to get the program to run.

Another question: is anyone using multiple DeepStack instances? How does it perform? Any benefit? I'm trying to make the image processing faster.

Another question: has anyone got DeepStack GPU working on a Windows machine?
 
Fresh install of Buster Lite (with desktop). Then I did the update and upgrade process.

I followed the instructions here: Using DeepStack on Raspberry PI - ALPHA

Via SSH I ran:
wget
and then I had to run bash install-deepstack.sh, and it worked.

Started it up, entered my license key, plugged in the NCS2. That's it.

Thanks, that worked. The only difference from what I had been doing was the Lite version.
 
I have migrated to VorlonCD's mod of the AI Tool and I am loving the extra data. I do have a few questions. Normally I run the AI Tool as a service, but with the updated mod I love seeing the speed data; however, when I launch the exe it won't show the data due to another instance being run. Makes sense, so how is everyone auto-running the program? I also do not want to have to log in to get the program to run.

Basically, if you want to view the UI for a while or change settings, you have to stop the service, start the app manually, and then restart the service when you're finished. Or, disable the service, set the app to run on startup, and configure Windows to automatically log into that machine.

I'm working on making it a true service, but slow going.
 
Basically, if you want to view the UI for a while or change settings, you have to stop the service, start the app manually, and then restart the service when you're finished. Or, disable the service, set the app to run on startup, and configure Windows to automatically log into that machine.

I'm working on making it a true service, but slow going.
I think your tool adds great value. Awesome job!
 
I got it working using Docker pulling from deepquestai/deepstack:gpu
That's interesting. I had tried the beta and it didn't work, but I just tried your suggestion, deepquestai/deepstack:gpu, and that works.

Speed doesn't seem any better though.

Edit: times do seem better, but it's processing a number of images and then stopping.
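
For reference, the run command I'd expect for the GPU image is along these lines. A sketch, assuming the NVIDIA container toolkit is installed; the port mapping and volume name are up to you:

# run the GPU build with object detection enabled, API exposed on port 80
docker run --gpus all -e VISION-DETECTION=True \
  -v localstorage:/datastore -p 80:5000 \
  deepquestai/deepstack:gpu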
 
Update: This doesn't seem to be an AI Tool issue, or even an "issue", really. I noticed in BI that I have my motion sensor's "min. object size" set to about 250. So by the time the little dog gets to the middle of my yard, the dog is smaller than the min object size. It was simple. :-)

So getting back to AI Tool for a moment, hopefully this is a simple question to answer.

Background: I have a single camera, I'm running the VorlonCD mod v1.67.8.35473, recording the sub-stream 24x7, sending sub-stream JPGs to a folder for DeepStack and AI Tool to process, and then sending a trigger to the HD stream to start recording if a person is detected. (I should just put all that in my sig! :lol: ) That's been working pretty well so far.

I'm now experimenting with detecting dogs and cats (just checked the boxes in AI Tool last night).

Fortuitously, a little doggy roamed into my yard this morning, and I got an HD recording of it. However, the recording stopped after 23 seconds, even though the dog was clearly still right in the middle of the camera's view, just walking away. And admittedly, it's a little dog... maybe weighs 5 lbs at most.

So, the question is: why did BI stop recording while the dog was still roaming around in the yard?

Thanks!
 