[tool] [tutorial] Free AI Person Detection for Blue Iris

As long as you are using deepquestai/deepstack:latest it should always pull the newest version. That is my understanding. I use it on Windows, however, so if you are doing anything else that may be incorrect.
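In practice the update flow looks something like this (a sketch; the port mapping and volume name are just the ones used elsewhere in this thread):

```shell
# ':latest' is only re-resolved when you explicitly pull; a plain 'docker run'
# will happily reuse a locally cached image, so pull first to get the newest build
docker pull deepquestai/deepstack:latest

# then recreate the container so it actually runs the freshly pulled image
docker run -d -e VISION-DETECTION=True \
  -v localstorage:/datastore -p 80:5000 \
  deepquestai/deepstack:latest
```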

Yep, that's what I use in my Docker Pull command.
 
Got this up and running, thank you! Took me a bit of effort to figure out the terminology for the mask portion, but I got that sorted out.

I did however notice a HUGE performance difference between the DeepStack Windows and DeepStack Docker installs. The Windows DeepStack would continuously max out the CPU (I'd jump from 12% idle to 100% CPU utilization on 6 cores @ 1.8 GHz) and was also kind of sluggish processing images, whereas the Docker CPU container was quick and rarely exceeds 20% of 2 cores @ 1.8 GHz.
 

I saw the same thing; the Windows version pegged the CPU so hard that it was messing up the recordings. They would get stuck on one frame for the entire duration of the actual processing.
 
How many cores and what kind of processor are you guys running for it to max out the CPU when not using Docker? I just recently got a custom PC built and I'm curious to know if I will have the same issues with the CPU getting maxed out.

Sent from my SM-G965U using Tapatalk
 
Okie dokie. Believe it or not, I am trying... trying SO HARD... to do some custom models for DeepStack. I KNOW, RIGHT? WTH is this guy thinking?! Seriously, here is where I am at, and I hope someone here is already doing this and can help me.
I have fumbled through the following:
Preparing Your Dataset
Step 1: Install LabelIMG
Step 2: Organize Your Dataset
Step 3: Run LabelIMG
Change Annotation to YOLO Format
Step 4: Annotate Your Dataset
Annotate Your Train/Test Dataset
Cloned the Google Colab notebook,
copied it to my Google Drive, mounted it, and uploaded and unzipped my dataset folders.
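For anyone following along, the layout and label format those steps produce look roughly like this (file names are hypothetical; LabelImg in YOLO mode writes one .txt per image):

```shell
# YOLO-format dataset layout the training notebook expects:
#   dataset/train/  img001.jpg + img001.txt, ...
#   dataset/test/   img050.jpg + img050.txt, ...
mkdir -p dataset/train dataset/test

# Each line of a label file is: <class-id> <x-center> <y-center> <width> <height>,
# with coordinates normalized to 0-1 relative to the image size
printf '0 0.512 0.430 0.210 0.380\n' > dataset/train/img001.txt
cat dataset/train/img001.txt
```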

Now I am stuck. Do I need to create a new cell and run some code or something so it "trains"? I am not seeing any model files or .pth files, so either I just think I did everything right to this point, or I just don't understand where to go from here.

As always TIA for any help.
Bumping my own post to say THANKS John Olafenwa for the help in getting the DeepStack custom training model up and running. It's currently humming away, training on my ugly mug.
For those that are interested, John has put out a video on the process: DeepStack Object Detection Guide - YouTube
 
How many cores and what kind of processor are you guys running for it to max out the CPU when not using Docker? I just recently got a custom PC built and I'm curious to know if I will have the same issues with the CPU getting maxed out.

In my case it was a 4th gen i7-4770K. The native Windows version would peg it; the Docker for Windows version doesn't.
 
Thanks, glad to hear that, money well spent! I was trying to future-proof as much as possible and keep the performance level as high as possible. I was thinking of running all my home automation and security cameras on it, so I wanted to make sure it could handle a good number of cameras at high MP per camera; still have to be careful, I guess. Just on the fence about what system to go with. I looked at Blue Iris, still looking though. I'm all about AI keeping these false alarms down. I had cameras in the past with terrible false alarms, even with expensive cameras.

 
Yeah, you can't go wrong with that CPU. 10 cores / 20 threads makes it very good for this task. I am also running Blue Iris. Having run both versions, I would still install Docker for Windows and run one of the Docker builds of DeepStack. It's updated far more frequently.
 
@cscoppa thanks for the advice. I haven't used Docker for Windows, so I may need your guidance for that. It seems like Docker has grown greatly. Works great on some things, I guess it depends who you ask.

 
@Tinbum / @Village Guy in the Windows version of AI Tool you can select one or all of the different detection models from the DeepStack tab. When you run the Docker version that tab is gone, so how would you tell it to run more than one model? Do you just run
sudo docker run -e XXXX-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack
multiple times, changing XXXX to each model you want it to use?
Thanks.
 
About to jump into the DeepStack AI and AI Tool game to help knock down false alerts above and beyond what simple motion detection alone can do.
Some questions.
Currently, my Blue Iris server runs on an i5-9700K CPU, direct-to-disc 24/7 continuous for 14-16 cameras, an 8TB video storage HD, a 1TB M.2 SSD main drive, and 16 GB of optimized RAM, leaving me with around 35%-45% CPU usage.
What CPU usage could one expect with the use of DeepStack on the same machine, and for what duration?
Windows vs Docker: which version is recommended when it comes to ease of use, resources used, upgradeability, and functionality? I've never had issues with Windows 10 for my server, so I am comfortable with its use. I have actually never dabbled with Docker in any sense or form. In the above YouTube video, FamilyTechExec runs the Windows version without batting an eye. I bring this up due to an earlier post mentioning that the Docker version might get more updates than the Windows version.
 

What GPU do you have? The integrated 630? Adding a discrete GPU or a dedicated Jetson could probably help a lot with so many cameras.
I haven't tried your setup, but that is not a very beefy CPU for AI vision tasks, and integrated GPUs tend to be weak as well... Maybe approaching 1 sec response time for sequential 2MP images? (Just a guess.)

But you might as well wait a couple of days for them to release the updated native Windows version and try that first. It should have similar speeds to the Docker version, I would think (<1s per image, possibly much less). If the speed is no good, try the Docker version. Configure them to max the processor usage during the 'busy' motion times (adjust resolution, seconds per frame, number of cameras, etc.) and see if that is acceptable for you, or whether you need a GPU / Jetson Nano / etc. to meet your needs.

It sounds like Windows is not a huge market for them, and obviously hasn't been a major focus of their development (so fewer updates, likely), but there is reason to be excited about the soon-to-be-released version with Windows GPU support, for simplicity with BI.
 
balucanb said:
@Tinbum / @Village Guy in the Windows version of AI Tool you can select one or all of the different detection models from the DeepStack tab. When you run the Docker version that tab is gone, so how would you tell it to run more than one model? Do you just run
sudo docker run -e XXXX-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack
multiple times, changing XXXX to each model you want it to use?
Thanks.
I'm personally not familiar with running more than one version of DeepStack simultaneously. That said, I suspect your proposal would return an error complaining that the port is already in use. Each version would need its own port to be defined. Not sure how AI Tool would handle that scenario.
 
I think he is speaking of the AI models (object detection / face / scene recognition), in which case you'd probably run the command again on another port, or add all the models into one line, using the same format (VISION-DETECTION=True, or whatever model you want). It should come up on the command line once it starts, listing all the APIs in use.
Not sure how it works with multiple models, but I could see objects and faces being used simultaneously for BI usage.
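Both approaches described above would look something like this (a sketch; whether DeepStack accepts these -e flags together in one container is an assumption, and the port numbers are arbitrary):

```shell
# Option 1: one container with several models enabled at once
docker run -d -e VISION-DETECTION=True -e VISION-FACE=True \
  -v localstorage:/datastore -p 8383:5000 \
  deepquestai/deepstack

# Option 2: one container per model, each published on its own host port
# (two containers cannot share the same host port, hence 8383 vs 8384)
docker run -d -e VISION-DETECTION=True -v localstorage:/datastore -p 8383:5000 deepquestai/deepstack
docker run -d -e VISION-FACE=True      -v localstorage:/datastore -p 8384:5000 deepquestai/deepstack
```

With Option 2, AI Tool would need to be pointed at each port separately, which is the open question in this thread.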
 
@Village Guy, @cjowers Thanks for replying. I agree it may not be possible to run more than one version. Village Guy, I suspect you may be correct, or more than likely I am doing something wrong. Edit: pretty positive I'm doing something wrong, since I only have about 2-3 weeks of experience with Docker now. :) I have been working on a custom detection model using the instructions just put out, and when trying to deploy it I am indeed getting errors. I have a working vision-detection model running on Docker Desktop using port 8383; I tried to run this new one on the same port and it threw an error (the working one was running when I tried). I guess you can't run two instances on the same port? I stopped it and tried again, same error. I then tried port 80, which also errored out. There could very well be something else going on, since I am pretty clueless concerning Docker. If anyone thinks they can figure it out, I took a screenshot of what was going on; see the attachment. Thanks.

Update: As I suspected, the problem was the user (me). @johnolafenwa looked at my code from the attachment and I had missed a keystroke: there should be a dash between cpu and 2020, so it should be cpu-2020.12. Still waiting to see if I can run more than one model at the same time...
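With the corrected tag, deploying the custom model on its own port (so it doesn't clash with the vision-detection container already on 8383) would look roughly like this. The /modelstore/detection mount point and the custom endpoint path are how the DeepStack docs describe custom-model deployment, as far as I know; the host folder path is hypothetical:

```shell
# corrected image tag: cpu-2020.12 (note the dash between 'cpu' and '2020')
# trained custom model files go in a host folder mounted at /modelstore/detection
docker run -d \
  -v /my/custom-models:/modelstore/detection \
  -p 8384:5000 \
  deepquestai/deepstack:cpu-2020.12

# the model should then be served at /v1/vision/custom/<model-name>
```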
 

Attachments

  • custom model problem.JPG
I have BI on a PC with 10 cameras and the CPU working under 12%. When I use the AI, the CPU goes to 40-80% to analyze the motion, and after that back down to 8-12%. I don't have a GPU. The question is: if I use a GPU, will this help the AI not to use the CPU? Is there any tip to reduce CPU usage?
 
@Village Guy, @cjowers Thanks for replying. I agree it may not be possible to run more than one version. Village Guy, I suspect you may be correct, or more than likely I am doing something wrong. Edit: pretty positive I'm doing something wrong, since I only have about 2-3 weeks of experience with Docker now. :) I have been working on a custom detection model using the instructions just put out, and when trying to deploy it I am indeed getting errors. I have a working vision-detection model running on Docker Desktop using port 8383; I tried to run this new one on the same port and it threw an error (the working one was running when I tried). I guess you can't run two instances on the same port? I stopped it and tried again, same error. I then tried port 80, which also errored out. There could very well be something else going on, since I am pretty clueless concerning Docker. If anyone thinks they can figure it out, I took a screenshot of what was going on; see the attachment. Thanks.

Update: As I suspected, the problem was the user (me). @johnolafenwa looked at my code from the attachment and I had missed a keystroke: there should be a dash between cpu and 2020, so it should be cpu-2020.12. Still waiting to see if I can run more than one model at the same time...
To the best of my knowledge you must use a different port address for each version. Try using 8384 for the second version, but you will still have the issue of AI Tool handling more than one port. Port 80 is probably already being used by some other app.

Why can't you incorporate everything into one module?