[tool] [tutorial] Free AI Person Detection for Blue Iris

pmcross

Pulling my weight
Joined
Jan 16, 2017
Messages
371
Reaction score
185
Location
Pennsylvania
I'm currently running everything I mentioned, except for Deepstack, on my home office computer, which is from 2009 and runs an Intel i7-860 with 8GB of RAM. CPU usage typically stays under 50% and I've never had an issue watching Plex (I don't generally have more than one stream running at a time). My current CPU has a Passmark score of about 2900. The OptiPlex I'm looking at scores a little over 8000, and I'd be putting 16GB of RAM in it, so I would have thought I'd be more than OK. Your CPU has a Passmark score of more than 13000, which is where I start to have a disconnect: based on what you describe running on your system, I would have expected your usage to be much lower.

Does anyone have a point of view on using Passmark CPU scores as a rough way to figure out whether you'll have enough computing power? I'm using my current CPU's performance as a baseline and figuring that a score almost 3x higher would give me more than enough room to add Deepstack and still have plenty of capacity to spare.



I am running 15 cameras at around 1110 MP. Not sure how many you are running.
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,287
Reaction score
3,252
Location
United Kingdom
The problem with the 720p images is that it then becomes difficult for DeepQuest to accurately identify objects.
 

pmcross

Pulling my weight
Joined
Jan 16, 2017
Messages
371
Reaction score
185
Location
Pennsylvania
I've got rid of the substreams on my clones so that I can get high-res pictures.

I've set it all up on 4 of my cams with masks and will monitor it over the next few days.

I had to disable this on my 2 driveway cameras as they kept missing a person walking up the driveway. I normally have 2 cars parked, and the person would walk between the cars, but DeepQuest was having trouble picking up that it was a person, so there was no recording. Not too sure what can be done to resolve this, but until then I've simply put the cams back to plain motion detection.
Others have created two clones, one for cars and one for people. That way you get alerted if someone walks between your cars.


 

spammenotinoz

Getting comfortable
Joined
Apr 4, 2019
Messages
345
Reaction score
274
Location
Sydney
As I posted earlier, I really love this software for sending "targeted alerts" to my Telegram app, but there are other uses too for people like myself who use constant recording, and that is:

I also use this software to flag clips with people in Blue Iris (okay, so that is not the intent of flags, but it works for me), by adding in this trigger:

http://localhost:[BI Port]/admin?camera=[short cam name]&flagalert=1&memo=[text to appear in BI]&user=[user]&pw=[password]

Adding the user and password to the tool isn't the most secure approach, but I did create a new user specifically for this trigger and locked down its access in BI as much as possible: restricted to the local LAN, no viewing, etc.

Hope this helps someone out, or maybe someone has a much better approach to what I am doing (i.e. a quick way to see only the events with people in them).

Note: Something has changed. It's no longer flagging the alerts; you now need to "Trigger" first and then flag (I can't seem to get that working in one command anymore).
Flagging made it easier to find the clips of interest. :(
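In case it helps anyone, the trigger-then-flag sequence can be tested from a command line as two plain GET requests against the same admin endpoint. This is only a sketch using the same bracketed placeholders as the URL above (swap in your own port, camera short name and credentials); the -g just stops curl from interpreting the brackets:

# 1. Trigger the AI clone camera
curl -g "http://localhost:[BI Port]/admin?camera=[short cam name]&trigger&user=[user]&pw=[password]"

# 2. Flag the resulting alert, with a memo so it is easy to spot in BI
curl -g "http://localhost:[BI Port]/admin?camera=[short cam name]&flagalert=1&memo=[text to appear in BI]&user=[user]&pw=[password]"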
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,287
Reaction score
3,252
Location
United Kingdom
Others have created two clones, one for cars and one for people. That way you get alerted if someone walks between your cars.


The problem for me is that people aren’t being reliably detected.
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,287
Reaction score
3,252
Location
United Kingdom
I've re-enabled my driveway cams and again they've missed detecting a person. I'll upload the pics later to see if anybody can suggest anything I can try.

For the cool down time, if I want the cam to keep recording until there is no further motion, do I just set the cool down time to 0?
 

pmcross

Pulling my weight
Joined
Jan 16, 2017
Messages
371
Reaction score
185
Location
Pennsylvania
I've re-enabled my driveway cams and again they've missed detecting a person. I'll upload the pics later to see if anybody can suggest anything I can try.

For the cool down time, if I want the cam to keep recording until there is no further motion, do I just set the cool down time to 0?
This is set in BI under the trigger settings --> break time, which controls how long recording continues after motion has ceased. The cool down time in AI Tool sets the minimum time between successive triggers, from what I understand.
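To illustrate the difference, here is a rough sketch of the idea (not AI Tool's actual code): the cooldown only gates how often AI Tool will call the trigger URL for a camera, so setting it to 0 lets every relevant detection trigger BI, while BI's break time decides how long each triggered recording carries on.

import time

last_trigger = 0.0   # when this camera last fired its trigger URL
cooldown_min = 0.0   # cooldown in minutes; 0 disables the gate entirely

def maybe_trigger(relevant_object_found):
    """Call the BI trigger URL only if we are outside the cooldown window."""
    global last_trigger
    if not relevant_object_found:
        return False
    if time.time() - last_trigger >= cooldown_min * 60:
        last_trigger = time.time()
        # this is where the tool would hit http://<BI>/admin?camera=...&trigger
        return True
    return False  # still cooling down, so this detection is ignored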
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,287
Reaction score
3,252
Location
United Kingdom
This is set in BI under the trigger settings --> break time, which controls how long recording continues after motion has ceased. The cool down time in AI Tool sets the minimum time between successive triggers, from what I understand.
The cameras in this case are being triggered by AI Tool, so according to the first post, if a camera has previously triggered it won't trigger again within the cooldown time. But I want it to keep triggering for as long as it detects motion and a specified object.
 

pmcross

Pulling my weight
Joined
Jan 16, 2017
Messages
371
Reaction score
185
Location
Pennsylvania
The cameras in this case are being triggered by AI Tool, so according to the first post, if a camera has previously triggered it won't trigger again within the cooldown time. But I want it to keep triggering for as long as it detects motion and a specified object.
I believe that you'd want to set the cool down period to 0 in AI Tool, which disables it.


 

Spaldo

n3wb
Joined
Oct 2, 2017
Messages
7
Reaction score
2
UPDATE: I found the error, which might help others.

I tested a bit more this morning, and I can install it on my Windows workstation and it works there, so I assume it has something to do with the virtualisation and hardware settings the Deepstack server needs.

Solution:
For those who run Deepstack in a virtual environment like Proxmox: when creating the machine (for example Ubuntu with Docker, and Deepstack inside Docker), you need to choose the CPU type you actually have or want to emulate, not just kvm64.
Deepstack seems to start its analysis with hardware instructions to the CPU and needs the right architecture. When I switched from kvm64 to SandyBridge (in my case, the Intel CPU), it worked.


Thanks
Arthur
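For reference, here is roughly what that change looks like on the Proxmox side (VM ID 100 is just a placeholder; "host" passes the real CPU through, or you can pick the closest model such as SandyBridge, and it takes effect on the next VM restart):

# on the Proxmox host, switch the VM's CPU type away from kvm64
qm set 100 --cpu host          # or: qm set 100 --cpu SandyBridge
# equivalently, edit /etc/pve/qemu-server/100.conf and set:  cpu: host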
Firstly, thanks to @GentlePumpkin for this. I have been having the same issue as @videocopter and have been searching/hitting my head against the wall.

As per the guide posted, I am running my PC on Windows, with Home Assistant in a VirtualBox VM. Then I have put DeepQuest into a Docker container within HA. I understand what you mean, @videocopter, about the hardware; I am also getting the DeepQuest page without the prompt for my API, so it is not working with the AI Tool.

The problem is, I just can't work out where to change the CPU emulation as posted above. Can you help out with that? Is it in VirtualBox or Portainer? And if so, can you guide me a little? :) Thanks
 

pmcross

Pulling my weight
Joined
Jan 16, 2017
Messages
371
Reaction score
185
Location
Pennsylvania
Firstly, thanks to @GentlePumpkin for this. I have been having the same issue as @videocopter and have been searching/hitting my head against the wall.

As per the guide posted, I am running my PC on Windows, with Home Assistant in a VirtualBox VM. Then I have put DeepQuest into a Docker container within HA. I understand what you mean, @videocopter, about the hardware; I am also getting the DeepQuest page without the prompt for my API, so it is not working with the AI Tool.

The problem is, I just can't work out where to change the CPU emulation as posted above. Can you help out with that? Is it in VirtualBox or Portainer? And if so, can you guide me a little? :) Thanks
Just a heads up here: if you are running the Windows version of Deepstack, your CPU needs to support AVX. If it doesn't, Deepstack will activate, but AI Tool will not be able to communicate with it. I went into this in detail in post #338 in this thread. You can run the no-AVX version in Docker on Ubuntu.
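If you're not sure whether the CPU (or the virtual CPU your VM presents to the guest) exposes AVX, a quick check from a Linux or Docker host is something like:

grep -o -m1 avx /proc/cpuinfo   # prints "avx" if the instruction set is available; if it prints nothing, the no-AVX Docker build mentioned above is the one to use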


 

Spaldo

n3wb
Joined
Oct 2, 2017
Messages
7
Reaction score
2
Just a heads up here: if you are running the Windows version of Deepstack, your CPU needs to support AVX. If it doesn't, Deepstack will activate, but AI Tool will not be able to communicate with it. I went into this in detail in post #338 in this thread. You can run the no-AVX version in Docker on Ubuntu.
Hi @pmcross, thanks for the quick reply. I am running an Intel i5-10400, so hopefully it should support everything. However, I am not running the Windows version of DeepStack; I am running it in a container so that it auto-loads. The Windows version of Deepstack doesn't load automatically.
 

pmcross

Pulling my weight
Joined
Jan 16, 2017
Messages
371
Reaction score
185
Location
Pennsylvania
Hi @pmcross, thanks for the quick reply. I am running an Intel i5-10400, so hopefully it should support everything. However, I am not running the Windows version of DeepStack; I am running it in a container so that it auto-loads. The Windows version of Deepstack doesn't load automatically.
Hi @Spaldo

I believe that CPU supports AVX. I'm not exactly sure where the CPU emulation setting is in VirtualBox, but it would definitely be set there; check under the advanced CPU settings for the VM in VirtualBox.


 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,287
Reaction score
3,252
Location
United Kingdom
Could we have it so that as the images are processed they are then moved into another folder? Or would this not be a good idea? I appreciate that this would require more work for History to pull the images from the other folder, but it would help to keep the main input folder clean and responsive.
 

B-Murda

Getting the hang of it
Joined
Jun 16, 2020
Messages
32
Reaction score
26
Location
USA
For those of you having detection issues, what performance mode are you running Deepstack in? The default is Medium, but you can also improve the accuracy by using High.
I set mine to High; it takes a little more CPU, but to be honest I'm not seeing massive delays or anything crazy. Maybe a few seconds tops to process an image.

My setup is an i7-7700K running ESXi with 32GB of RAM.
I have BI on a dedicated VM with 12GB of RAM and 4 cores assigned. My 7 cameras are all running 4K at 20 FPS, 16K bitrate (max), H.265, 24/7 recording (14TB HDD). I have sub-streams enabled, set to max as well, and use them for alerting.
Home Assistant is set up in a second VM, and Deepstack is on that via Portainer. I have it set to use 2 cores and 4GB.
I set the env mode to High. AI Tool runs on the BI host, so the images are local to it and just get sent off for AI processing.

It is literally 2 seconds, give or take 1, to process an image. It seems to only add another second if I disable the sub-stream and make it process 4K images.

So for detection issues: make sure your sub-stream quality and bitrate are maxed if you want the best detection, and maybe set the Deepstack mode to High if you're not getting what you want.
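For anyone running the Docker version, the mode is just an environment variable you pass when starting the container. Something along these lines (the port mapping and volume are only examples; adjust them to your own setup):

docker run -d -e VISION-DETECTION=True -e MODE=High \
    -v localstorage:/datastore -p 83:5000 deepquestai/deepstack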

I am getting some false positives though, like sometimes it thinks a fire hydrant is a person, or that a plant that's fairly far away is a person, but this is also using sub-stream images. I might just ditch sub-streams unless BI gets updated. BI has a setting to use high-definition images for alerts etc., but if you're using sub-streams this doesn't seem to do anything, since they are already poor quality. I'd love to see him change it so that toggling that box makes it use the main stream for the snapshot, but when he created that option sub-streams weren't even a dream, so maybe soon it can be, and that will also help detection.

Anyway, I just wanted to share about the Deepstack performance mode; I noticed no one has mentioned it yet. Everyone is after simple plug-and-play solutions but doesn't seem to want to do any actual research on the tools used, which would help reduce the effort required of others. I get it, I love plug and play too, but no one-size-fits-all solution will work for everyone. Please take some time to read through how Deepstack works, its configuration options, etc.; this could save many of you trouble and help answer some questions.
 

xdq

Young grasshopper
Joined
Jun 12, 2017
Messages
48
Reaction score
19
Location
m
Anyway, I just wanted to share about the Deepstack performance mode; I noticed no one has mentioned it yet.
Meanwhile, back in April :D Seriously though, GentlePumpkin should add this to the installation guide for others to see.

Has anyone figured out definitively what Low, Medium, and High are referring to? Does High mean "high speed, lowest CPU utilization"? Or does High mean "highest level of analysis, slowest speed"? The website is super unclear.

Here's the text from the getting started website.

Performance
DeepStack offers three modes allowing you to trade off speed for performance. During startup, you can specify the performance mode to be "High", "Medium" or "Low".
The default mode is "Medium".
Speed modes are not available on the Raspberry Pi version.
 

pbc

Getting comfortable
Joined
Jul 11, 2014
Messages
1,024
Reaction score
156
Just getting started with this tool and stuck on the setup. I have a front door camera called "FrontDoor", and have set up the AIFrontDoor cam as noted. For this URL, let's say my camera is at 192.168.1.31 and, for instance, my username and password are admin and 123456.

Is the proper URL ....
I keep getting:
This site can't be reached
localhost refused to connect.



Also tried:

 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,287
Reaction score
3,252
Location
United Kingdom
The 80 is the port number that BI uses for its web server; check to make sure that's correct. For example, I use port 8081.
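For reference, the trigger URL follows the same general pattern as the flagalert URL posted earlier in the thread, and it points at the Blue Iris web server rather than at the camera itself, so with your own BI port, camera short name and BI user it looks roughly like this:

curl -g "http://localhost:[BI port]/admin?camera=[short cam name]&trigger&user=[user]&pw=[password]"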

Edit: Is the & in the URL you posted just a copy-and-paste error/correction?
 