[tool] [tutorial] Free AI Person Detection for Blue Iris

Others have created two clones, one for cars and one for people. That way you get alerted if someone walks between your cars.



The problem for me is that people aren’t being reliably detected.
 
I’ve reenabled my driveway cams and again they’ve missed detecting a person. I’ll upload the pics later to see if anybody can suggest anything I can try.

For the cool down time, if I want the cam to keep recording until there is no further motion, do I just set the cool down time to 0?
 
This is set in BI under the trigger --> break time, which controls how long recording continues after motion has ceased. The cool down time in AI Tool sets the minimum time between multiple triggers, from what I understand.
 
The cameras in this case are being triggered by AI Tool, so according to the first post, if a camera had previously triggered it wouldn’t trigger again within the cool down time. But I want it to keep triggering while it detects motion and a specified object.
 
I believe that you’d want to set the cool down period to 0 in AI Tool, which disables it.


UPDATE: I found the error, which might help others.

I tested a bit more this morning: I can install it on my Windows workstation and it works there, so I assumed it had something to do with the virtualisation and hardware settings the DeepStack server needs.

Solution:
For those who run DeepStack in a virtual environment like Proxmox:
When creating the machine (for example Ubuntu with Docker, and DeepStack inside Docker), you need to choose the CPU type you actually have or want to emulate, not just kvm64. DeepStack seems to start its analysis with hardware instructions to the CPU and needs the right architecture. When I switched from kvm64 to SandyBridge (matching my Intel CPU), it worked.
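
For anyone who prefers the CLI route, here's a minimal sketch, assuming a hypothetical VM ID of 100 (the GUI equivalent is the VM's Hardware > Processors > Type setting):

```
# Match the host CPU family, or use "host" to pass through all host
# CPU flags (including AVX). The VM ID 100 is just an example.
qm set 100 --cpu SandyBridge
# Equivalent line in /etc/pve/qemu-server/100.conf:
#   cpu: SandyBridge
```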


Thanks
Arthur

Firstly, thanks to @GentlePumpkin for this. I have been having the same issue as @videocopter and have been searching and hitting my head against the wall.

As per the guide posted, I am running my PC on Windows, with Home Assistant in a VirtualBox VM. Then I have put DeepQuest into a Docker container within HA. I understand what you mean, @videocopter, about the hardware; I am also getting the DeepQuest page without the prompt for my API key, so it is not working with the AI Tool.

The problem is, I just can't work out where to change the CPU emulation as posted above. Can you help out with that? Is it in VirtualBox or Portainer? And if so, can you guide me a little :) thanks
 

Just a heads up here: if you are running the Windows version of DeepStack, your CPU needs to support AVX. If it doesn’t, DeepStack will activate, but AI Tool will not be able to communicate with DeepStack. I went into this in great detail in post #338 in this thread. You can run the no-AVX version in Docker on Ubuntu.
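
If you’re unsure whether your CPU has AVX, one quick check, assuming a Linux box (on Windows, Sysinternals Coreinfo reports the same flags):

```
# Lists "avx" (and "avx2" if present) when the CPU supports them
grep -o 'avx[2]*' /proc/cpuinfo | sort -u
```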


 

Hi @pmcross, thanks for the quick reply. I am running an Intel i5-10400, so hopefully it should support everything. However, I am not running the Windows version of DeepStack; I am running it in a container so that it auto loads. The Windows version of DeepStack doesn't load automatically.
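
For what it's worth, the auto-load part is just Docker's restart policy; a sketch, assuming a hypothetical container name of deepstack:

```
# "unless-stopped" brings the container back up after a host reboot
docker update --restart unless-stopped deepstack
```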
 

Hi @Spaldo

I believe that CPU supports AVX. I’m not exactly sure where the CPU emulation setting is in VirtualBox, but it would definitely be set there. Check the advanced CPU settings under the VM’s settings in VirtualBox.
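
If the GUI option is hard to find, the same thing can be done from the command line; a sketch, assuming a hypothetical VM name of "HomeAssistant" (the VM must be powered off first):

```
# "host" mirrors the real CPU; named profiles from the
# VBoxManage docs are also accepted
VBoxManage modifyvm "HomeAssistant" --cpu-profile host
```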


 
Could we have it so that, as the images are processed, they are moved into another folder? Or would this not be a good idea? I appreciate that this would require more work to then have the history pull images from the other folder, but it would help keep the main input folder clean and responsive.
 
For those of you having detection issues, what performance mode are you running DeepStack in? The default is Medium, but you can bump the accuracy up using High as well...
I set mine to High. It takes a little more CPU, but to be honest I'm not seeing massive delays or anything crazy. Maybe a few seconds tops to process an image.

My setup is an i7-7700K running ESXi with 32 GB RAM.
I have BI on a dedicated VM with 12 GB RAM assigned and 4 cores. My 7 cameras are all running 4K 20 fps at 16k bitrate (max) @ H.265, recording 24/7 (14 TB HDD). I have sub-streams enabled, set to max as well, and use those for alerting.
Home Assistant is set up in a second VM and DeepStack is on that via Portainer. I have that set to use 2 cores and 4 GB.
I set the env mode to High. AI Tool runs on the BI host, so the images are local to it and just sent to the AI for processing.

It is literally 2 seconds, give or take 1, to process an image. It seems to only add another second if I disable the sub-stream and make it process 4K images.

So for detection issues: make sure your sub-stream quality and bitrate are maxed if you want the best detection, and maybe set DeepStack to High mode if you're not getting what you want.
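
For reference, that mode is just an environment variable on the DeepStack container, so in Portainer it goes under the container's Env tab; a sketch using the standard deepquestai/deepstack image (the port mapping is whatever you already use):

```
# MODE accepts Low, Medium (the default) or High
docker run -d -e VISION-DETECTION=True -e MODE=High \
  -v localstorage:/datastore -p 80:5000 deepquestai/deepstack
```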

I am getting some false positives though, like sometimes it thinks a fire hydrant is a person, or that a plant that's fairly far away is a person, but this is also using sub-stream images. I might just ditch sub-streams unless BI can update. BI has a setting to use high-definition images for alerts etc., but if you're using sub-streams this doesn't seem to do anything, since they are already poor quality. I'd love to see him change it so toggling that box makes it use the main stream for the snapshot, but when he created that option originally sub-streams weren't even a dream, so maybe soon it can be, and that will also help detection.

Anyway, just wanted to share about the DeepStack performance mode; I noticed no one has mentioned it as of yet. Everyone is after these simple plug-and-play solutions but doesn't seem to want to do any actual research on the tools used, to help reduce the effort required of others. I get it. I love plug and play too, but no one-size-fits-all solution will work for everyone. Please take some time to read through how DeepStack works, its configuration options, etc.; this could save many of you trouble and help answer some questions.
 
Meanwhile, back in April :D Seriously though, GentlePumpkin should add this to the installation guide for others to see.

Has anyone figured out definitively what Low, Medium, and High refer to? Does High mean "high speed, lowest CPU utilization"? Or does High mean "highest level of analysis, slowest speed"? The website is super unclear.

Here's the text from the Getting Started page:

Performance
DeepStack offers three modes allowing you to trade off speed for performance. During startup, you can specify the performance mode to be "High", "Medium" or "Low".
The default mode is "Medium".
Speed modes are not available on the Raspberry Pi version.
 
Just getting started with this tool and stuck on the setup. I have a front door camera called "FrontDoor", and set up the AIFrontDoor cam as noted. For this URL, let's say my camera is at 192.168.1.31 and, for instance, my username and password are admin and 123456.

Is the proper URL ....? As I keep getting:

This site can’t be reached
localhost refused to connect.



Also tried:

 
The 80 is the port number that BI uses for web services; check to ensure that it is correct. For example, I use port 8081.

Edit: Is the & in the URL you posted just a copy and paste error/correction?
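
For reference, the trigger request follows this pattern; the host, port, and credentials below are just the example values from the earlier posts (192.168.1.31, port 8081, admin/123456):

```
# Quote the URL so the shell doesn't treat & as a background operator
curl "http://192.168.1.31:8081/admin?trigger&camera=FrontDoor&user=admin&pw=123456"
```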
 

Ah, would that be the port I chose in the web server settings of Blue Iris? I.e., the circled one below?

[screenshot]
 
This is odd: the popup comes up for BI's web server URL access, but none of my usernames or passwords work to get past it. I know I've been in it before.

[screenshots]
 
Still can't seem to figure this out. The AI Tool is capturing and analyzing images...

[screenshot]

But it keeps getting errors:

[screenshots]

1) I am not sure why it has two triggers with two different user names, seeing as I only have one camera set up in the AI Tool.
2) If I go directly to the web server address for BI, I get this screen, and entering the AI user name and password I set up gets me in just fine.

[screenshot]

If instead I use the URL http://localhost:XXX/admin?trigger&camera=FrontDoor&user=AIXXX&pw=XXXXX

I get this popup, and no user name or password works with it.

[screenshot]

Further, and strangely, the Blue Iris app on my Android phone stopped working and gives me an "Unable to connect: Reason: no matching user/pw" error.

Tried rebooting everything.

WTH?
 
@B-Murda I’m interested in how you are running DeepStack within Portainer. I tried, but couldn’t get DS to fire up properly; it wasn’t asking me for my API key on first boot, so it wouldn’t process requests. Did you change anything, or set any environment options, to get it working and playing well with DS hardware requirements?
 