[tool] [tutorial] Free AI Person Detection for Blue Iris

The Pis are hardwired to the same switch as the computer. I've tried WiFi as well. I haven't managed to find any reference to a bug, and I've emailed DeepStack twice but had no reply. It's due to rain here tomorrow, so a day in trying new installs, I think.
Make the bug "public"; email doesn't always count. Find more references here and let them know the details. If you just tell them "it doesn't work" without good details, they will ignore that too.
 
Hi guys,

First question:
Sometimes after restarting the PC, DeepStack does not run properly with Chris's forked AI Tool. If I install DeepStack in Docker and configure it to auto-start, wouldn't that be more reliable?
And no, I don't want to run multiple instances of DeepStack as failover.
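For reference, the kind of thing I have in mind is roughly this (a sketch only; I'm assuming the stock deepquestai/deepstack image, and the host port, volume name, and detection flags would need checking against the DeepStack docs):

    # restart policy so Docker brings DeepStack back up after a reboot or crash
    docker run -d --name deepstack \
      --restart unless-stopped \
      -e VISION-DETECTION=True \
      -v deepstack-data:/datastore \
      -p 83:5000 \
      deepquestai/deepstack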

Second question:
After AI Tool triggers, I get a push notification in the Blue Iris app as an external alert, which is excellent. However, the push notification comes with a captured image; I would love to replace that image with the one I see in AI Tool's history for that specific camera. How can I do that? (I know BI stores those images in a temp folder; maybe I could just let AI Tool replace them?) Is this possible?
 
I am running a Raspberry Pi 4b with an NCS 2 and it is working very well for me. I also was interested in running the NCS on a PC, but the problem is that Deepstack for the NCS is only supported on a Raspberry Pi.
Ah, I wasn't sure about this. Kind of sad to hear; I'd rather run an NCS than a GPU in my server, as it would allow for more options, lower power draw, etc.

I am currently running a Pi 4 for my garage door; a bit overkill for it, but I'm using an amazing program for it. I've debated switching the garage door over to a NodeMCU and pulling the Pi into DeepStack service if it can outpace my VM and be more reliable. It isn't like I need redundancy or anything; if my server is off, so is Blue Iris. It's just that the massively delayed alerts are causing issues.
 

Thanks.
The results are good, but on par with my Threadripper results. I expected XXms with the stick, not XXXms. I think I have to delay my plans to get a Pi 4 and the NCS2.

Is this actually the NCS2? Is it plugged into the USB3 port?

Another member posted some results a while back. They looked much better, which is why I was considering the Pi 4 and NCS2. Maybe he was just posting results from test runs with a single camera and pictures being generated over a wide time frame?

Tried out my Pi 4 (2 GB RAM) with the Intel NCS2 and I have to say I'm pretty impressed. Unfortunately, there is no easy way to get the service to auto-start on boot just yet, but the processing times are very quick.
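If anyone wants to experiment with auto-start, I suspect a small systemd unit would do it. This is only a rough sketch I haven't tried myself; the ExecStart path and flags are assumptions, so adjust them to however you launch DeepStack by hand (and to wherever `which deepstack` points):

    # /etc/systemd/system/deepstack.service  (path and flags are guesses - adjust)
    [Unit]
    Description=DeepStack object detection (NCS2)
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/deepstack start "VISION-DETECTION=True"
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

    # then: sudo systemctl daemon-reload && sudo systemctl enable --now deepstack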

Image processing times are between 66 ms (640x352 image from my Reolink substream) and 160 ms (1920x1080 image from my EZVIZ spotlight camera). Why are my times so much faster than others'? I have it plugged into the USB 3 port, if others haven't been.

[Attachment: Deepstack1 ms.PNG]

This is what I get:

[Attachment: Deepstack ms.PNG]
 
@Chris Dodge I have "Enable dynamic masking" enabled. I notice even the "draw mask custom" setting is not respected. If dynamic masking is enabled, does that cancel out / disable the custom and PNG masks?

Reading the latest release notes, I found this:
"The database is now an SQLite database. The cameras folder is no longer used at all." But I noticed that my custom mask BMP file is stored in the cameras folder. Is the folder not used any more? I tested this by creating a new custom mask for another camera tonight, and it did store the BMP file in the cameras folder. I am running 1.67.8.33855, built 9/30.

BTW, thank you for your work. The time and effort put into creating this, and the time spent providing feedback / support, is much appreciated. The dynamic mask is one of the coolest ideas I have seen.
 
Hi guys,

Second question:
After AI Tool triggers, I get a push notification in the Blue Iris app as an external alert, which is excellent. However, the push notification comes with a captured image; I would love to replace that image with the one I see in AI Tool's history for that specific camera. How can I do that? (I know BI stores those images in a temp folder; maybe I could just let AI Tool replace them?) Is this possible?

You can use the &jpeg=path parameter in the trigger URL to specify an image for BI to use as the alert image. You can also combine that with the merge annotation and copy/save settings in AI Tool to provide the marked up image.
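For example, a trigger URL along these lines (the host, port, camera short name, and credentials here are just placeholders for whatever your BI web server uses):

    http://192.168.1.10:81/admin?trigger&camera=FrontDoorHD&user=aiuser&pw=secret&jpeg=[ImagePath]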
 
Thanks.
The results are good, but on par with my Threadripper results. I expected XXms with the stick, not XXXms. I think I have to delay my plans to get a Pi 4 and the NCS2.

Is this actually the NCS2? Is it plugged into the USB3 port?

Another member posted some results a while back. They looked much better, which is why I was considering the Pi 4 and NCS2. Maybe he was just posting results from test runs with a single camera and pictures being generated over a wide time frame?



This is what I get:


Yep, it is an NCS2 and it is plugged into a USB 3 port. I'm not sure what the other member was running in regard to cameras, but I have 13 cameras on DeepStack/AI Tool, have most of my cameras cloned in BI, and am pulling a substream of 640x480. The other cameras are pulling 1920x1280.


 
You can use the &jpeg=path parameter in the trigger URL to specify an image for BI to use as the alert image. You can also combine that with the merge annotation and copy/save settings in AI Tool to provide the marked up image.

Can you provide more detail on the "copy/save settings" part? I added &jpg=[imagepath] and checked "merge annotations into images", but I'm not sure about the copy. I see "copy alert images to folder"... is that the feature? Do I have to copy the image to a new image? If so, do I hard-code the path or can I use the [imagepath] variable? Thanks.
 
I am running a Raspberry Pi 4b with an NCS 2 and it is working very well for me. I also was interested in running the NCS on a PC, but the problem is that Deepstack for the NCS is only supported on a Raspberry Pi.
How much RAM does your Pi have? The Pi 4 I might try to use has 1 GB; I'm wondering if that will be a problem.
 
When running AI Tool as a service, is there any way to make the console window appear? Or do I have to stop the service, start AI Tool manually, and start the service again after editing my config?
 
When running AI Tool as a service, is there any way to make the console window appear? Or do I have to stop the service, start AI Tool manually, and start the service again after editing my config?
You can manually open the program, but unless you stop the service you will be unable to change and save parameters. The latest version has additional facilities to handle a second instance of the program; right now you cannot simply open the instance that is running under the service. Bottom line: you need to stop the service to make changes and then restart it. I hope that helps.
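If it helps, from an elevated command prompt it's just the usual Windows service commands. The service name below is only an example; use whatever name you gave it when you installed AI Tool as a service:

    net stop AITool
    rem ...open AI Tool, change and save your settings, close it...
    net start AITool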
 
@Chris Dodge I have "Enable dynamic masking" enabled. I notice even the "draw mask custom" setting is not respected. If dynamic masking is enabled, does that cancel out / disable the custom and PNG masks?

I got this notification that I thought would have been masked out by my custom mask. Looking at the mask, though, it only covers some of the van, so perhaps the same is happening with yours.
[Attachments: the alert screenshot and the custom mask]
 
How much RAM does your Pi have? The Pi 4 I might try to use has 1 GB; I'm wondering if that will be a problem.

Mine has 4 GB of RAM. Not sure how it would perform with only 1 GB. I can check my memory usage later today and let you know.


 
If I missed it in the 90+ pages of this thread, sorry. Hope you smart folks can help me out. I have the original gentlepumpkin version of this installed, v1.67. I downloaded the AITOOL-VORLONCD.zip version and would like to jump in and start using it. Here is my question: what do I need to do to install the VorlonCD version? Should I uninstall the original or just overwrite everything? I'm a user, not a coder, so I need y'all to break it down Barney-style for me if you would. THANKS!
 
Can you provide more detail on the "copy/save settings" part? I added &jpg=[imagepath] and checked "merge annotations into images", but I'm not sure about the copy. I see "copy alert images to folder"... is that the feature? Do I have to copy the image to a new image? If so, do I hard-code the path or can I use the [imagepath] variable? Thanks.

If you want BI to use the image that was analyzed by AI Tool/DeepStack as the alert image, then I believe you can just use the [ImagePath] variable. If you want BI to use the annotated image that includes the box around detected objects and the text label, then enable AI Tool's "Merge Annotations into images" and "Copy Alert images to folder" options and point the trigger URL's "&jpeg=path" to that path instead. You can click the green text at the top of the Actions > Settings window to see what the variable will output. If I got any of that wrong, I'm sure @Chris Dodge can correct me. :)
 
For the odd display issues, make a shortcut to AITOOLS somewhere, right-click > Properties > Compatibility tab > Change high DPI settings, and see if that helps the next time you start it. Or maybe edit aitools.exe.config and set the value for DpiAwareness to "false" rather than "PerMonitorV2".

That worked for me on a laptop.
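In case it isn't obvious where that setting lives, the relevant fragment of aitools.exe.config should look roughly like this (the surrounding sections vary by version, so leave the rest of the file alone):

    <configuration>
      <System.Windows.Forms.ApplicationConfigurationSection>
        <!-- was "PerMonitorV2"; "false" turns off per-monitor DPI awareness -->
        <add key="DpiAwareness" value="false" />
      </System.Windows.Forms.ApplicationConfigurationSection>
    </configuration>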
 
Mine has 4 GB of RAM. Not sure how it would perform with only 1 GB. I can check my memory usage later today and let you know.


Thank you! Is there any possible way you could limit your DeepStack install to under 1 GB to simulate what that would be like, so I could get an idea of the response time? If it is too much of a hassle, don't bother. I am just worried that I'll go purchase an NCS2 and find out it ends up worse on my Pi 4 than on the VM.
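If it's easier, I think a one-off test could cap the memory with systemd-run. This is only a sketch; I'm assuming DeepStack is started with a deepstack start command and that your Pi's systemd honours MemoryMax (older setups may need MemoryLimit instead):

    # cap the whole DeepStack process tree at ~1 GB for this run
    sudo systemd-run --scope -p MemoryMax=1G deepstack start "VISION-DETECTION=True"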
 
Thank you! Is there any possible way you could limit your DeepStack install to under 1 GB to simulate what that would be like, so I could get an idea of the response time? If it is too much of a hassle, don't bother. I am just worried that I'll go purchase an NCS2 and find out it ends up worse on my Pi 4 than on the VM.
Isn't the Pi 4 2 GB minimum? My Pi 4 is faster than my Pi 3B+ by at least about a third.
 
Isn't the Pi 4 2 GB minimum? My Pi 4 is faster than my Pi 3B+ by at least about a third.
No, 1 GB models exist; they're basically the lowest-end model though. Considering I was previously using a Pi with 512 MB of RAM, I figured 1 GB would be more than enough for most projects I'd use it for. DeepStack really requires quite a lot, and it's not something I'd ever consider for a Pi if my VM weren't having so much trouble and/or if an NCS2 worked on a PC.