[tool] [tutorial] Free AI Person Detection for Blue Iris

Hey guys, I'm running 1.67.8 built in Sept, so I'm thinking of upgrading. I originally installed it, for whatever reason, into my Documents folder in Windows (rather than straight to C:\ or Program Files). With the installer, do I simply select the exact same folder it is currently installed in to overwrite/upgrade it?

Also, I see I need to delete the "input folder" selection from each camera as it's now redundant. Is there anything else I need to do to get it to work properly, or will reinstalling into that same folder retain all cameras and settings?
 
Just tried the AITool installer for the first time (I have installed from the zip file in the past). I installed it to C:\aitool, but on first run it brings up a dialog box asking if I want to import previous settings from the registry. I can't get to that dialog box because it's in the background behind the splash screen while "loading" is displayed. I have tried hovering over the app in the taskbar and right-clicking to "Move" or "Size" either window, without luck. Any suggestions?
 
Hello, it's a little confusing. Do we just turn off AITool and then replace/delete AITool.exe (in my case v2.0.760.7721) with AIToolSetup.2.0.1152.exe, and the magic happens? I don't see any zip file to unzip, just the exe file here: VorlonCD/bi-aidetection :thumbdown:
 
Hello everybody. I have already upgraded to the latest version (2.0.1153.7801).

Here are the steps for how to do it.

  1. Stop AITool in BI
  2. Download and unzip the file from VorlonCD/bi-aidetection (green "Code" button); I preferred the C:\Program Files location
  3. Run [as admin] C:\Program Files\bi-aidetection-master\src\UI\Installer\AIToolSetup.2.0.1152.exe (it will upgrade to 2.0.1153.7801 automatically)

The install process will start and ask if you want to keep your last settings from the registry (your previous AITool settings: cams, etc.). There is a bug here (in my case) where the installing window appears on top and does not let you select Yes or No; look for a way to do it with the keyboard.


ENJOY !! :headbang:
 
Something that would be REALLY helpful is an option in the "Copy alert images to folder" action to include the dynamic mask in that image if one exists: a simple black rectangle drawn over the coordinates where the dynamic mask sits.

Alternatively, a variable containing those rectangle coordinates that could be passed to the "Run external program" action.

This way, if the actions are used to pass an image to another program, the non-moving masked items won't be visible to that program (parked cars vs. moving cars).

Edit: I'm looking into doing this with an external script, reading the SQLite database directly to get the coordinates of the mask. It looks like it may be possible. It would still be a neat feature to have in the UI, though!
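For the external-script route, here is a rough sketch of the idea in Python. The table and column names are made up for illustration; AITool's actual SQLite schema would need to be inspected first (e.g. with `.schema` in the sqlite3 CLI):

```python
import sqlite3

def load_mask_rects(db_path, camera):
    """Read dynamic-mask rectangles for one camera.
    NOTE: 'masks' and its columns are hypothetical names,
    not AITool's real schema -- inspect the DB first."""
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT xmin, ymin, xmax, ymax FROM masks WHERE camera = ?",
            (camera,),
        ).fetchall()
    finally:
        con.close()

def black_out(pixels, rects):
    """Zero every pixel inside each rectangle of a row-major 2D grid,
    i.e. the 'simple black rectangle' described above."""
    for xmin, ymin, xmax, ymax in rects:
        for y in range(ymin, ymax):
            for x in range(xmin, xmax):
                pixels[y][x] = 0
    return pixels
```

In a real script you would apply `black_out` to the decoded alert JPEG (e.g. via Pillow) before handing it to the downstream program.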
 
I currently use Dahua's IVS, but would prefer to be able to configure object recognition in BI instead. I am curious if you guys run the built-in Deepstack inference on the substream, or the main stream?

I noticed that when I run the analysis off the main stream (I don't use substreams at the moment), the CPU spikes to 100% while the python process is running.
I have seen posts mentioning almost no CPU usage from DeepStack, so maybe processing the full 4MP frame is the reason I'm seeing high usage? Or perhaps you use hardware-accelerated inference with a GPU?

As an aside, I also noticed that (perhaps due to a delay in processing) the DeepStack labeling does not work as well as Dahua's, which is almost instant. In many cases, by the time a label is applied, the object has already moved out of frame. I also get alerts triggered by cars that were already parked (I might be able to tweak that).
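For context, the raw pixel-count difference alone goes a long way toward explaining the CPU spike. Inference cost isn't strictly linear in input pixels (DeepStack resizes internally), but decoding and scaling the larger frame still costs CPU. A quick back-of-the-envelope, assuming a typical 4MP main stream and a VGA substream:

```python
# Typical resolutions (assumed; your camera's exact streams may differ)
main_stream = 2688 * 1520   # ~4 MP main stream
sub_stream = 640 * 480      # common substream resolution

ratio = main_stream / sub_stream
print(f"The main stream has ~{ratio:.1f}x the pixels of the substream")  # ~13.3x
```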
 

There is the DeepStack integration that became part of BI in a recent update, and this thread you are in is about a 3rd-party add-on that someone has created that runs separately from BI.

Which approach are you trying? If you are using the DS that is part of the BI integration, your question is probably better suited to one of those threads. Many here still run a 3rd-party add-on like this one for the granular control and customization not currently available in the BI integration, but the 3rd-party add-ons require (or at least strongly encourage) setting DeepStack up in Docker. I believe you are trying the BI integration version.


Plus, many of us have found the IVS AI to be superior to DeepStack at the moment, and it prevents your BI computer's CPU from spiking. But it is limited to just sending ONVIF trigger commands for human or vehicle only, so if you want more granular control, you would need something like this 3rd-party tool or the DS integration.
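Under the hood, both the 3rd-party tools and BI talk to DeepStack over the same HTTP API. A minimal sketch of that call, using only the standard library (the endpoint path and response fields follow the DeepStack docs; the host/port are assumed defaults):

```python
import json
import urllib.request
import uuid

DEEPSTACK_URL = "http://127.0.0.1:5000"  # assumed default host/port

def build_multipart(payload, field="image", filename="frame.jpg"):
    """Build a multipart/form-data body with a single file field."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return boundary, head + payload + tail

def detect(image_bytes):
    """POST a JPEG to DeepStack's object-detection endpoint."""
    boundary, body = build_multipart(image_bytes)
    req = urllib.request.Request(
        DEEPSTACK_URL + "/v1/vision/detection",
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # Each prediction carries label, confidence, x_min, y_min, x_max, y_max.
    return result.get("predictions", [])
```

This is only a sketch of the wire format; the add-ons layer their relevant-object and mask logic on top of responses like this.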

 

You're right, I saw this pinned post, so I assumed it was all DeepStack discussion related to BI.
I do use Dahua IVS, but like you said, it's limited to just telling BI "alert", with no further context.
I will check out the Dahua thread as well. While IVS has worked well so far, I am having some false alerts with a new camera.

One thing I noticed: you can run the native BI DeepStack integration on ONVIF triggers as well, which lets you further classify (and confirm) the triggers without running motion detection in BI at all.
 
This sounds very interesting, to the extent I understand it (which isn't much). Can you give me some links that would tutor me?
 

I don't have a tutorial on this, but with the BI DeepStack integration enabled, you just need to enable DS processing on each camera, then ensure that "Apply to motion triggers only" is unchecked.

That way, when BI receives a trigger from your Dahua IVS (via ONVIF), it will trigger DS processing on that capture.

[screenshot attached]
 
I'm using AITool since it merges annotations into the image and copies the alert images. Is there a way to only annotate non-masked, relevant objects?
 
So I just checked mine; I think it is unchecked by default, since I don't remember unchecking it. FYI, thanks...

[screenshot attached]
 
I'm trying to figure out what these "All URLs" entries mean in my log. Can anyone explain? I appreciate it!

[screenshot of the log attached]
 
I ordered an Nvidia Quadro P400 off eBay the other day, and it arrived a couple of hours ago...

I'm in disbelief at how easy, yet complicated, it was to get the Windows deepstack:gpu version working... It literally took longer for Windows to update than it did for me to set this up, and I even DDU'd an AMD driver and applied new thermal paste during that time. ¯\_(ツ)_/¯

Now I need this update to finish so I can see how much CPU I'm saving.

I'm just wondering if I really needed to install CUDA 10.1 and cuDNN like the DeepStack docs say, or if they were already in the latest driver I downloaded and installed from Nvidia. Anyone know the answer to that? I'm guessing cuDNN wasn't in there, but I'm pretty sure CUDA 11 was.

Has anyone tried the Windows GPU version vs. the Docker GPU version? Wondering if there's a difference and which is better.
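On the CUDA/cuDNN question: the regular Nvidia driver ships the CUDA *driver* API (and `nvidia-smi` reports the highest CUDA version that driver supports), but it does not include the CUDA toolkit runtime or cuDNN, which is why the DeepStack docs ask you to install them separately. A small way to check what the system loader can actually find; the versioned library names here are illustrative and vary by install:

```python
import ctypes.util

def find_first(names):
    """Return the first library the system loader can locate, or None."""
    for name in names:
        found = ctypes.util.find_library(name)
        if found:
            return found
    return None

# Versioned names are examples; adjust to the CUDA/cuDNN versions you installed.
cuda_runtime = find_first(["cudart64_110", "cudart64_101", "cudart"])
cudnn = find_first(["cudnn64_8", "cudnn64_7", "cudnn"])
print("CUDA runtime:", cuda_runtime or "not found")
print("cuDNN:", cudnn or "not found")
```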
 
I am running the DeepStack GPU version with BI on Windows on an old Nvidia 970 (4 GB of memory). DeepStack is running fine, with zero increase in CPU as far as I can tell from BI monitoring. Sorry, I can't really comment on the Docker version...
 
@whoami ™ I am looking to build a rack-mounted server to run BI on. My limit is 3U; it would be great to fit everything in a 2U, but I would need a low-profile video card (for DeepStack). Even though your card is not low-profile, it should work in a 3U rack. Please let us know how DeepStack works on your card. Appreciate it...
 
The Nvidia Quadro P400 is in fact a low-profile card; you just need to find a used one with the low-profile bracket, or buy one. It's only a 30W GPU and does not need a 6- or 8-pin rail; it pulls its power directly from the 16x slot. With 7 cams feeding images to DeepStack while it renders Remote Desktop graphics, the highest load placed on it is around 16%.

If you're wanting ideas for a rack mount, I'm in the process of building this 1U... It took a lot of research to put this list together.

  • iStarUSA M-140-ITX 1U server case
  • ASUS P11C-I Mini ITX Server Motherboard LGA 1151 Intel C242
  • Microsemi Adaptec SAS Internal Cable, 1.6' (2281200-R)
  • Intel® Xeon® E-2278G 8C/16T 3.4ghz
  • Nvidia Quadro P400 or Nvidia T400
  • Flexible 16x PCIe Riser Cable
  • x2 Samsung M391A4G43AB1-CVF 32GB DDR4-2933 ECC
  • Sabrent 1TB Rocket NVMe PCIe M.2 2242
  • Noctua NF-A4x20 PWM, Premium Quiet Fans
  • Drives depend on needs: 3.5" Purples for security cams, SSDs for VMs, etc.
 
Thank you, this is what I need. I know what you mean; it's hard to find a full list like this for a build. I sure would like to see some pics when it's completed. I'm curious how you will mount the video card with your riser cable, and also how you are going to cool your CPU (fan/heatsink?).
It looks very close to what I want; I can go up to 3U on my rack. Just started talking to these guys: Rackmount Mart - 2U Rackmount Chassis
 
To answer your question about the Quadro P400 GPU more thoroughly: DeepStack (python) places a load of 2.5-2.6% on the GPU every time it analyzes an image. The average queue time is 300ms.
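Those numbers imply a healthy ceiling; a quick back-of-the-envelope, assuming the ~300 ms queue time dominates and requests are processed one at a time:

```python
avg_queue_s = 0.300            # ~300 ms average queue time reported above
throughput = 1 / avg_queue_s   # images/second a single instance keeps up with
print(f"~{throughput:.1f} images/second")  # ~3.3 images/second
```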

About the cooler, I'm not sure yet... The only progress I've made on this build so far is that I bought the 1U case 6 months ago, and I recently ordered the motherboard from Newegg, but it's on back order.
 
Recently my DeepStack/Blue Iris setup quit working after months of working flawlessly. I downloaded the new AITool installer, but no matter what, I cannot get AITool to detect objects. It keeps marking people as false alerts despite being set up to detect people. Has anyone else run into similar problems?