[tool] [tutorial] Free AI Person Detection for Blue Iris

This may be nothing, but Docker on Windows runs containers in two modes: Linux containers or Windows containers. I'm running my DeepStack container in Linux mode. I've never tried running it in Windows mode, so I don't know whether it would run perfectly or throw errors.
If you are using the Docker Desktop GUI, right-click and you should see something. I use the command line so I can specify the volume mount point, which I can't do in Docker Desktop.
I also mapped the port to 8080 for AI Tool to use. e:\dockerData is a folder I created before running the container, so DeepStack can store whatever data it wants there permanently.

I got this working only a couple of weeks ago, though I downloaded the image 5 weeks ago.

docker run --platform linux -dti --name "deepquestAI" --restart always --net="bridge" -v e:/dockerData:/datastore -e MODE=Medium -e VISION-SCENE=True -e VISION-FACE=True -e VISION-DETECTION=True -p 8080:5000 deepquestai/deepstack:latest
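Once the container is up, it can be worth sanity-checking the detection endpoint before pointing AI Tool at it. A rough sketch, assuming the container name and 8080:5000 port mapping from the command above; `snapshot.jpg` is a placeholder for any JPEG you have handy:

```shell
# Confirm the container is actually running.
docker ps --filter name=deepquestAI

# POST a test image to DeepStack's object-detection endpoint.
# ("snapshot.jpg" is a placeholder; substitute any test image.)
curl -s -X POST -F image=@snapshot.jpg http://localhost:8080/v1/vision/detection
```

If the curl call returns a JSON response with a `predictions` list, AI Tool should be able to talk to it on the same port.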

@pnakashian THANK YOU THANK YOU THANK YOU!! FINALLY GOT IT WORKING. Pretty excited, now I can actually mess with it. Anyone having trouble with the stupid "Object reference not set to an instance of an object. (code: -2147467261) ERROR: Processing the following image 'E:\aiinput/FCSD.20200601_161832518.jpg' failed. Failure in AI Tool processing the image." error:

TRY THIS! IT WORKED FOR ME! THANK YOU AGAIN!!!
 
Thank you for this tutorial! This has made Blue Iris so much more useful: less noise and more legitimate notifications.
I unfortunately have no developer experience, but it would be awesome to see your tool expanded a little more. Why not add the ability to do license plate detection that can log data to a database or flat file? I am going to continue to poke around with Home Assistant to see if I can do it there, because I could also log it to that database. Love the idea that this could all happen locally using one stack.
Thanks again for this awesome utility!
 
I am now getting errors in AI Tool. I tried restarting the service as well as renaming history.csv to history.csv.old; I've since restored the original name, but I continue to get errors in the logs. Has anyone else run across this problem?

[17.12.2019, 14:37:46]: ERROR: Can't write to cameras/history.csv!
[17.12.2019, 14:37:46]: ERROR: Can't write to cameras/history.csv!
[17.12.2019, 14:37:46]: ERROR: Can't write to cameras/history.csv!
[17.12.2019, 14:37:46]: ERROR: Can't write to cameras/history.csv!
[17.12.2019, 15:04:45]: ERROR: Can't clean the cameras/history.csv!
[17.12.2019, 15:05:32]: ERROR: Can't clean the cameras/history.csv!
[17.12.2019, 15:12:22]: ERROR: Can't clean the cameras/history.csv!
[17.12.2019, 15:12:37]: ERROR: Can't clean the cameras/history.csv!

Were you ever able to solve this issue? I'm getting this error and haven't been able to find a solution in the thread.
 
Were you ever able to solve this issue? I'm getting this error and haven't been able to find a solution in the thread.

I did, I ended up going back a version. I copied the contents of the extracted folder, excluding the cameras directory, to the folder that AI Tool was in and this fixed my issue. Since then I’ve upgraded to the beta version that @GentlePumpkin released a few days ago and it is working well. I followed the same process to upgrade to the beta version.


Sent from my iPhone using Tapatalk
 
Just started looking into this. I'm going to apologise, as I've not read this entire thread, just the first and last few pages, but I have a few annoying/noddy questions.

DeepQuestAI is now DeepStackAI? When I registered I didn't get an API key, just what I think is a product key. Does this affect the product and what I want from it?

I run my system headless. I've configured AITool.exe to run as a service using NSSM, but what do people do about DeepStackAI, which needs to be started manually, with the key and port number entered each time? Wondering if I can just disconnect from my RDP session and whether that will leave DeepStackAI still running.

Still getting my head around it so wish me luck :D
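For anyone following along, the NSSM setup mentioned above looks roughly like this. A sketch only: `C:\AITool\AITool.exe` is a placeholder path, and the service name is arbitrary; adjust both to your install.

```shell
# Wrap AITool.exe as a Windows service with NSSM (run from an elevated prompt).
nssm install AITool "C:\AITool\AITool.exe"

# Set the working directory so AI Tool finds its config and cameras folder.
nssm set AITool AppDirectory "C:\AITool"

# Start with Windows automatically, then start it now.
nssm set AITool Start SERVICE_AUTO_START
nssm start AITool
```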
 
Just started looking into this. I'm going to apologise, as I've not read this entire thread, just the first and last few pages, but I have a few annoying/noddy questions.

DeepQuestAI is now DeepStackAI? When I registered I didn't get an API key, just what I think is a product key. Does this affect the product and what I want from it?

I run my system headless. I've configured AITool.exe to run as a service using NSSM, but what do people do about DeepStackAI, which needs to be started manually, with the key and port number entered each time? Wondering if I can just disconnect from my RDP session and whether that will leave DeepStackAI still running.

Still getting my head around it so wish me luck :D

Deepstack recently went from paid (with a free offering for personal use) to open source. I'm not sure how that affects activation; I never entered an API key, but I did enter an activation key, which only needs to be entered once.

I run my system headless as well. Deepstack will run fine after disconnecting your RDP session. The trouble with Deepstack on Windows is with a reboot: you have to run the application again manually, as there is no service for it. If you run it in Docker (not sure about Docker on Windows) you can have it start automatically when the OS boots; I'm running it in Docker on Ubuntu and this is how mine is configured. I hope that this helps.
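If the container already exists, the start-on-boot behaviour can be set (or changed) without recreating it. A sketch, assuming the container name `deepquestAI` from the earlier `docker run` example:

```shell
# Have Docker restart the DeepStack container automatically when the daemon
# (and therefore the OS) comes back up, unless you stopped it deliberately.
docker update --restart unless-stopped deepquestAI

# Verify the policy took effect.
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' deepquestAI
```

On Windows you would also need Docker Desktop itself configured to start at login for this to kick in after a reboot.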


 
I’ve got it all installed and working, am playing around with it, have cloned all 6 of my cams and left it all running.

I have noticed that it adds a significant load to the PC; mine’s now at 68% CPU whereas with substreams it was sitting at around 10%. Is that other people’s experience?

Edit: Might have to disable it, it’s randomly peaking at 98% CPU.
 
I’ve got it all installed and working, am playing around with it, have cloned all 6 of my cams and left it all running.

I have noticed that it adds a significant load to the PC; mine’s now at 68% CPU whereas with substreams it was sitting at around 10%. Is that other people’s experience?
I haven't seen much of an increase from cloning cameras. You should check the status to see that only one stream is pulling for each camera; this is how it should be. Also, you should disable all overlays on the cloned cameras, as well as audio (if your camera supports it). The CPU increase may be due to Deepstack (if you're running it on the same machine); Deepstack is very CPU intensive. I moved Deepstack to a separate server for this reason.
 
FWIW, I was running the Windows version of Deepstack and it was taking about 65% of my CPU on average. I installed Docker for Windows and loaded the container version, and CPU usage from Deepstack went down to 25% on average. Image processing went from about 850ms per image to 700ms. So that version seems to take far fewer resources and run a bit faster too. YMMV of course...
 
I did, I ended up going back a version. I copied the contents of the extracted folder, excluding the cameras directory, to the folder that AI Tool was in and this fixed my issue. Since then I’ve upgraded to the beta version that @GentlePumpkin released a few days ago and it is working well. I followed the same process to upgrade to the beta version.



Well I removed AI Tool 1.65, installed 1.64 and rebuilt my cameras. I am still getting the error (ERROR: Can't write to cameras/history.csv!). I don't know what to do.

I'm also running into an issue with MQTT and push notifications. AI Tool is triggering the cameras to record, and I can get MQTT messages/push notifications when testing, but when a camera triggers, it doesn't post to MQTT or push.
 
I disabled Deepstack for the time being as it was pegging my CPU at 100%. Quite surprised; it's not the greatest CPU but not a bad one either, an i5-6500.

Might look at running it in Docker for Windows to see if, as posted above, that runs any better.
 
I disabled Deepstack for the time being as it was pegging my CPU at 100%. Quite surprised; it's not the greatest CPU but not a bad one either, an i5-6500.

Might look at running it in Docker for Windows to see if, as posted above, that runs any better.

Let me know if running in Docker in Windows improves the CPU usage. I had to move Deepstack to another server for the same reason. It was running on a Xeon E5-2960 V2 10 core with Hyper-threading and was maxing out the CPU.


 
I think I’ve finally gotten AI Tool to process and trigger consistently and within a reasonable time. However I’m still having issues with MQTT.

So my MQTT is on a different server (Synology NAS) than Blue Iris and AI Tool (Windows Server). My MQTT topics are received by NodeRed and Home Assistant.

  • MQTT test button from setup works
  • AI Tool triggers the cameras to record with motion
  • MQTT test button from individual camera works

  • An AI Tool trigger from motion DOES NOT trigger the MQTT payload, or push notifications for that matter
I don’t know what I’m missing or where the disconnect is coming from. Anyone have any ideas?

UPDATE: I set up MQTT for some other cameras I have which are not running through AI Tool. The MQTT triggered as it should. So since it is not a connection issue between Blue Iris and MQTT, it must be related to AI Tool. I am currently using AI Tool 1.64 because 1.65 was causing errors writing to history.csv.

@GentlePumpkin do you have any ideas why AI Tool would trigger a recording in BI but somehow prevent alerts via MQTT or push notification?
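One way to narrow this down is to watch the broker directly while AI Tool fires, so you can see whether the payload ever leaves Blue Iris. A sketch using the Mosquitto client tools; `nas.local` and `cameras/#` are placeholders for your broker address and topic tree:

```shell
# Subscribe to everything under the camera topic tree and print each
# message with its topic (-v). Leave this running while triggering a camera.
mosquitto_sub -h nas.local -t 'cameras/#' -v
```

If the test buttons produce messages here but real AI Tool triggers don't, the payload is never being published, which points at the trigger path rather than the broker.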
 
This is very likely a BlueIris configuration issue, not an AI Tool issue @CommittotheIndian. Before I wrote my own system from scratch that supported MQTT natively I had MQTT events firing from BlueIris when AI Tool kicked it to say motion happened.

FWIW, MQTT support at the AI end, without having to go through BlueIris to get it, is one of the main reasons I came up with my alternate approach. The AI detection causes an MQTT message to Home Assistant and NodeRed is responsible for deciding when to tell BlueIris to start recording based on that MQTT message.
 
Got a bit of a hurdle with part 2 of my plan: installed Docker for Windows and it complains that virtualisation isn't enabled. Problem is my BI PC is in the attic and headless, so I'd need to get the ladder out, then take a screen up there, etc.
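Before dragging a screen up there, it may be worth confirming over the existing RDP session whether the firmware setting really is off. On Windows, `systeminfo` reports the Hyper-V requirements:

```shell
# Run in cmd or PowerShell over RDP; no monitor needed for the check itself.
# Look for "Virtualization Enabled In Firmware: Yes/No" in the output.
# (If Hyper-V is already installed, it reports "A hypervisor has been detected" instead.)
systeminfo | findstr /C:"Virtualization" /C:"hypervisor"
```

Flipping the BIOS setting itself still needs local (or IPMI/iKVM) access, unfortunately.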
 
Got a bit of a hurdle with part 2 of my plan: installed Docker for Windows and it complains that virtualisation isn't enabled. Problem is my BI PC is in the attic and headless, so I'd need to get the ladder out, then take a screen up there, etc.

Oof :( I feel some of your pain, I had the same problem with the PC I built for BlueIris. I didn't have to drag a ladder out and go in the attic, but I did have to disconnect it all and drag it over to my office with a monitor and keyboard. Then I spent 30 minutes online trying to find out how to enable the setting on my AMD CPU. It was named some sort of obscure unrelated thing in the performance options!

I will say that having a PC in the house with Docker on it really does open up a bunch of possibilities beyond just AI. Yesterday I migrated my Unifi controller software from a Synology to the BlueIris PC as a Docker container and now have trivial ability to upgrade when new releases come out. I should have done it months ago. Once Windows Subsystem for Linux 2 supports USB devices I'll be moving my Home Assistant install to the PC as well from the Raspberry Pi it is currently on.