[tool] [tutorial] Free AI Person Detection for Blue Iris

So I just follow the steps as I did for AI Tool, but set up those three services as you posted before instead? I am stuck on what to do at step 5. Sorry, I am not good with "coding" and follow step-by-step tutorials until I get the hang of things, and this is confusing me as to where I put this information or how to even do it. I understand how it was done for AI Tool, but that is the extent of it :/
You run nssm install <service-name> for each service.
I called the first one "redis-server" and the config is like this:

[screenshots: redis-server.jpg, redis-server2.jpg]
Also, under the Log on tab, enter your username and password. That goes for all of them.

The second one I called pyIntelligence, and is dependent on the first one.
[screenshots: pyIntelligence.jpg, pyIntelligence2.jpg, pyIntelligence3.jpg]
Arguments: C:\DeepStack\intelligence.py -MODE=Medium -VFACE=False -VSCENE=False -VDETECTION=True


The last one I called Deepstack-server, and it is dependent on the second one.
[screenshots: Deepstack-server.jpg, Deepstack-server2.jpg, Deepstack-server3.jpg]
Arguments: -VISION-FACE=False -VISION-SCENE=False -VISION-DETECTION=True -ADMIN-KEY= -API-KEY= -PORT=88
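
If you'd rather script it than click through the nssm GUI, the same three services can be created from an elevated command prompt roughly like this. The .exe paths are only placeholders - point them at wherever your DeepStack install actually keeps redis-server, the bundled Python interpreter and the server executable:

nssm install redis-server "C:\DeepStack\redis\redis-server.exe"
nssm set redis-server ObjectName ".\YourUser" "YourPassword"

nssm install pyIntelligence "C:\DeepStack\interpreter\python.exe"
nssm set pyIntelligence AppParameters "C:\DeepStack\intelligence.py -MODE=Medium -VFACE=False -VSCENE=False -VDETECTION=True"
nssm set pyIntelligence DependOnService redis-server
nssm set pyIntelligence ObjectName ".\YourUser" "YourPassword"

nssm install Deepstack-server "C:\DeepStack\server\server.exe"
nssm set Deepstack-server AppParameters "-VISION-FACE=False -VISION-SCENE=False -VISION-DETECTION=True -ADMIN-KEY= -API-KEY= -PORT=88"
nssm set Deepstack-server DependOnService pyIntelligence
nssm set Deepstack-server ObjectName ".\YourUser" "YourPassword"

ObjectName is the command-line equivalent of the Logon tab, and DependOnService is the Dependencies tab.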
 
Oh, OK. So you can call the services anything you want?
 
One feature request/idea: how about a checkbox to select whether Telegram alert images are sent as the original image (as it is now) or with the detection boxes overlaid? Maybe that's possible already, but I wasn't able to find it.

Anyone having questions about the above setup just let me know :)

Yes, I can add a switch to turn off sending the image along.

Regarding the Raspberry Pi and the Compute Stick: what are the processing times for each?
 
New updates, including many of the features discussed here plus additional ones and a massively simplified setup, are coming in the next month. So stay tuned - and if you are considering setting AI Tool up with many cameras, consider waiting. The new setup won't require configuring duplicate cams anymore.
 
Did you try the suggestions to install the Hyper-V feature and enable virtualisation in the BIOS?

For me, my BI PC is on W10 2004, which now has support for WSL2 (Windows Subsystem for Linux 2), which is what I opted to use. I did have a right royal struggle to enable virtualisation in the BIOS though, as I run headless and the PC is in the loft, but I eventually found an HP utility that allows you to change BIOS options from the command line, so I used that, then restarted, then installed Docker, which enabled WSL2 etc.
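
For anyone else at that point: virtualisation itself still has to be switched on in the BIOS/UEFI, but the Windows-side pieces can be enabled from an elevated prompt with the standard Microsoft commands (nothing DeepStack-specific), roughly:

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

then reboot and make WSL2 the default so Docker Desktop can use it (you may also need Microsoft's WSL2 kernel update):

wsl --set-default-version 2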
I have Docker Desktop set up and am trying to get DeepStack to run on it. Can you provide a high-level overview of how you set up Docker Desktop? It is different from the standard Docker that I'm used to running on Ubuntu. I'm able to do a pull of deepquestai/deepstack, but can't get it to run; I'm getting hung up on the container.
 
If you look back, I posted the command line that I used to do the pull, which also starts the container.
I got this working now, thanks! Now I will monitor to see how my CPU handles running Deepstack on Docker Desktop rather than the Windows version.
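
For anyone else who gets hung up at the container stage, two plain Docker commands are handy for a sanity check:

docker ps
docker logs <container-name-or-id>

The first should show the DeepStack container as "Up" with the port mapping you asked for; the second shows the startup output, which should indicate whether detection actually got enabled.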
 
Just wanted to provide a quick update since switching to Docker Desktop. My performance is significantly better. My response times are anywhere from 500ms to just under 1 second now. Also, my CPU hovers around 35 to 40% and spikes into the 70s to 80s when human activity is present, which is a stark contrast to running DeepStack on Windows, or even running the noavx version on my old PowerEdge 2950.


 
Hi all. I just found this project and wow this is awesome. Thanks GentlePumpkin and others for making this so cool.

Is there any other notification possible that's not Telegram? I'd like to read up on how to get this to send me alerts with a picture if someone pulls into my driveway. I don't have Telegram, but I do use Home Assistant. I also saw that the free version of DeepStack recognizes up to 5 faces, I believe. Where do you set / train those to get similar alerts if they're strangers?
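
From what I can see in the DeepStack docs, faces are taught by posting an example image to the face register endpoint (DeepStack has to be started with face detection enabled, e.g. VISION-FACE=True, and the file names/values below are just examples), something like:

curl -X POST -F "image=@dad.jpg" -F "userid=Dad" http://<deepstack-ip>:<port>/v1/vision/face/register

and then posting a snapshot to /v1/vision/face/recognize should return the matching userid (or flag the face as unknown), which is presumably what you'd alert on.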
 
Hi all, thanks for this, it looks amazing :)

Unfortunately I get an error after AI Tool sends the image to the DeepStack server. The AI Tool log file errors look like:

(1/6) uploading image
(2/6) waiting for results
Newtonsoft.Json.JsonSerializationException | Error converting value 404 to type 'WindowsFormsApp2.Response'. Path '', line 1, position 3. (code: -2146233088)
Error: processing the following image 'xxx.jpg' failed. Can't reach DeepQuest server at http://192.xxx.xx.xx:xx....etc


The DeepStack log adds a POST line every time an image appears in the input folder, even as AI Tool errors.


Any ideas? I'm using the Windows install (not Docker) on Win10, AI Tool v1.67. I restarted, ran everything as admin, and tried troubleshooting (the server looks to be running?).
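
In case it narrows things down, I assume the server can be tested directly (outside of AI Tool) by posting any jpg to the detection endpoint and checking whether JSON comes back instead of a 404, something like:

curl -X POST -F "image=@test.jpg" http://<deepstack-ip>:<port>/v1/vision/detection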

In Blue Iris settings - Web server: I configured the advanced options as per the step-by-step instructions, but do I also need to turn on the HTTP server option? And if so, what should the fields / addresses be in that form?
And just to confirm: I don't need to be connected to the internet for this, correct?

Thanks for any help,
C
 
I’m guessing that your CPU doesn’t support AVX. If it does please refer to post 238 in this thread; I went over the application requirements to run Deepstack on Windows.


 
Thanks for the help pmcross - I suspect my CPU lacks AVX. It's a Xeon W2530, probably from 2009ish... Bummer :facepalm: :confused:

Any suggestions / options?

Edit: Looks like post #1100 mentions a non-AVX Docker version. I suppose I'll try that. Docker Hub
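
For anyone else checking before going down the noavx route, the free Sysinternals Coreinfo tool will tell you whether your CPU has AVX (a * next to AVX means supported, a - means it isn't):

coreinfo | findstr /i avx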
 
Welp, I tried Docker with the noavx DeepStack image, but I get the same error as above. Any ideas?

I'm still a bit lost with configuring the DeepStack server from Docker, but it seems to respond to web browser calls to localhost:80 (or my BI PC's static IP), so I'm using that in the AI Tool DeepStack URL field, rather than the Hyper-V adapter IP, the Docker network IP, or the IP in the DeepStack server log. So hopefully I've got this part correct?

What should I have in the AI Tool 'Trigger URL' for testing?
Should BI settings have the web server HTTP option on?
BI logs connections from the relevant user login, and the DeepStack server POSTs when new images arrive in the target folder, so I think everything is configured... just want to double-check with the experts.

Thanks
 

Glad to help. It sounds like you’ve got Deepstack running the noavx version in Docker. Below is the URL to put into AI Tool to trigger the camera:

IP:port/admin?camera=Camera short name&trigger&user=username&pw=password
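
With made-up values - the BI web server listening on 192.168.1.10:81, a camera short name of FrontDoor, and a BI user created for AI Tool - that would look like:

http://192.168.1.10:81/admin?camera=FrontDoor&trigger&user=aiuser&pw=mypassword

Note the IP:port here is the Blue Iris web server address, not the DeepStack one.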


 
This goes in the Camera tab for each camera.
You can test the URL from a browser to confirm that the syntax is correct.


OMG, it's working!!!!!!!!! This is wicked!! A bit slow on my old machine, 3s-7s per post, but no dramas there. Limiting the object search criteria doesn't seem to help either. I will have a play with it some more, and configure it to run on power-up (a start script or Task Scheduler maybe?).

oo, imagine if this tool could be extended to identify / read license plates as well!!!



Thanks all. What finally sorted it out for me was to reinstall and then run deepquestai/deepstack:noavx using PowerShell. That way I could specify the port number for the DeepStack URL and set VISION-DETECTION=True (not scene detection) [command shown below], and then I used that port for the DeepStack URL on the AI Tool Settings tab.

PS C:\Users\me> docker run -e VISION-DETECTION=True -v localstorage:/datastore -p 85:5000 deepquestai/deepstack:noavx

Then the DeepStack URL used on the AI Tool Settings tab was the same host as before, with port :85.

And what was probably obvious to everyone, but I never found any info on: BI - Settings - Web server tab - the HTTP server option should be ticked 'on' (and use the same LAN IP and port # that you use in the AI Tool Camera tab - Trigger URL field).

Thanks pmcross, GentlePumpkin, and any other content creators!
 