[tool] [tutorial] Free AI Person Detection for Blue Iris

Tinbum

Pulling my weight
Joined
Sep 5, 2017
Messages
448
Reaction score
126
Location
UK
Have you tried reinstalling .NET 8?
 

chumoface

n3wb
Joined
Mar 26, 2022
Messages
4
Reaction score
1
Location
dc
Have you tried reinstalling .NET 8?
Yes, I also reinstalled the Visual Studio Installer and .NET 6 in addition to 8.

AI Tool version 2.2.24.8133 works fine

running
dotnet --list-runtimes

returns
Microsoft.AspNetCore.App 6.0.20 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 8.0.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 6.0.16 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 8.0.1 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 8.0.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 6.0.16 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 8.0.1 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 8.0.4 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
 

Chris Dodge

Pulling my weight
Joined
Aug 9, 2019
Messages
97
Reaction score
121
Location
massachusetts
Yes, I also reinstalled the Visual Studio Installer and .NET 6 in addition to 8.

AI Tool version 2.2.24.8133 works fine

running
dotnet --list-runtimes

returns
Microsoft.WindowsDesktop.App 6.0.16 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Sorry, missed this.

When I look at the installer code, it is literally calling 'dotnet --list-runtimes' and simply making sure
Code:
Microsoft.NETCore.App 6.0.
appears in the list.

It outputs the result of that command to "%TMP%\dotnet.txt", then deletes it when it's done.

Perhaps if your %TMP% environment variable (as opposed to %TEMP%) is not set to a valid folder, that could be a factor.

Can you get to it if you enter %TMP% in a file explorer address bar, or do you get an error?
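Going by that description, the installer's check amounts to something like this (a Python sketch of the described behavior; the function names are mine, not the installer's actual code):

```python
import subprocess

def has_net6_runtime(listing: str) -> bool:
    """Return True if a .NET 6 runtime line appears in `dotnet --list-runtimes` output."""
    return "Microsoft.NETCore.App 6.0." in listing

def check_installed() -> bool:
    # The installer writes this same output to %TMP%\dotnet.txt before scanning it.
    out = subprocess.run(
        ["dotnet", "--list-runtimes"],
        capture_output=True, text=True,
    ).stdout
    return has_net6_runtime(out)
```

The runtime list you posted would pass that check, since it contains a `Microsoft.NETCore.App 6.0.16` line.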

You can force a log file to be created for the install that might help. Run this from an administrative command prompt:

Code:
"C:\PathToSetup\AIToolSetup.2.6.53.exe" /LOG="%TEMP%\AITOOLSETUP.LOG"
 

dohat leku

Getting the hang of it
Joined
May 19, 2018
Messages
329
Reaction score
34
Location
usa
Guys - I'm getting "deepstack: timeout" on all my alerts, and they're all landing in cancelled alerts. My BI 5.5.5.13 x64 and DeepStack are from 2022. How do I find my DeepStack version to post here? And would this script, run on a daily basis, possibly help? Saw it on reddit:

del C:\DeepStack\redis\*.rdb

del C:\Users\username\appdata\Local\Temp\DeepStack\. /q
 

dohat leku

Getting the hang of it
Joined
May 19, 2018
Messages
329
Reaction score
34
Location
usa
Sorry, meant to say this folder has 80 GB of data: C:\Users\username\appdata\Local\Temp\DeepStack\.
 

dohat leku

Getting the hang of it
Joined
May 19, 2018
Messages
329
Reaction score
34
Location
usa
I'm also open to updating DeepStack to the latest version, as long as it's a good move and it'll work with an old 2022 BI 5.5.5.13, versus it being safer to simply stay with an older version.
 

Chris Dodge

Pulling my weight
Joined
Aug 9, 2019
Messages
97
Reaction score
121
Location
massachusetts
I'm also open to updating DeepStack to the latest version, as long as it's a good move and it'll work with an old 2022 BI 5.5.5.13, versus it being safer to simply stay with an older version.
DeepStack is a dead product and has been for a few years.

Uninstall DeepStack, delete it, and never look back. Use CodeProject.AI. They keep it up to date and it has a nice web interface for installing updates and components.

The latest version of AITOOL works great with it. Or use the latest version of BI directly with CodeProject and skip AITOOL if you like. It's not as powerful or flexible, but it gets the job done.
 

dohat leku

Getting the hang of it
Joined
May 19, 2018
Messages
329
Reaction score
34
Location
usa
So I have to install CodeProject, then AITool. Got it. Do you think it would work well with an older version of BI from 2022 (5.5.5.13)? I guess it would, because all BI is doing is sending it alerts and awaiting a response, right?
 

Chris Dodge

Pulling my weight
Joined
Aug 9, 2019
Messages
97
Reaction score
121
Location
massachusetts
So I have to install CodeProject, then AITool. Got it. Do you think it would work well with an older version of BI from 2022 (5.5.5.13)? I guess it would, because all BI is doing is sending it alerts and awaiting a response, right?
It should work fine with older versions of BI.
 

whoami ™

Pulling my weight
Joined
Aug 4, 2019
Messages
233
Reaction score
224
Location
South Florida
Is it possible to run multiple instances of CodeProject AI like it is with DeepStack?

I've been using DeepStack all this time running 4 instances at once, but figured CPAI has been around long enough to be a better option. With my first testing, though, I can't see how to run multiple instances of the same model, and with a single instance it built up 100+ images in the queue on the first day and is unusable.
 

Schrodinger's Cat

Young grasshopper
Joined
Nov 17, 2020
Messages
49
Reaction score
22
Location
USA
Is the GitHub monthly/one-time sponsor setup the best way to support this project, or is there a more direct way to support whoever is actively doing the development at the moment? AITool is awesome and I really want to kick a few bucks into the pot in hopes it sticks around.
 

Chris Dodge

Pulling my weight
Joined
Aug 9, 2019
Messages
97
Reaction score
121
Location
massachusetts
Is it possible to run multiple instances of CodeProject AI like it is with DeepStack?

I've been using DeepStack all this time running 4 instances at once, but figured CPAI has been around long enough to be a better option. With my first testing, though, I can't see how to run multiple instances of the same model, and with a single instance it built up 100+ images in the queue on the first day and is unusable.
Currently AITOOL handles the queue and only allows one request at a time to CPAI. I don't see an easy way, like with DeepStack, to run multiple instances.

However, I now see CPAI has its own queuing system that DeepStack never had, so it may be beneficial to update AITOOL to just hit CPAI with everything we get and let it handle what it can. This will take some doing in the code, so I'm not sure when I'll have time.

So the only option for now is to set up more CPAI servers to handle the requests:
  • [Same machine] Install Docker, then the CodeProject AI docker image on it. (easiest, least resources)
  • [Same machine] Install Windows Subsystem for Linux (WSL), then install the Linux version
  • [Same machine] Install another virtual machine tool (VMware/VirtualBox), install Windows, then install CodeProject AI normally
  • Install on most any other machine in your network.
  • Get a cheap Raspberry Pi, Jetson or Intel NUC and install the appropriate docker version
  • Or of course keep running DeepStack if you don't mind dealing with its instability
 

Chris Dodge

Pulling my weight
Joined
Aug 9, 2019
Messages
97
Reaction score
121
Location
massachusetts
@whoami ™ @Pentagano @Tinbum

New version:

  • Support CPAI Mesh / Queuing! Settings > AI SERVERS > Edit server > "Allow AI Server based queue". Because CodeProject.AI manages its own queue, it can handle concurrent requests (unlike DeepStack), so we ignore the fact that it is "in use" and just keep giving it as many images as we get to process. So far this actually seems to work really well. It should prevent some cases of the default 100 image queue error from happening. Note: when you enable this, it will be rarer that a server OTHER THAN THE FIRST is used.
  • If you still want other AITOOL AI servers to be used there are a few things you can do:
    1) Reduce the AI SERVER > Edit URL > 'Max AI server queue length' setting. CPAI defaults to 1024, so if, for example, you dropped that down to 4, it would only try the next server in line when the queue was above 4. You will have to test in your environment to see if this makes sense, as it may not.
    2) Reduce 'AI Server Queue Seconds larger than'. If a server's queue time gets too high, you can force it to go to the next AITOOL server in the list.
    3) Reduce the 'Skip if AITOOL Img Queue Larger Than' setting. If the AITOOL image queue is larger than this value, and the AI server has at least 1 item in its queue, skip to the next server to give it a chance to help lower the queue.
    4) Or, in AITOOL > Settings, enable the 'queued' checkbox. This way AITOOL will take turns and always use the server that was used the longest ago. This may not be ideal if some of the servers are much slower than others.

    Tip: In the CPAI settings web page, enable MESH and make sure it can talk to the other servers you may have configured (all have to be on the same network with open/forwarded UDP ports; docker-to-docker-to-physical instances may take some work to get to see each other). This way, CPAI will do the work of offloading to the next server in line!

    Tip: For faster queue processing, enable as many modules as you can (YOLOv5 6.2, YOLOv5.NET, YOLOv8, etc.). It will help spread the workload out, so in some cases you don't even need more than one CPAI server.

    Tip: If you use IPCAM Animal and a few others as 'linked servers', you will get errors if you have anything other than YOLOv5 6.2 enabled, because the models have not been built for the others yet. I haven't found a good way around this yet.

    Tip: If the MESH cannot see DOCKER or VM based instances of CPAI servers, edit your C:\ProgramData\CodeProject\AI\serversettings.json file and manually add the servers it cannot automatically find. For example:

    "KnownMeshHostnames": [ "prox-docker", "pihole"],

  • Then, stop/start the Codeproject.ai service.
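The fallback rules in 1) through 3) above can be sketched roughly as follows (a Python illustration; the names, thresholds, and the decision to fall back to the first server are mine, and the real AITOOL logic also tracks queue time, which is omitted here):

```python
from dataclasses import dataclass

@dataclass
class AIServer:
    name: str
    queue_length: int = 0          # items currently waiting on this server
    max_queue_length: int = 1024   # 'Max AI server queue length' (CPAI default)

def pick_server(servers, aitool_queue: int, skip_if_img_queue_over: int = 10):
    """Use the first server whose queue is under its limit; skip ahead when
    AITOOL's own image queue is backed up and the server already has work."""
    for s in servers:
        if s.queue_length >= s.max_queue_length:
            continue  # rule 1: per-server queue cap exceeded, try the next one
        if aitool_queue > skip_if_img_queue_over and s.queue_length >= 1:
            continue  # rule 3: give the next server a chance to lower the queue
        return s
    return servers[0]  # everything is busy; fall back to the first server
```

With a low `max_queue_length`, traffic spills to the second server as soon as the first one backs up, which is the behavior option 1) describes.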


A few more:
  • Some new columns in the Edit AI URL screen related to queue time, min, max, etc.: AIQueueLength, AIQueueLengthCalcs, AIQueueTimeCalcs, etc.
  • Update setup to only check for .NET 8 rather than 6
  • Implement new easier to use version of Threadsafe classes. This should also shrink the json settings file a bit and make the code easier to read.
  • If you enable 'Ignore if offline' for a CPAI server that is running in mesh mode and mesh returns an error (e.g. the mesh computer was turned off), you will not see an error.
  • Fixed a bug where, when using linked servers, there could be duplicates or disabled URLs in the list, slowing down the overall response time.
  • Gotham City's corruption problem is still a work in progress. I'm Batman.
 

whoami ™

Pulling my weight
Joined
Aug 4, 2019
Messages
233
Reaction score
224
Location
South Florida
TLDR: Old Deepstack good, CodeProject AI bad.

Over the last couple of weeks I've been testing out versions of DeepStack and CPAI. I even bought a new GPU, an RTX A2000 12GB, to replace my Quadro P400, to see if the limiting factor for me with CPAI was my 2GB Pascal series P400. So far it appears it is not.

I still have a ways to go: I have not been able to get CPAI's YOLOv8 to detect my RTX A2000, or dug into this MESH implementation. But I have been able to get the old reliable version, DeepStack-Installer-GPU-2021.09.1, to work with my newer Ampere series card by replacing some of the dependencies in the windows_packages folder in DeepStack's directory.

The version DeepStack-Installer-GPU-2022.01.1, which works with newer series Nvidia cards, has some issue where it is unable to process about 1 out of every 20 or so images; it fills up the temp folder in $user/AppData/local/temp/DeepStack within a day or two and becomes unusable.


The old version, DeepStack-Installer-GPU-2021.09.1, if left unchecked, will miss 1 out of every few thousand images and will fill up the temp folder after many months and become unusable.

@Chris Dodge if you could incorporate a checkbox option in the AITool deepstack tab that runs a script checking $user_variable/AppData/local/temp/DeepStack about every 4 hrs for files over 1 hr old and deleting them, I think it would solve DeepStack's instability issue when using DeepStack-Installer-GPU-2021.09.1.
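The cleanup that request describes is only a few lines; a minimal sketch in Python, assuming the DeepStack temp-folder layout mentioned above (the helper name and age threshold here are placeholders, not anything AITool ships):

```python
import time
from pathlib import Path

def purge_old_files(folder: Path, max_age_hours: float = 1.0) -> int:
    """Delete files in `folder` older than `max_age_hours`; return how many were removed."""
    cutoff = time.time() - max_age_hours * 3600
    removed = 0
    for f in folder.iterdir():
        if f.is_file() and f.stat().st_mtime < cutoff:
            f.unlink()
            removed += 1
    return removed

# The every-4-hours part would come from Windows Task Scheduler (or a loop
# with time.sleep), pointed at %LOCALAPPDATA%\Temp\DeepStack.
```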

Also, it appears that the DeepStack project has been dead for a while, so if someone could fork and/or repackage DeepStack-Installer-GPU-2021.09.01.exe with the updated windows-packages for newer Nvidia cards, it would be usable for others. I started looking into both of these things last night; I'm fairly confident I could figure them out, but I would be starting from scratch and it would take me a long time to accomplish.

Link to post by @MikeLud1 with the instructions for the updated dependencies.

ATM, IMO, it seems that DeepStack is either as accurate or more accurate with object detection, and more efficient by running multiple instances, than CodeProject AI.
 

Chris Dodge

Pulling my weight
Joined
Aug 9, 2019
Messages
97
Reaction score
121
Location
massachusetts
TLDR: Old Deepstack good, CodeProject AI bad.
It's been great for me.

You should test with Yolov5 3.1, Yolov5 6.2 and Yolov5.net (DirectML).

Yolov5 3.1: "Provides Object Detection using YOLOv5 3.1 targeting CUDA 10 or 11 for older GPUs. "
Yolov5 6.2: "Provides Object Detection using YOLOv5 6.2 targeting CUDA 11.5+, PyTorch < 2.0 for newer GPUs. "
Yolov5.net: "This module is best for those on Windows and Linux without CUDA enabled GPUs" (But it is just as fast as CUDA in my experience and smaller memory footprint because it uses DirectX for acceleration)

For better detection, edit the settings for each module and set MODEL SIZE to large or huge. No, it won't be as fast, but the difference isn't big enough to matter.

Also consider that using all of this in a VM could be a factor in your issues, since the GPU is going through another layer. Not sure what VM you use, but if it's absolutely required, maybe try the new version of VMware Workstation (which is now free) and make sure the latest VMware Tools are installed within the VM. Or consider blowing away your drive, installing Proxmox as the boot OS, then within there create one VM for Windows and one VM for some Linux version/Docker and install CPAI in there. It may be more compatible that way and your whole system would be much more flexible.

And if you are running deepstack at the same time as codeproject, maybe there is some kind of conflict?

> more efficient by running multiple instances, than using CodeProject AI.
See my last post. The new version lets CodeProject queue the images correctly and can potentially be much faster, especially with more than one module enabled. Maybe better than running multiple DeepStacks.

> check box option in AITool deepstack tab
The AITOOL > Deepstack tab has a reset button that deletes those temp files, and it will auto-restart DeepStack if you start getting errors from it (but not on a schedule).

You should search for similar issues and ask for help on the CodeProject.AI forums to get fully up and running with that, rather than resorting to a dead, unstable product.
 