CodeProject.AI Version 2.9.0

If I'm running this in Docker on a Linux VM with no GPU, is the YOLOv5 6.2 module or the .NET module recommended?

All I'm getting is "Dayplate" when the occasional license plate is read. More often than not I get "Nothing Found" even though the images are perfectly clear.

If I use the CodeProject.AI Explorer, it reads this plate just fine:
View attachment 209457

But BI got that same plate returned from CPAI as "Dayplate". So what am I missing here? Why am I not getting the actual license plate number/letters back from CPAI when I can read it with the CPAI Explorer?

View attachment 209458 View attachment 209459
To your question, I would say try each for a day and see which you have better luck with. I keep going back to .NET myself. Not a ton of difference between them in my observations, but a slightly better match rate with .NET... though what works for one setup isn't universal.
I too am running CPU/Linux/Docker.
I cannot speak to the plate stuff as I do not utilize that feature. I use mine to watch the dogs and wildlife out back.
 
Anyone else able to confirm this behavior?
I am running Docker CPAI 2.9.4 and saw 2.9.5 on Docker Hub. So I modified the compose file and started it up. It did not really appear to pull down a new image (it started too quickly), and the web interface still says 2.9.4. I modified the tag to latest with the same results. I ended up deleting the existing image to force a re-download on the next container up command.
No joy on getting 2.9.5.
Anyone else able to get 2.9.5 pulled down?
There is something wrong with both the 2.9.5 and latest tags for CPAI. I tried running it on a separate Docker installation on a different server with the same results. Each comes up running 2.9.4 and says 2.9.5 is available as an update. Not sure how to report it to the developers.
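For anyone hitting the same stale-image problem, this is roughly the sequence for forcing a genuinely fresh pull; a sketch only, since the image name below is an assumption (check `docker image ls` for what your setup actually uses):

```shell
# Stop the stack so nothing holds the old image.
docker compose down
# Remove the locally cached image so "up" cannot silently reuse it.
# (Repo name assumed -- substitute the one shown by `docker image ls`.)
docker image rm codeproject/ai-server:latest
# Pull fresh and restart in the background.
docker compose pull
docker compose up -d
```

After that, check the version reported in the web interface; if it still shows the old release, the tag itself is pointing at the old image on the registry.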
 
Tried the 2.9.x versions with the Coral TPU module, but on none of the 2.9.x versions can I download the models. Running CodeProject.AI on Unraid. Currently back on 2.8.0, where everything works as it should.
 
Tried the 2.9.x versions with the Coral TPU module, but on none of the 2.9.x versions can I download the models. Running CodeProject.AI on Unraid. Currently back on 2.8.0, where everything works as it should.
If you wanted you could try the process someone else posted a few pages back:
1 - Open Control Panel and select "Programs and Features"
2 - Select "CodeProject.AI Server 2.x.x" and then uninstall
3 - Open File Explorer and delete both the C:\Program Files\CodeProject and C:\ProgramData\CodeProject directories.
4 - Reinstall CPAI
 
Is there any support happening with CPAI anymore? Where do we post questions now in order to seek answers?
 
Is there any support happening with CPAI anymore? Where do we post questions now in order to seek answers?
That's a great question. I did not see where to post a comment to the devs who are still maintaining the product. I want to let them know of the issues I am seeing when trying to update versions in Docker. *shrugs*
Post it here if you figure it out.
 
That's a great question. I did not see where to post a comment to the devs who are still maintaining the product. I want to let them know of the issues I am seeing when trying to update versions in Docker. *shrugs*
Post it here if you figure it out.
What is the official website now for the project?
 
If you wanted you could try the process someone else posted a few pages back:
1 - Open Control Panel and select "Programs and Features"
2 - Select "CodeProject.AI Server 2.x.x" and then uninstall
3 - Open File Explorer and delete both the C:\Program Files\CodeProject and C:\ProgramData\CodeProject directories.
4 - Reinstall CPAI
Thanks for the reply.

I'm running CPAI in a Docker container on Unraid. I did try a complete reinstall with the 2.9.x versions, but it did not help.

I also tried modifying modulesettings.json to add the 2.9.x version I was installing, but that didn't help either.

Currently wondering what the state of Google Coral support is? There does not seem to be a lot of activity there.
 
Thanks for the reply.

I'm running CPAI in a Docker container on Unraid. I did try a complete reinstall with the 2.9.x versions, but it did not help.

I also tried modifying modulesettings.json to add the 2.9.x version I was installing, but that didn't help either.

Currently wondering what the state of Google Coral support is? There does not seem to be a lot of activity there.
Gotcha. Well, I can only hope that the version you are running does what you need, in which case there is no need to update just for updating's sake.
Best of luck!
 
Opinion wanted: I'm currently running BI in a VM on my i7-12700 (12th gen) Unraid server. Right now I have CPAI in a Docker container, using CPU only. The machine has the iGPU exposed to the Docker containers (for Plex HW transcoding), and I have an OLD GT730 passed through to the VM, which BI uses for the cameras.

I'm debating whether I should try to pass through the iGPU and get DirectML working (I'd have to switch from Docker to running CPAI under the Windows VM, I think)... or perhaps swap the GT730 out for a GTX1080 I have lying around, since it has the required CUDA support.

Just wondering if any of that is worth doing considering these inference times I'm getting:

Thanks in advance for any advice. :)

[attached screenshot of inference times]
 
Opinion wanted: I'm currently running BI in a VM on my i7-12700 (12th gen) Unraid server. Right now I have CPAI in a Docker container, using CPU only. The machine has the iGPU exposed to the Docker containers (for Plex HW transcoding), and I have an OLD GT220 passed through to the VM, which BI uses for the cameras.

I'm debating whether I should try to pass through the iGPU and get DirectML working (I'd have to switch from Docker to running CPAI under the Windows VM, I think)... or perhaps swap the GT220 out for a GTX970 I have lying around, since it has the required CUDA support.

Just wondering if any of that is worth doing considering these inference times I'm getting:

Thanks in advance for any advice. :)

View attachment 210046
Those are decent CPU times. If you do try to pass a GPU through, do a backup of your VM first, just in case. I would expect GPU times to drop into the double digits. Not sure what the benefit in your use case would be, but there is one way to find out.
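If the GPU route is attempted while keeping CPAI in Docker, the GPU has to be exposed to the container explicitly. A minimal sketch, assuming the NVIDIA Container Toolkit is installed on the host; the image tag below is a placeholder, since the current CUDA-enabled tag should be checked on Docker Hub:

```shell
# Placeholder tag -- check Docker Hub for the current CUDA-enabled CPAI tag.
CUDA_TAG="cuda-placeholder"
# Run CPAI with all host GPUs visible to the container.
# 32168 is CPAI's default port.
docker run -d --gpus all \
  -p 32168:32168 \
  --name codeproject-ai \
  "codeproject/ai-server:${CUDA_TAG}"
# Sanity check: confirm the container can see the GPU at all.
docker exec codeproject-ai nvidia-smi
```

If `nvidia-smi` fails inside the container, the passthrough/toolkit setup is the problem, not CPAI.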
 
+1 above - your personal reaction time isn't going to be any faster than the savings. Now if you were at 15 seconds (15,000 ms) and could get to 100 ms or even 1,000 ms, that would be worth it. But sub-1,000 ms isn't worth the risk and aggravation if it goes south.
 
+1 above - your personal reaction time isn't going to be any faster than the savings. Now if you were at 15 seconds (15,000 ms) and could get to 100 ms or even 1,000 ms, that would be worth it. But sub-1,000 ms isn't worth the risk and aggravation if it goes south.

I remember in the early days (I think with Deepstack) doing everything in Windows, and getting the CUDA environment correct was challenging, to say the least. My other concern is that even if I drop the times by 40-50 ms, it will probably increase power consumption quite a bit, and that affects the UPS it's plugged into (run time plus the overall electric bill).
 
Tried the 2.9.x versions with the Coral TPU module, but on none of the 2.9.x versions can I download the models. Running CodeProject.AI on Unraid. Currently back on 2.8.0, where everything works as it should.
Maybe this is the reason for my issue.
My TPU always gets detected, but it falls back to CPU after a couple of seconds.
And my second issue is that YOLOv5 6.2 does not detect the GPU, but YOLOv8 does.

I am using an Nvidia T400 under Windows.

 
Alright, circling back... I played musical chairs with the GPUs in the house. I swapped out the GT770, which didn't support a high enough CUDA version, for a GTX1080; definitely a big improvement. The one thing I haven't messed with yet is custom models; not sure that's needed with these times (or will it improve accuracy?).

Usually the very first detection of a group is slower because the GPU clock has dropped to idle; it then ramps back up as the detections come in.

[attached screenshot of inference times]
 
The one thing I haven't messed with yet is custom models; not sure that's needed with these times (or will it improve accuracy?).
The custom models have labels removed from them (for example bench, handbag, and many more), so if you do not need to detect those, it eliminates false positives. They are also more accurate.
Usually the very first detection of a group is slower because the GPU clock has dropped to idle; it then ramps back up as the detections come in.
The first detection is always slow because the model has to be loaded into memory; once it is in memory, it stays there.
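That first-hit model load is easy to see from the command line. A quick sketch, assuming CPAI on its default port 32168 and a local test image (`test.jpg` is a placeholder): the first request pays the model-load cost, the second hits the already-resident model.

```shell
# First call: the server loads the model into memory (slow).
time curl -s -X POST http://localhost:32168/v1/vision/detection \
  -F image=@test.jpg > /dev/null
# Second call: the model is already resident, so only inference time remains.
time curl -s -X POST http://localhost:32168/v1/vision/detection \
  -F image=@test.jpg > /dev/null
```

The gap between the two `time` readings is roughly the load cost BI sees on that first trigger after a quiet spell.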
 
The custom models have labels removed from them (for example bench, handbag, and many more), so if you do not need to detect those, it eliminates false positives. They are also more accurate.

The first detection is always slow because the model has to be loaded into memory; once it is in memory, it stays there.

Thanks!!! ... I think I got it now. At first it was doing double detection, running the custom model AND the "stock" one; I had to uncheck "Default object detection" to fix that. Looks pretty good now:

EDIT: Bonus is that the first detection after a long delay now seems to be at a similar speed; before, with the default model, that first one was much slower. Definitely worth doing. Guessing it has something to do with the size of the default model vs. the ipcam-combined one.

[attached screenshot of detection times]
 