CodeProject.AI Version 2.0

Thanks for the link. That's a good reference. I already had the drivers installed, though it doesn't show anything for the USB Coral under Coral Settings. It does show up when I run lsusb from the unRAID CLI. I passed through the Coral via the container's Device setting. I've tried uninstalling and re-installing the Coral module about 10x. I never see it change from CPU to TPU.
It won't work until it's showing under the Coral driver settings on unRAID.
Go to Tools > System Settings and make sure it's not bound to VFIO. It shouldn't have a check next to it. If you were passing it through to a VM then yes, we would use VFIO, but in this case we don't want that.
 
ATM Coral is showing poor performance, not dissimilar to CPU alone, at least in my experience.

I'm also finding myself unable to install CPAI at all, in any form, on the latest BI version I have (must check for new versions), and I've re-installed a clean version of Windows twice, completely wiping the system! Ken is aware and hopefully working on a fix.

I do wish they'd find a better way of getting BI and CPAI to work together, or at least to uninstall and de-integrate properly. Whenever I try to get Coral working and something goes wrong, I often end up with CPAI not working, and no amount of uninstalls of either program will fix it, nor even DISM commands on Windows, resulting in a brand new installation being required. As I don't have any backup software for my BI PC, this is a real pain in the A$$. I have to completely wipe the system and re-install Windows, BI, and CPAI, and re-configure everything, which takes hours. The latest issue caused BI or CPAI.sys to go missing or corrupt (I forget which one). We really need a clean installer, or at least a repair facility that finds all the files and repairs everything, including the registry, and the same with uninstallation: remove all traces, including the registry.
 
Whenever I try to get Coral working and something goes wrong, I often end up with CPAI not working, and no amount of uninstalls of either program will fix it ... I have to completely wipe the system and re-install Windows, BI, CPAI, and re-configure everything, which takes hours.

If reformatting fixes the issue, then the uninstallers are missing things. You could check whether there are any leftover registry entries. I've used RegScanner in the past to hunt down registry entries. There could also be leftover data in the AppData folder.

EDIT:
Also, something like this or RegShot might be useful: take a snapshot of the registry before installing and another one after, then compare them.
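For the snapshot-and-compare idea, a rough sketch from an elevated Command Prompt (the search term and hive choices are just my guesses at where CPAI leftovers would live; RegShot does the same thing with a GUI):

```shell
:: Hunt for leftover entries by name (search term is an assumption)
reg query HKLM\SOFTWARE /f "CodeProject" /s
reg query HKCU\SOFTWARE /f "CodeProject" /s

:: Snapshot/compare approach: export before, install or uninstall, export after, diff
reg export HKLM\SOFTWARE before.reg /y
:: ... run the CPAI installer or uninstaller here ...
reg export HKLM\SOFTWARE after.reg /y
fc before.reg after.reg > reg-diff.txt
```

Don't forget %ProgramData% and %LocalAppData% too; leftover files there can break a reinstall just as easily as registry keys.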
 
It won't work until it's showing under the Coral driver settings on unRAID.
Go to Tools > System Settings and make sure it's not bound to VFIO. It shouldn't have a check next to it. If you were passing it through to a VM then yes, we would use VFIO, but in this case we don't want that.

---UPDATE---

It's detected now, but I'm getting warnings in the log. Will test it out.

[screenshot]




----------------Initial Post-----------------------------------------------
When researching how to unbind things from VFIO, I shut down the server, removed the unRAID USB thumb drive, and searched for vfio-pci.cfg and vfio-pci.bak. Neither of those files is on the thumb drive, so I'd assume nothing is bound.
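For anyone checking the same thing, here's a quick sanity check runnable from the unRAID CLI without pulling the flash drive. The path and USB IDs are my assumptions based on how the Coral USB stick normally enumerates (it shows up under one vendor ID before the Edge TPU runtime initializes it, and another afterwards):

```shell
# Anything listed in this file is bound to vfio-pci at boot
cat /boot/config/vfio-pci.cfg 2>/dev/null || echo "no vfio-pci.cfg - nothing bound"

# Look for the Coral stick. It enumerates as 1a6e:089a (Global Unichip) until
# the Edge TPU runtime touches it, then re-enumerates as 18d1:9302 (Google Inc.)
lsusb 2>/dev/null | grep -iE '1a6e:089a|18d1:9302' || echo "Coral USB not visible"
```

If lsusb only ever shows the 1a6e ID, the stick is visible on the bus but no driver has claimed it yet.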

Previously, when I opened the Coral Settings, the message said I do not have any PCI-based Coral devices:

[screenshot]

It might be due to the fact that I am using a USB 3 PCI card (https://www.amazon.com/gp/product/B01I39D15A/)? I'll change the PCI slot the card is in to see if that makes a difference.



I checked Tools > System and nothing is checked:

[screenshot]

When I rebooted I did see some lines about installing the Coral drivers; I missed taking a screenshot, though it is showing up as a Google device under USB settings:

[screenshot]
 
So am I the only one who thinks the Coral support should get its own thread? That's not because I don't care... I've actually been looking out for this and just think the info is going to become a mess in this thread.
I asked Mike a while back about having separate threads: just CPU, with GPU, etc.
 
So am I the only one who thinks the Coral support should get its own thread? That's not because I don't care... I've actually been looking out for this and just think the info is going to become a mess in this thread.
Agreed.

Sent from my iPlay_50 using Tapatalk
 
So am I the only one who thinks the Coral support should get its own thread? That's not because I don't care... I've actually been looking out for this and just think the info is going to become a mess in this thread.

It wouldn't matter; people have loaded this thread with ANPR questions, and that has its own thread as well LOL
 
Apologies for the word vomit, but I need some help plz.

Hate to even ask this, but I'm at my wits' end and have no hair left to pull out at this point, and desperately need some direction/guidance. I've been running a 3rd-gen i5 desktop and DeepStack with 9 cameras for the better part of three years. DeepStack worked VERY well, with the exception that randomly, every five or six weeks, it would just die and require a reinstall to resume functioning. I discovered CodeProject.AI, read hundreds of pages of forum postings on it, then decided to pull the trigger and migrate. That was.... rough, but kind of successful?

Unfortunately I've developed a need for some LPR cameras, and while CodeProject seemed to be working (except that my poor i5 CPU was basically at 100% usage 100% of the time), I decided to perform a few upgrades: a more powerful CPU, more RAM, and, after all that, a newer version of BI while I was at it. Great, now I've got my LPR cams installed and decently tuned in, except, even now with an i7-8700 and 16GB of RAM, CodeProject.AI seems to be maxing out this system resource-wise as well. This sometimes leads to abandoned object analysis, or 20+ second returns to identify an object (on top of identifying everything under the sun as a person/car/truck/van/horse/cat/dog/chair every time there is ANY motion detected, but that's a different problem), if it returns at all.

More purchases were made, and unfortunately, this is where things get.... bad? Complicated. I picked up 2x NVIDIA Tesla P4s and some extra RAM for my main hypervisor server that runs 98% of the compute-based stuff I do, on Proxmox. From the massive amount of reading I did, this sounded like it'd be a piece of cake! I even found a few guides to follow, none of which have returned anything remotely functional. I'll bullet-point the flustered cluster of what I'm attempting, hopefully to clarify my rambling a bit:

  • Blue Iris: i7-8700 desktop, 16gb ram, Server 2k19 running bare metal.
  • Hypervisor: 2x e5-2690v3s, 256gb ram, Proxmox 6.4 (don't judge me!) running bare metal.
    • GPU: 2x NVidia Tesla P4 headless datacenter cards
    • NVIDIA-SMI 460.106.00 Driver Version: 460.106.00 CUDA Version: 11.2
  • Proxmox LXC Container: Ubuntu 20.04LTS container, unprivileged = no, nesting = 1
    • <ctid>.conf contains:
      • lxc.cgroup.devices.allow: c 195:* rwm
        lxc.cgroup.devices.allow: c 238:* rwm
        lxc.cgroup.devices.allow: c 241:* rwm
        lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
        lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
        lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
        lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
        lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
        lxc.mount.entry: /dev/nvidia-caps/nvidia-cap1 dev/nvidia-caps/nvidia-cap1 none bind,optional,create=file
        lxc.mount.entry: /dev/nvidia-caps/nvidia-cap2 dev/nvidia-caps/nvidia-cap2 none bind,optional,create=file
        lxc.apparmor.profile: unconfined
        lxc.cgroup.devices.allow: a
        lxc.cap.drop:
        lxc.mount.auto:
    • NVIDIA-SMI 460.106.00 Driver Version: 460.106.00 CUDA Version: 11.2 (no kernel driver install)
    • Docker version 24.0.6, build ed223bc (installed using instructions found here)
      • Running docker run -it --rm --gpus all ubuntu nvidia-smi returns an identical dataset found on my hypervisor host, along with the docker host lxc container
    • Running nvidia-container-cli info returns the following:
      nvidia-container-cli info
      NVRM version: 460.106.00
      CUDA version: 11.2

      Device Index: 0
      Device Minor: 0
      Model: Tesla P4
      Brand: Tesla
      GPU UUID: GPU-83729f44-3fb8-b4ed-2efb-656e152d3d12
      Bus Location: 00000000:82:00.0
      Architecture: 6.1

      Device Index: 1
      Device Minor: 1
      Model: Tesla P4
      Brand: Tesla
      GPU UUID: GPU-eb6187de-dff9-83e5-ce33-140d9e466b12
      Bus Location: 00000000:83:00.0
      Architecture: 6.1
    • Attempting to run docker run --name CodeProject.AI -d -p 32168:32168 --gpus all codeproject/ai-server:gpu returns:
      • Error response from daemon: Cannot restart container CP.AI: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
        nvidia-container-cli: requirement error: unsatisfied condition: cuda>=11.7, please update your driver to a newer version, or use an earlier cuda container: unknown
  • Purpose: From the documentation I've read, you can use a single GPU with PCIe passthrough to multiple LXC containers for distributed GPU compute loads. Ideally, I'd like Plex, CP.AI, and a few other things to be able to share a pair of GPUs, though right now I'd be thrilled beyond belief if I could get CP.AI to even START on this damn thing!
What in the hell am I missing? At this point I've got no less than fifteen hours invested in this project and have only moderately progressed toward my goal of making something even functional, much less with slightly improved identification times. I've uninstalled and reinstalled the driver/CUDA on the bare-metal hypervisor no fewer than ten times, countless reboots, and deleted and recreated LXC containers probably two dozen times... Plex Media Server runs on a dedicated 20.04 LTS LXC container and has no problems at all utilizing the GPU for transcoding. I'd just LOVE to take one of the damn P4s out of the server and install it in the Windows PC, but one of these cards won't physically fit in the chassis of my BI PC, nor do they have any active cooling, as they're intended for datacenter chassis machines like the one I've got them installed in.
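For what it's worth, the error message above pins down the root cause: the :gpu image was built against CUDA 11.7+, while an R460 host driver only exposes CUDA 11.2, and a container can never use a newer CUDA than the host driver provides. A sketch of the check and the two ways out (the older image tag shown is hypothetical; check Docker Hub for what's actually published):

```shell
# The container is capped by what the host driver exposes.
# CUDA 11.7 needs an R515+ driver; R460 tops out at CUDA 11.2.
driver_major=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader 2>/dev/null | cut -d. -f1)
if [ "${driver_major:-0}" -lt 515 ]; then
  echo "driver too old for cuda>=11.7 images: upgrade the host driver,"
  echo "or run an image built against CUDA <= 11.2, e.g. (tag hypothetical):"
  echo "docker run -d -p 32168:32168 --gpus all codeproject/ai-server:gpu-2.0.8"
fi
```

Upgrading the driver on the Proxmox host means reinstalling it inside each LXC container too (with --no-kernel-module), since the userspace libraries must match the host kernel module version exactly.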


Any suggestion would be endlessly appreciated with this!!
 
This morning, with BI back on 5.7.8.3 (4th August 2023) and CodeProject.AI on 2.0.8.0, I have a working AI system back. Global Settings > AI again shows the modules in the "Use custom models" box, which had been missing with CP on 2.2.4, or going back to 2.1.11.
I'm sure Ken will get the integration of BI with CP AI sorted, but I might be away for a few days and want a working system.
 
Apologies for the word vomit, but I need some help plz.

Hate to even ask this,

In my first attempt with CPAI, as I recall, I set it up with GPU processing. But I had global hardware acceleration set to Nvidia NVDEC and stressed the system to the point where BI was slow to respond, or even so slow it seemed unresponsive.
So I decided to turn hardware acceleration OFF. I noticed a difference in system lag/performance right away: things began working better, and I noted a slight increase in CPU/GPU percentages, but nothing crazy high.
I'm running an Nvidia GTX 1060, the 6GB RAM version.

So I'm wondering what your hardware acceleration is set to....
 
In my first attempt with CPAI, as I recall, I set it up with GPU processing. ... So I'm wondering what your hardware acceleration is set to....

On BI specifically, in the Cameras global tab, "Hardware accelerated decode" is set to Intel +VPP. If you let the resource monitor run for a day monitoring only BlueIris.exe, that process isn't taking more than maybe ~15% of the CPU, and the GPU (Intel Quick Sync) never gets above maybe 10%. The issue I'm running into is that CP.AI is murdering my resources, which is why I'm attempting to move it off the Windows Blue Iris box onto a VM and use the GPU exclusively for AI. CP.AI.Server.exe and python.exe, on the other hand, consume >90% of the CPU any time there is motion on any one of my cameras. On the Windows instance, I tried switching the "License Plate Reader" CP.AI module to "GPU" in hopes that maybe it could use Intel Quick Sync; however, it restarts the module right back to "CPU", and that, along with YOLOv5 6.2, continues to absolutely wreck the processor.

I can't move my AI off the CPU on the Windows box to a dedicated "virtual machine" with dedicated GPU processing, because I can't get the damn docker container to start...
 
Blue Iris 5.6.8.4
Deepstack Docker 22.01.01 on Ubuntu 22.04
This is my go-to when I get frustrated dealing with recent versions of Blue Iris and Code Project AI server.
I just need something that works.
:-(

 
Is anyone else missing the "installable" modules from the "Install Modules" page as of recently?

When I updated to 2.2.4-beta... my Coral USB TPU integration stopped working properly, so I decided to switch back to CPU temporarily, ensure it was stable... then move back to Coral once it was.

Now that it's stable... I'm looking to move back, but all of the modules that used to show under the "Install Modules" page are no longer there... only the currently-installed modules show to "uninstall".

Code:
Face Processing                 Private  1.5    2023-08-12  Installed  GPL-3.0  A number of Face image APIs including detect, recognize, and compare.
Object Detection (YOLOv5 .NET)  Private  1.5    2023-05-04  Installed  MIT      Provides Object Detection using YOLOv5 ONNX models with DirectML. This module is best for those on Windows and Linux without CUDA enabled GPUs.
Object Detection (YOLOv5 6.2)   Private  1.6.1  2023-09-17  Installed

Not sure what's up... I have tried a reboot, etc - but thought I would ask here before reinstalling.

Everything is working fine on the CPU at the moment for detection, but I'm looking to transition back to the Coral USB TPU, as the CPU fan on my little i7-7700 OptiPlex is blasting every so often due to CodeProject and a 4K Dahua camera I just added.

Thanks!

Best Regards.

dg6464
 
Is anyone else missing the "installable" modules from the "Install Modules" page as of recently?
Have you noticed that the entire CodeProject.com website is currently down? That is likely affecting the ability to do both new "full" installations as well as adding downloadable modules.
The whole CodeProject setup seems entirely too fragile, IMHO. I currently have 2.1.18, working since it was first released many months ago, but I hesitate to touch it at all or attempt upgrading to 2.2.4, because I know from recently trying to install 2.1.18 on another test machine that the installation fails, so I have no return path if something goes wrong with upgrading to the latest version. The download "stub" used to be tiny (~410KB) and is now much bigger at around 25MB, but the true full installation on disk, after it continues to download in the background for 20-30 minutes, is huge: around 6-10GB. It's really a shame that we can't get a true full, downloadable installation file that is genuinely self-contained and complete, so we aren't so reliant on the "cloud" to install as needed.
 
Have you noticed that the entire CodeProject.com website is currently down? That is likely affecting the ability to do both new "full" installations as well as adding downloadable modules.

That... I did not - thanks for sharing. It's rare for an entire website to go down, so not something I typically check, especially with all of the bugs/glitches experienced on the regular.

I'll just give it a few days until they get things back up and running then... and give it another go.
 
That... I did not - thanks for sharing.

I just relocated my CPAI machine to another place in the house, and when it powered on again after some hours it did not work properly.
The vision.html page, where it is easy to test that things are working, displayed "Unable to contact AI Server".

And guess what: as soon as codeproject.com came up again, my CPAI machine started to work for a few minutes. Now codeproject.com is down again and I have the "Unable to contact AI Server" problem again.

I'm running version 2.1.10-Beta. There is definitely a dependency on codeproject.com being up, and I don't like that at all!
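If it helps anyone isolate the same problem, you can poke the server locally and bypass the website entirely. The port assumes a default install, and /v1/vision/detection is the documented detection route; adjust for your setup:

```shell
# Does the local server answer at all? Prints only the HTTP status code.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:32168/

# Exercise detection directly with a local image, no browser or dashboard involved
curl -s -X POST -F image=@/path/to/test.jpg \
  http://localhost:32168/v1/vision/detection
```

If the first command returns 200 but detection still fails only while codeproject.com is down, that points at the server phoning home for module metadata rather than a problem with your local install.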
 
I just relocated my CPAI machine to another place in the house, and when it powered on again after some hours it did not work properly.

I was having a ton of issues with 2.2.4. I rolled back to the 2.0.8 Docker container on my unRAID machine. It's all working again.