CodeProject.AI Version 2.0

Have you noticed that the entire CodeProject.com website is currently down? That is likely affecting the ability to do new "full" installations as well as to add downloadable modules.
The whole CodeProject setup seems entirely too fragile, IMHO. I have had 2.1.18 working since it was first released many months ago, but I hesitate to touch it at all or attempt upgrading to 2.2.4: I know from recently trying to install 2.1.18 on another test machine that the installation fails, so I would have no return path if something went wrong with upgrading to the latest version. The download "stub" used to be tiny (~410 KB) and is now much bigger at around 25 MB, but the true full installation on disk, after it continues to download in the background for 20-30 minutes, is huge, around 6-10 GB. It's really a shame that we cannot get a true full, downloadable installation file that is genuinely self-contained and complete, so we aren't so reliant on the "cloud" to install as needed.

One of the main reasons why I am still on DeepStack. I know many run CodeProject just fine, but we never saw as many issues with DeepStack as we do with CodeProject.
 
I just relocated my CPAI machine to another place in the house, and when it powered on again after some hours it did not work properly.
The vision.html page, where it is easy to test that things are working, displayed "Unable to contact AI Server".

And guess what: as soon as codeproject.com came back up, my CPAI machine started working for a few minutes. Now codeproject.com is down again and I have the "Unable to contact AI Server" problem again.

I'm running version 2.1.10-Beta. There is definitely a dependency on codeproject.com being up, and I don't like that at all!

When I was testing CodeProject I noticed that it would throw that error any time I disconnected the computer from the internet, which is effectively the same situation as their website being down. Installing the Microsoft loopback adapter fixed it:

1. Right-click the Windows Start menu icon and select Device Manager; the Device Manager window will open immediately (or open Device Manager any other way you prefer).
2. Click Action and select "Add legacy hardware".
3. Click Next on the welcome screen.
4. Choose "Install the hardware that I manually select from a list" and click Next.
5. Scroll down, select "Network adapters" from the offered common hardware types, and click Next.
6. Select Microsoft as the manufacturer, then select the "Microsoft KM-TEST Loopback adapter" card model and click Next.
7. Click Next.
8. Click Finish.

Now that annoying "Unable to contact AI server" error is gone!
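
For anyone who prefers the command line, the same adapter can reportedly be installed from an elevated prompt with devcon. A sketch, assuming devcon.exe (shipped with the Windows WDK, not with Windows itself) is on the PATH; netloop.inf is the stock INF for the KM-TEST adapter:

  rem install the Microsoft KM-TEST loopback adapter non-interactively
  devcon.exe install %windir%\inf\netloop.inf *msloop

Either way, the adapter should then show up under Network adapters in Device Manager.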
 
Apologies for the word vomit, but I need some help plz.

Hate to even ask this, but I'm at my wit's end, have no hair left to pull out at this point, and desperately need some direction/guidance. I have been running a 3rd-gen i5 desktop with DeepStack and 9 cameras for the better part of three years. DeepStack worked VERY well, with the exception that randomly, every five or six weeks, it would just die and require a reinstall to resume functioning. I discovered CodeProject.AI, read hundreds of pages of forum postings on it, then decided to pull the trigger and migrate. That was... rough, but kind of successful?

Unfortunately I've developed a need for some LPR cameras. Although CodeProject seemed to be working (aside from the fact that my poor i5 CPU was basically at 100% usage 100% of the time), I decided to perform a few upgrades: a more powerful CPU, more RAM, and, while I was at it, a newer version of BI. Great, now I've got my LPR cams installed and decently tuned in, except that even with an i7-7800 and 16 GB of RAM, CodeProject.AI seems to be maxing out this system resource-wise as well. This sometimes leads to object analysis being abandoned, or 20+ second response times to identify an object, if it responds at all (on top of identifying everything under the sun as a person/car/truck/van/horse/cat/dog/chair every time ANY motion is detected, but that's a different problem).

More purchases were made, and unfortunately, this is where things get... bad? Complicated. I picked up 2x NVIDIA Tesla P4s and some extra RAM for my main hypervisor server, which runs 98% of the compute-based stuff I do on Proxmox. From the massive amount of reading I did, this sounded like it'd be a piece of cake! I even found a few guides to follow, none of which have produced anything remotely functional. I'll bullet-point the flustered cluster of what I'm attempting, hopefully to clarify my rambling a bit:

  • Blue Iris: i7-8700 desktop, 16gb ram, Server 2k19 running bare metal.
  • Hypervisor: 2x e5-2690v3s, 256gb ram, Proxmox 6.4 (don't judge me!) running bare metal.
    • GPU: 2x NVidia Tesla P4 headless datacenter cards
    • NVIDIA-SMI 460.106.00 Driver Version: 460.106.00 CUDA Version: 11.2
  • Proxmox LXC Container: Ubuntu 20.04LTS container, unprivileged = no, nesting = 1
    • <ctid>.conf contains:
      • lxc.cgroup.devices.allow: c 195:* rwm
        lxc.cgroup.devices.allow: c 238:* rwm
        lxc.cgroup.devices.allow: c 241:* rwm
        lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
        lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
        lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
        lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
        lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
        lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-caps/nvidia-cap1 none bind,optional,create=file
        lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-caps/nvidia-cap2 none bind,optional,create=file
        lxc.apparmor.profile: unconfined
        lxc.cgroup.devices.allow: a
        lxc.cap.drop:
        lxc.mount.auto:
    • NVIDIA-SMI 460.106.00 Driver Version: 460.106.00 CUDA Version: 11.2 (no kernel driver install)
    • Docker version 24.0.6, build ed223bc (installed using instructions found here)
      • Running docker run -it --rm --gpus all ubuntu nvidia-smi returns output identical to what I see on my hypervisor host and in the Docker-host LXC container
    • Running nvidia-container-cli info returns the following:
      nvidia-container-cli info
      NVRM version: 460.106.00
      CUDA version: 11.2

      Device Index: 0
      Device Minor: 0
      Model: Tesla P4
      Brand: Tesla
      GPU UUID: GPU-83729f44-3fb8-b4ed-2efb-656e152d3d12
      Bus Location: 00000000:82:00.0
      Architecture: 6.1

      Device Index: 1
      Device Minor: 1
      Model: Tesla P4
      Brand: Tesla
      GPU UUID: GPU-eb6187de-dff9-83e5-ce33-140d9e466b12
      Bus Location: 00000000:83:00.0
      Architecture: 6.1
  • Attempting to run docker run --name CodeProject.AI -d -p 32168:32168 --gpus all codeproject/ai-server:gpu returns:
      • Error response from daemon: Cannot restart container CP.AI: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
        nvidia-container-cli: requirement error: unsatisfied condition: cuda>=11.7, please update your driver to a newer version, or use an earlier cuda container: unknown
        (a possible way around this is sketched just after this list)
  • Purpose: From the documentation I've read, you can pass a single GPU through to multiple LXC containers for distributed GPU compute loads. Ideally, I'd like Plex, CP.AI, and a few other things to be able to share a pair of GPUs, though right now I'd be thrilled beyond belief if I could get CP.AI to even START on this damn thing!
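
A note on that error: the :gpu image declares a requirement of CUDA 11.7 or newer, while the 460.106 driver only exposes CUDA 11.2, so nvidia-container-cli refuses to launch it, exactly as the message says. The error itself names the two clean fixes: upgrade the host driver to a series that reports CUDA >= 11.7 (the 515 series or newer), or pull an older image tag built against CUDA 11.x. A third, more brute-force sketch, assuming the NVIDIA container runtime honors NVIDIA_DISABLE_REQUIRE (the container may still fail later if it genuinely needs newer CUDA kernels):

  # skip the nvidia-container-cli cuda>=11.7 requirement check
  docker run --name CodeProject.AI -d -p 32168:32168 --gpus all \
    -e NVIDIA_DISABLE_REQUIRE=1 codeproject/ai-server:gpu

Separately, the two lxc.mount.entry lines that bind /dev/nvidia-uvm-tools into dev/nvidia-caps/nvidia-cap1 and nvidia-cap2 look like a copy/paste slip; the guides this config appears to come from bind /dev/nvidia-caps/nvidia-cap1 and /dev/nvidia-caps/nvidia-cap2 instead.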
What in the hell am I missing? At this point I've got no less than fifteen hours invested in this project and have only moderately progressed toward my goal of making something even functional, much less achieving slightly improved identification times. I've uninstalled and reinstalled the driver/CUDA on the bare-metal hypervisor no fewer than ten times, with countless reboots, and deleted and recreated LXC containers probably two dozen times... Plex Media Server runs in a dedicated 20.04 LTS LXC container and has no problems at all utilizing the GPU for transcoding. I'd just LOVE to take one of the damn P4s out of the server and install it in the Windows PC running BI, but these cards won't physically fit in the chassis of my BI PC, nor do they have any active cooling, as they're intended for datacenter chassis machines like the one I've got them installed in.


Any suggestion would be endlessly appreciated with this!!
CP/AI has been solid here for some time.
Apologies, this is apples and oranges, but maybe it's of some use.
1. In my SOHO, I am running BI 5.8.0.0 in a W10 virtual machine; this is a test lab with 3 cameras. The host is ESXi 7.03.
2. I am running CPAI 2.2.4 in a Debian 11 VM on the same ESXi system, with an ancient NVIDIA K620 via passthrough. Make sure this VM has at least 2 cores; in my early experience, AI doesn't run properly with 1 core.
3. Installed cuDNN via tar and compile (not sure this is necessary).
4. Installed the CUDA toolkit (also not sure this is necessary).
5. Installed nvidia-docker2.
6. I am using NVIDIA Docker (sudo nvidia-docker run --gpus all -p 32168:32168 codeproject/ai-server:gpu-2.2.4).
7. Made sure I could run some stuff through CP/AI in CP Explorer on the Linux machine before even trying BI, and made sure it showed GPU on object detection (mine shows "YOLOv5 6.2 started GPU(CUDA)"). A quick shell check for this is sketched after this list.
8. My detection times in BI are nothing to write home about, but they are under 100 msec.
Any time I change something in either system, I reboot both VMs.
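
Step 7 can also be sanity-checked from the Linux shell before involving BI at all. A minimal sketch, assuming the server is listening on localhost:32168 and a test.jpg sits in the current directory (/v1/vision/detection is the DeepStack-style endpoint CPAI exposes):

  # POST one image to the object detection endpoint and print the JSON reply
  curl -s -F "image=@test.jpg" http://localhost:32168/v1/vision/detection

A healthy reply contains "success": true and a predictions array with labels and confidences; if the GPU is actually being used, the server should show the same "GPU(CUDA)" line mentioned in step 7.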

Our production site has 13 cameras; only 3 are using AI, but the results are similar to the above. It also runs a Linux system with the Docker image. We will add more cameras to AI when we upgrade the video card.
Edit: I have no hair to offer. :)
 
One of the main reasons why I am still on DeepStack. I know many run CodeProject just fine, but we never saw as many issues with DeepStack as we do with CodeProject.
This is only because CPAI is pushing lots of updates with changes. DeepStack had two versions after it was integrated with BI; then they did nothing. CPAI 2.0.8 works perfectly across many machines I have. There are also many more users of CPAI, and it offers a wide variety of options not available in DS.
 
What GPU is recommended at this time? I'm looking at an NVIDIA 1660 Ti, but I'm really open to anything. I will have around 10 cameras, including 2 HFW5241E-Z12E for LPR and a mix of 5442 and 4K color cams.
 
Thanks, it's the same on my PC. The .NET version with DirectML runs 1.5x faster than 6.2 with CUDA on the NVIDIA GPU. That's odd.
The accuracy of the inferences is similar. They are obviously different, since the order of low-level operations etc. would have to match exactly to produce identical results, which I highly doubt is possible when one is DirectML-based and the other CUDA-based.

However, mine is in the 30 inferences/second range while yours is above 110. Do you run the .NET version on CPU or on GPU? If on CPU, what kind of CPU? If on GPU, what graphics card do you have?
How did you set up DirectML?
 
Just wanted to share that I switched to the mini PCIe Coral yesterday, and my inference speed has gone from ~200 ms average on the USB Coral with the medium model size down to 40 ms average. Pretty impressive. I'm on the latest CodeProject and BI.

For comparison, my GPU inference speeds average 150 ms on a GTX 1660 Ti with 6 GB RAM.
 
Impressive results. Keep us posted in a week or so to let us know how the overall stability/experience is going.
 
One interesting difference is the custom model. When using the USB Coral, that field in BI was blank, but with the mini PCIe Coral it shows MobileNetSSD as a custom model.
 
Interesting... did you find any additional info on this MobileNetSSD model? What objects can it detect (person, car, animal, etc.), and is it optimized for night use?

Edit: Perhaps more importantly, are your comparison numbers generated from the same (like-for-like) models? And are the numbers from commonly used models (ipcam-combined, ipcam-general, ipcam-dark, etc.)?
 

Haven't found much, only that it's optimized for mobile devices. Maybe someone else who is familiar with it can chime in.
 
Alright, it looks like the MobileNetSSD model can handle 90 objects. Based on coral.ai, it uses the COCO dataset.



 

It detects the same objects as YOLOv5.

 
You need to check ALPR for plates, and for cameras where you do not want to use ALPR, add alpr:0 to the camera's AI custom models setting.
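
To illustrate the syntax in question, a sketch assuming the model names mentioned in this thread (BI treats the camera's custom models field as a comma-separated list, and modelname:0 excludes that model for that camera):

  camera that should read plates and faces:  ipcam-general,alpr,faces
  cameras that should skip them:             ipcam-general,alpr:0,faces:0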
Hi Mike! I'd like to know how to customize the AI camera settings to minimize wasted resources. I have a PTZ camera on which I want to recognize objects, faces, and license plates. In the general AI settings I checked "default objects, custom model, faces, ALPR for plates", while in the camera's AI settings I have the settings shown in the attached screenshot.
What if I add "alpr and faces" in custom models? Is that necessary?

For the other cameras, where I don't want license plate and face recognition, do I have to specify "faces:0,license-plate:0,alpr:0" in the custom model field, or can I simply uncheck "save unknown face to" (in order to disable face recognition) and leave ipcam-general as the custom model?

Thank you so much.
 
Hey,
I've got one of my cameras set up to only confirm people. I noticed that when I get deliveries, the camera picks up people as they leave my house. How can I get the initial trigger (vehicle) to continue analyzing and pick up the person exiting the vehicle? Would the issue be in the AI option for leading motion, and/or my break time under Triggers?

Thanks
 

[Attachments: four screenshots of the camera settings, dated 2023-10-15]
About face detection: how does it work?
If I create a profile with multiple faces on the server from the browser, the camera stops the checks and stops adding the unknown faces to my folder.
I don't see any alert made with the newly added face.
How can I add a face and still get unknown faces?
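
For context on how face registration works at the API level, a sketch assuming the DeepStack-compatible endpoints CPAI exposes and a server on localhost:32168 (the BI-side alerting behavior is a separate question):

  # register a known face under a name
  curl -s -F "image=@me.jpg" -F "userid=john" http://localhost:32168/v1/vision/face/register

  # recognize faces in a new image; faces not in the database come back labeled "unknown"
  curl -s -F "image=@visitor.jpg" http://localhost:32168/v1/vision/face/recognize

So registering a profile shouldn't stop unknown faces from being reported as "unknown"; if it does, that points at the camera-side settings rather than the face database itself.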