CodeProject.AI Version 2.9.0

Thanks for the reply.

I'm running CPAI in a Docker container on Unraid. I tried a complete reinstall with the 2.9.x versions, but this did not help.

I also tried modifying modulesettings.json to add the 2.9.x version I was installing, but this didn't help either.

I'm currently wondering what the state of Google Coral support is; there does not seem to be a lot of activity there.
Not much support, unfortunately. I have a working branch for some changes to support more models on more TPUs and make it faster here:

But the GitHub pull request has been sitting in limbo since June and has accumulated alpha versions of some new features:
  • OpenCV image scaling
  • Numba JIT compilation
  • Hosting models on Hugging Face
  • Testing MD5 file hashes for models
  • YOLOv9
  • Overall speed improvements

I’d be happy to approve any pull requests if anyone wants to get things working the rest of the way in CPAI. I’ve been busy with other projects and haven’t had time to put into this. It mainly just needs testing and getting things figured out with the HF model downloads.

You can see the framerate I’ve gotten with various files and TPU counts in the options.py file in that branch.

Edit: Google hasn't been doing well at supporting the Coral TPU.
If the problem is that the Coral driver is too old and incompatible with current libraries, there are updated repos to build from. feranick has done a good job of maintaining them:
GitHub - feranick/libcoral: C++ API for ML inferencing and transfer-learning on Coral devices
GitHub - feranick/libedgetpu: Source code for the userspace level runtime driver for Coral.ai devices.
GitHub - feranick/pycoral: Python API for ML inferencing and transfer-learning on Coral devices

I don't know if it will make things more stable and less of a pain, but it's unlikely to be worse...
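If you rebuild from those repos and want to confirm that the runtime and driver can actually see your TPU(s), here is a minimal sanity check using the standard pycoral API. This is a sketch assuming pycoral and the Edge TPU runtime are installed in the active Python environment:

Code:
# Sanity check: can the Edge TPU runtime enumerate any Coral devices?
# Assumes pycoral and libedgetpu are installed in this environment.
from pycoral.utils.edgetpu import list_edge_tpus

tpus = list_edge_tpus()
if not tpus:
    print("No Edge TPUs found - check the driver/runtime install.")
else:
    for i, tpu in enumerate(tpus):
        # Each entry is a dict with 'type' (usb/pci) and 'path'.
        print(f"TPU {i}: type={tpu['type']}, path={tpu['path']}")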
 
Is there a ready-to-run version? I want to try it :)
 
You would need to check out the code from my fork ('git clone git@github.com:seth-planet/CodeProject.AI-ObjectDetectionCoral.git' and 'git checkout fix_single_fallback') and then make a few changes in the code to get it running. I basically see three things to work on, plus some bonus work:
  • File downloading from HF. I expect we can download each model as needed from HF and save a pile of bandwidth fees; they host models for free. We need to figure out that workflow within the code, since the latest options.py file has both the file name and the MD5 hash, so we need to get that up and running correctly (see the sketch after this list).
  • Validate the install process for new components, or create a fallback for things like OpenCV and Numba. Hopefully this is as simple as adding them to requirements.txt.
  • Send me the exceptions you are seeing thrown from flaky TPU(s), and where they are thrown from, so I can work on catching and handling them appropriately. I think we should be able to fall back to the CPU elegantly with the existing code, but I don't really know where to start and haven't tested it.
  • Bonus work: are we using the right version of the Coral driver? Should we compile a new one? Are we compatible with the existing one?
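For the HF download piece, the rough flow I have in mind is: pull a model file on demand with huggingface_hub and compare its MD5 against the hash recorded in options.py. Below is a minimal sketch; the repo id, filename, and hash are hypothetical placeholders, and the real values would come from options.py.

Code:
# Sketch of an on-demand model download with MD5 verification.
# The repo_id / filename / expected_md5 values are hypothetical
# placeholders; in practice they would come from options.py.
import hashlib
from huggingface_hub import hf_hub_download

def fetch_model(repo_id: str, filename: str, expected_md5: str) -> str:
    # Downloads into the local HF cache, or reuses a cached copy.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
    if md5.hexdigest() != expected_md5:
        raise ValueError(f"MD5 mismatch for {filename}: got {md5.hexdigest()}")
    return path

# Example usage (hypothetical repo and hash):
# model_path = fetch_model("someuser/coral-models", "yolov8s_edgetpu.tflite", "0123...abcd")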

Unfortunately, I don't think I can give a good guide on getting this up and running on your computer. You may be able to follow a similar guide from a different module somewhere. Also, as I mentioned, it will require some reworking of the code. I'm happy to help where I can, but you'll be thrown into the deep end for a lot of the debugging. I'm happy to accept a pull request, or even a 'git diff' of something simple. To get it up and running, you'd basically only need to figure out how to get the models downloaded and saved as needed from HF.
 
Hello, I have updated to 2.9.5 and the license plate reader is no longer reading plates, just giving the occasional DayPlate. I kept my settings to mark plates as: * and objects:0,alpr
 
@MikeLud1 I have a CPAI 2.9.5 / Blue Iris with Intel GPU issue. How do I contact the developers to find out how to fix it?
Dorsey, you can get BI support through their webpage, which is linked from within the BI settings.

Otherwise, please email support@blueirissoftware.com for support or to share your comments, suggestions, or questions.
For additional support options, please use the ? Help icon in the software.
 
I already sent an email to Ken and he pointed me to the developers of CPAI. I would have expected Ken to take a whack at figuring out what the issue was, but he did not.
 
OK, so you want CPAI support.
In that case, Mike Lud shared a link for that one page back in this very thread. :)
 
For those who have trouble running CodeProject.AI YOLO with a GPU (CUDA), this is something you can try.
I installed CodeProject.AI in an LXC container on Proxmox, with 4 GPUs shared among the LXC containers.

(# denotes the shell prompt)
1. Enter the server directory:
# cd /usr/bin/codeproject.ai-server-2.9.5
For some reason, running setup.sh directly from here breaks everything, so later we will run it from inside a module directory instead.

/usr/bin/codeproject.ai-server-2.9.5# ls -al modules
drwxr-xr-x 7 root root 47 Feb 24 00:41 ALPR
drwxr-xr-x 5 root root 33 Dec 16 18:54 FaceProcessing
drwxr-xr-x 4 root root 28 Mar 10 09:29 LlamaChat
drwxr-xr-x 6 root root 22 Mar 15 21:27 MultiModeLLM
drwxr-xr-x 6 root root 42 Mar 11 01:11 ObjectDetectionYOLOv5-6.2
drwxr-xr-x 6 root root 13 Mar 11 17:51 ObjectDetectionYOLOv5Net
drwxr-xr-x 7 root root 43 Mar 16 04:45 ObjectDetectionYOLOv8
drwxr-xr-x 4 root root 19 Mar 18 16:22 Text2Image

2. Go into the module directory you want to fix; the example here is YOLOv8.
# cd modules/ObjectDetectionYOLOv8

3. Check the folder names:
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8# ls -al bin
drwxr-xr-x 3 root root 4 Dec 17 06:54 .
drwxr-xr-x 7 root root 43 Mar 16 04:45 ..
drwxr-xr-x 3 root root 4 Dec 17 06:55 debian

4. At first, my YOLOv8 module could not load. I noticed the path CodeProject.AI was trying to access was something like modules/ObjectDetectionYOLOv8/bin/debian gnu/linux/python38/venv/,
but that folder doesn't exist. With some understanding of Linux, I bridged the gap with the 'ln -s' symbolic link command.
Pay close attention to the folder locations; they are crucial here.

/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8# cd bin
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin# ln -s debian 'debian gnu'  # note the quotes: the link name contains a space
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin#ls -al
drwxr-xr-x 3 root root 4 Dec 17 06:54 .
drwxr-xr-x 7 root root 43 Mar 16 04:45 ..
drwxr-xr-x 3 root root 4 Dec 17 06:55 debian
lrwxrwxrwx 1 root root 6 Dec 17 06:52 'debian gnu' -> debian

5. Now we can enter the next folder. You could go straight into debian, but for the demonstration here I will use 'debian gnu'.
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin# cd 'debian gnu'
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin/debian gnu# ls -al
drwxr-xr-x 3 root root 4 Dec 17 06:55 .
drwxr-xr-x 3 root root 4 Apr 14 09:55 ..
drwxr-xr-x 3 root root 3 Dec 16 18:19 python38

6. Again, there is no 'linux' folder, so create a symbolic link to route it to the right place:
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin/debian gnu# ln -s . linux  # note the . target: the link points back to the current directory
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin/debian gnu# ls -al
drwxr-xr-x 3 root root 4 Dec 17 06:55 .
drwxr-xr-x 3 root root 4 Apr 14 09:55 ..
lrwxrwxrwx 1 root root 1 Dec 17 06:55 linux -> .
drwxr-xr-x 3 root root 3 Dec 16 18:19 python38

7. Now go into the virtual environment's bin folder:
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin/debian gnu# cd linux/python38/venv/bin
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin/debian gnu/linux/python38/venv/bin# ls -al
drwxr-xr-x 2 root root 12 Feb 26 12:03 .
drwxr-xr-x 5 root root 7 Dec 16 18:19 ..
-rw-r--r-- 1 root root 2250 Dec 16 18:19 activate
-rw-r--r-- 1 root root 1302 Dec 16 18:19 activate.csh
-rw-r--r-- 1 root root 2454 Dec 16 18:19 activate.fish
-rw-r--r-- 1 root root 8834 Dec 16 18:19 Activate.ps1
-rwxr-xr-x 1 root root 310 Feb 26 12:03 pip
-rwxr-xr-x 1 root root 310 Feb 26 12:03 pip3
-rwxr-xr-x 1 root root 310 Feb 26 12:03 pip3.8
lrwxrwxrwx 1 root root 9 Dec 16 18:19 python -> python3.8
lrwxrwxrwx 1 root root 9 Dec 16 18:19 python3 -> python3.8
lrwxrwxrwx 1 root root 24 Dec 16 18:19 python3.8 -> /usr/local/bin/python3.8

8. If you can see activate and pip, you are good. Copy the path, return to the module folder, and try to run the install manually:
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin/debian gnu/linux/python38/venv/bin# cd /usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8#


9. Before we run setup, MAKE SURE you enter the virtual environment:
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8# . 'bin/debian gnu/linux/python38/venv/bin/activate'  # note the leading . (source) and the quotes, needed because of the spaces in the path
(venv) root@codeai:/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8#

If you see (venv), you have successfully entered the virtual environment.
Now you can run setup:
(venv) root@codeai:/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8# ../../setup.sh

Installing CodeProject.AI Analysis Module

======================================================================

CodeProject.AI Installer

======================================================================

21.01 GiB of 133.02 GiB available on linux (linux debian x86_64 - linux)
Installing psmisc...
Installing xz-utils...

General CodeProject.AI setup

Setting permissions on runtimes folder...done
Setting permissions on general downloads folder...done
Setting permissions on module asset downloads folder...done
Setting permissions on modules folder...done
Setting permissions on models folder...done
Setting permissions on persisted data folder...done

GPU support

CUDA (NVIDIA) Present: Yes (CUDA 11.8, cuDNN 9.8.0)
ROCm (AMD) Present: No
MPS (Apple) Present: No

Reading module settings.....E: Could not get lock /var/lib/apt/lists/lock. It is held by process 45069 (apt-get)
E: Unable to lock directory /var/lib/apt/lists/
..done
Processing module ObjectDetectionYOLOv8 1.6.2 (Internal)
... (more output follows) ...



10. Most of the time, your install should now be good.
If it still doesn't install correctly, let me know here and I will make another post covering a manual install from the requirements file,
something like: pip install -r requirements.linux.cuda12.txt

Your CUDA version should match the version shown in the top-right corner of the nvidia-smi output. If you can't run nvidia-smi, CUDA is most likely not installed correctly.
If you would like to install cuDNN, you need to install it inside the virtual environment with pip (# pip install nvidia-cudnn-cu12);
the correct package depends on your CUDA version. You can find the right package name on PyPI · The Python Package Index (pypi.org).
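
As a quick way to confirm the venv actually sees CUDA and cuDNN, you can run something like the following inside the activated environment. This is a minimal sketch, assuming the module's venv has a CUDA-enabled PyTorch build installed:

Code:
# Quick CUDA/cuDNN sanity check from inside the module's venv.
# Assumes a CUDA-enabled PyTorch build is installed there.
import torch

print("CUDA available:", torch.cuda.is_available())
print("CUDA version PyTorch was built with:", torch.version.cuda)
print("cuDNN version:", torch.backends.cudnn.version())
for i in range(torch.cuda.device_count()):
    print(f"GPU {i}: {torch.cuda.get_device_name(i)}")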
 
This is my nvidia-smi output:

Code:
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.07             Driver Version: 535.161.07   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Quadro K620                    Off | 00000000:04:00.0 Off |                  N/A |
| 34%   46C    P0               2W /  30W |    781MiB /  2048MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA CMP 50HX                Off | 00000000:0A:00.0 Off |                  N/A |
| 47%   50C    P0              84W / 225W |   3455MiB / 10240MiB |     10%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   2  Quadro K620                    Off | 00000000:84:00.0 Off |                  N/A |
| 34%   43C    P0               2W /  30W |   1533MiB /  2048MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   3  Quadro M2000                   Off | 00000000:8A:00.0 Off |                  N/A |
| 61%   49C    P0              22W /  75W |   3871MiB /  4096MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A     76392      C   llama-cpp-fallback                          778MiB |
|    1   N/A  N/A     31998      C   ...gnu/linux/python38/venv/bin/python3      594MiB |
|    1   N/A  N/A     32087      C   ...gnu/linux/python38/venv/bin/python3      852MiB |
|    1   N/A  N/A     76392      C   llama-cpp-fallback                         2006MiB |
|    2   N/A  N/A     76392      C   llama-cpp-fallback                         1528MiB |
|    3   N/A  N/A     76392      C   llama-cpp-fallback                         1203MiB |
|    3   N/A  N/A    165016      C   /home/nossd/linux/client                   2662MiB |
+---------------------------------------------------------------------------------------+
 
Why would you have both YOLOv5 6.2 and YOLOv8 running at the same time? I thought running only one was preferred, but I could be mistaken!
 
Thanks for pointing that out. I was trying to offload to two running instances to get faster inference speed, but after I shut one off it seems to be fine.
It was most likely because I was running a larger model size (it is now size small).