For those who have trouble running CodeProject.AI YOLO with GPU (CUDA), this is something you can try.
I installed my CodeProject.AI in an LXC container on Proxmox, with GPU sharing across the other LXC containers.
(# indicates the shell prompt.)
1. Enter the server directory:
# cd /usr/bin/codeproject.ai-server-2.9.5
For some reason, running setup.sh directly from here breaks everything, so don't do that.
/usr/bin/codeproject.ai-server-2.9.5# ls -al modules
drwxr-xr-x 7 root root 47 Feb 24 00:41 ALPR
drwxr-xr-x 5 root root 33 Dec 16 18:54 FaceProcessing
drwxr-xr-x 4 root root 28 Mar 10 09:29 LlamaChat
drwxr-xr-x 6 root root 22 Mar 15 21:27 MultiModeLLM
drwxr-xr-x 6 root root 42 Mar 11 01:11 ObjectDetectionYOLOv5-6.2
drwxr-xr-x 6 root root 13 Mar 11 17:51 ObjectDetectionYOLOv5Net
drwxr-xr-x 7 root root 43 Mar 16 04:45 ObjectDetectionYOLOv8
drwxr-xr-x 4 root root 19 Mar 18 16:22 Text2Image
2. Go into the module directory you want to fix; the example here is YOLOv8.
# cd modules/ObjectDetectionYOLOv8
3. Check the folder names:
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8# ls -al bin
drwxr-xr-x 3 root root 4 Dec 17 06:54 .
drwxr-xr-x 7 root root 43 Mar 16 04:45 ..
drwxr-xr-x 3 root root 4 Dec 17 06:55 debian
4. At first, my YOLOv8 could not load. Then I noticed the path CodeProject.AI was trying to access was something like modules/ObjectDetectionYOLOv8/bin/debian gnu/linux/python38/venv/,
but that folder doesn't exist. With some understanding of Linux, I made a symbolic link with the 'ln -s' command.
Please pay close attention to the folder locations; they are crucial here.
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8#cd bin
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin#ln -s debian 'debian gnu'   # note the quotes: the link name contains a space
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin#ls -al
drwxr-xr-x 3 root root 4 Dec 17 06:54 .
drwxr-xr-x 7 root root 43 Mar 16 04:45 ..
drwxr-xr-x 3 root root 4 Dec 17 06:55 debian
lrwxrwxrwx 1 root root 6 Dec 17 06:52 'debian gnu' -> debian
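Optional: if you want to double-check that the new link resolves where you expect, readlink -f will print the real path (it should point at the debian folder):
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin# readlink -f 'debian gnu'
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin/debian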
5. Now we can enter the next folder. You could go straight into debian, but for demonstration purposes I will use 'debian gnu'.
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin#cd 'debian gnu'
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin/debian gnu#ls -al
drwxr-xr-x 3 root root 4 Dec 17 06:55 .
drwxr-xr-x 3 root root 4 Apr 14 09:55 ..
drwxr-xr-x 3 root root 3 Dec 16 18:19 python38
6. Again, there is no 'linux' folder, so create a symbolic link to route it to the right place.
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin/debian gnu#ln -s . linux   # note the '.' (the link points back to the current folder)
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin/debian gnu#ls -al
drwxr-xr-x 3 root root 4 Dec 17 06:55 .
drwxr-xr-x 3 root root 4 Apr 14 09:55 ..
lrwxrwxrwx 1 root root 1 Dec 17 06:55 linux -> .
drwxr-xr-x 3 root root 3 Dec 16 18:19 python38
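If another module from the list above has the same problem (for example ObjectDetectionYOLOv5-6.2), the same two links should fix it, assuming that module uses the same bin/debian layout; adjust the module name to your case:
# cd /usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv5-6.2/bin
# ln -s debian 'debian gnu'
# cd 'debian gnu'
# ln -s . linux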
7. Now go into the virtual environment folder:
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin/debian gnu#cd linux/python38/venv/bin
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin/debian gnu/linux/python38/venv/bin# ls -al
drwxr-xr-x 2 root root 12 Feb 26 12:03 .
drwxr-xr-x 5 root root 7 Dec 16 18:19 ..
-rw-r--r-- 1 root root 2250 Dec 16 18:19 activate
-rw-r--r-- 1 root root 1302 Dec 16 18:19 activate.csh
-rw-r--r-- 1 root root 2454 Dec 16 18:19 activate.fish
-rw-r--r-- 1 root root 8834 Dec 16 18:19 Activate.ps1
-rwxr-xr-x 1 root root 310 Feb 26 12:03 pip
-rwxr-xr-x 1 root root 310 Feb 26 12:03 pip3
-rwxr-xr-x 1 root root 310 Feb 26 12:03 pip3.8
lrwxrwxrwx 1 root root 9 Dec 16 18:19 python -> python3.8
lrwxrwxrwx 1 root root 9 Dec 16 18:19 python3 -> python3.8
lrwxrwxrwx 1 root root 24 Dec 16 18:19 python3.8 -> /usr/local/bin/python3.8
8. If you can see activate and pip, you are good. Copy the path, go back to the module folder, and try installing it manually.
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8/bin/debian gnu/linux/python38/venv/bin#cd /usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8#
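Optional: before activating anything, you can confirm that the venv's Python actually runs by calling it through the new path:
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8# 'bin/debian gnu/linux/python38/venv/bin/python3' --version   # should print a Python 3.8.x version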
9. Before we run setup, MAKE SURE you enter the virtual environment:
/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8# . 'bin/debian gnu/linux/python38/venv/bin/activate'   # note the '.' (source) before the path, and the quotes, needed because of the space in the folder name
(venv) root@codeai:/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8#
If you see (venv), you have successfully entered the virtual environment.
Now you can run setup:
(venv) root@codeai:/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8# ../../setup.sh
Installing CodeProject.AI Analysis Module
======================================================================
CodeProject.AI Installer
======================================================================
21.01 GiB of 133.02 GiB available on linux (linux debian x86_64 - linux)
Installing psmisc...
Installing xz-utils...
General CodeProject.AI setup
Setting permissions on runtimes folder...done
Setting permissions on general
downloads folder...done
Setting permissions on module asset downloads folder...done
Setting permissions on modules folder...done
Setting permissions on models folder...done
Setting permissions on persisted data folder...done
GPU support
CUDA (NVIDIA) Present: Yes (CUDA 11.8, cuDNN 9.8.0)
ROCm (AMD) Present: No
MPS (Apple) Present: No
Reading module settings.....E: Could not get lock /var/lib/apt/lists/lock. It is held by process 45069 (apt-get)
E: Unable to lock directory /var/lib/apt/lists/
..done
Processing module ObjectDetectionYOLOv8 1.6.2 (Internal)
more ...
more...
more...
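Note: the "Could not get lock /var/lib/apt/lists/lock" lines in the output above just mean another apt process was running at the same time. If your setup actually fails on that error, you can check what is holding the lock (fuser comes from the psmisc package installed earlier) and rerun setup.sh once it is gone:
# fuser -v /var/lib/apt/lists/lock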
10. Most of the time, your install should be good at this point.
If it still doesn't install correctly, let me know here and I will make another post on manually installing the requirements file, something like pip install -r requirements.linux.cuda12.txt.
Your CUDA version should match the version shown in the top-right corner of the nvidia-smi output. If you can't run nvidia-smi at all, most likely your NVIDIA driver/CUDA is not installed correctly.
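Once setup has finished, you can also confirm that the module's venv really sees the GPU. This assumes the module's requirements pulled in PyTorch:
(venv) root@codeai:/usr/bin/codeproject.ai-server-2.9.5/modules/ObjectDetectionYOLOv8# python3 -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"
If it prints True and a CUDA version, the GPU is visible from inside the venv.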
And if you would like to install cuDNN, you need to install it inside the virtual environment with pip, e.g. # pip install nvidia-cudnn-cu12
The correct package depends on your CUDA version; you can find the right package name at pypi.org (PyPI, the Python Package Index).
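For example, since my machine above reports CUDA 11.8, the matching cuDNN wheel would be the cu11 variant rather than cu12. This is only a sketch; check pypi.org for the exact package that fits your setup:
# pip install nvidia-cudnn-cu11   # for CUDA 11.x
# pip install nvidia-cudnn-cu12   # for CUDA 12.x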