I am running YOLOv5 on my boxes. Do we think there will be a patch or something for this, or will we have to do it manually?

It did help me with a similar error for YOLOv8 after the update to 2.6.5; that's actually my post in there.
Ya, going to have to tap out on this; it's beyond my skill set and time. I tried a few different things that you and ChatGPT suggested, but no joy. Anyway, thanks for the replies.

No clue. I think NumPy was upgraded to 2.0 while the modules themselves were compiled under NumPy 1.x, hence the error.
I have downgraded NumPy manually to 1.23.0 and managed to revive YOLOv8.
It's actually something I asked ChatGPT how to fix, and its response helped a lot.
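For anyone wanting to try the same workaround, here is a minimal sketch. The module path is an assumption based on a default Windows install of CodeProject.AI; adjust it to your own layout:

```shell
# Run from a Command Prompt opened in Admin mode.
# Path below is an assumption for a default install; change it to match yours.
cd "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv8\bin\windows\python39\venv\Scripts"
activate
# Pin NumPy back to the 1.x series the module binaries were compiled against
pip install numpy==1.23.0
```

Any 1.x release below 2.0 may work; 1.23.0 is simply the version reported to fix it above.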
This fixed my ALPR issue that had the same symptoms. I just changed to the identical path, but with /ALPR/ in place of /ObjectDetectionYOLOv8/. Maybe this helps.
- Installing PaddlePaddle, Parallel Distributed Deep Learning...
Looking in indexes: https://mirror.baidu.com/pypi/simple
Collecting paddlepaddle-gpu==2.6.0
Downloading https://mirror.baidu.com/pypi/packages/37/9f/69921f0e4a5ef25291c77c16775457075559b0f01f7ebcdc1ea66abf2451/paddlepaddle_gpu-2.6.0-cp39-cp39-win_amd64.whl (476.3 MB)
---------------------------------------- 476.3/476.3 MB 1.1 MB/s eta 0:00:00
Collecting httpx (from paddlepaddle-gpu==2.6.0)
Downloading https://mirror.baidu.com/pypi/packages/41/7b/ddacf6dcebb42466abd03f368782142baa82e08fc0c1f8eaa05b4bae87d5/httpx-0.27.0-py3-none-any.whl (75 kB)
---------------------------------------- 75.6/75.6 kB 2.1 MB/s eta 0:00:00
Collecting numpy>=1.13 (from paddlepaddle-gpu==2.6.0)
Downloading https://mirror.baidu.com/pypi/packages/6a/1e/1d76829f03b7ac9c90e2b158f06b69cddf9a06b96667dd7e2d96acdc0593/numpy-2.0.0-cp39-cp39-win_amd64.whl (16.5 MB)
---------------------------------------- 16.5/16.5 MB 8.2 MB/s eta 0:00:00
Collecting Pillow (from paddlepaddle-gpu==2.6.0)
Downloading https://mirror.baidu.com/pypi/packages/0b/d7/3a9cfa80a3ff59fddfe3b5bd1cf5728e7ed6608678ce9f23e79f35e87805/pillow-10.3.0-cp39-cp39-win_amd64.whl (2.5 MB)
---------------------------------------- 2.5/2.5 MB 8.1 MB/s eta 0:00:00
Collecting decorator (from paddlepaddle-gpu==2.6.0)
Downloading https://mirror.baidu.com/pypi/packages/d5/50/83c593b07763e1161326b3b8c6686f0f4b0f24d5526546bee538c89837d6/decorator-5.1.1-py3-none-any.whl (9.1 kB)
Collecting astor (from paddlepaddle-gpu==2.6.0)
Downloading https://mirror.baidu.com/pypi/packages/c3/88/97eef84f48fa04fbd6750e62dcceafba6c63c81b7ac1420856c8dcc0a3f9/astor-0.8.1-py2.py3-none-any.whl (27 kB)
Collecting opt-einsum==3.3.0 (from paddlepaddle-gpu==2.6.0)
Downloading https://mirror.baidu.com/pypi/packages/bc/19/404708a7e54ad2798907210462fd950c3442ea51acc8790f3da48d2bee8b/opt_einsum-3.3.0-py3-none-any.whl (65 kB)
---------------------------------------- 65.5/65.5 kB 506.8 kB/s eta 0:00:00
WARNING: Skipping page https://mirror.baidu.com/pypi/simple/protobuf/ because the GET request got Content-Type: application/octet-stream. The only supported Content-Types are application/vnd.pypi.simple.v1+json, application/vnd.pypi.simple.v1+html, and text/html
INFO: pip is looking at multiple versions of paddlepaddle-gpu to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement protobuf<=3.20.2,>=3.1.0; platform_system == "Windows" (from paddlepaddle-gpu) (from versions: none)
ERROR: No matching distribution found for protobuf<=3.20.2,>=3.1.0; platform_system == "Windows"
(❌ failed check) done
Someone found that installing PaddlePaddle from the venv fixed the issue. It uninstalled protobuf 5.27.2, which got installed with ALPR 3.2.2, and installed protobuf 3.20.2, which is what is needed.
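In other words, the fix comes down to replacing protobuf inside the ALPR module's own venv. A sketch, with the path assumed from a default Windows install (same layout as the steps later in this thread):

```shell
# Run from a Command Prompt opened in Admin mode; adjust the path to your install.
cd "C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\Scripts"
activate
# Swap protobuf 5.27.2 for the 3.20.2 release PaddlePaddle requires
pip install protobuf==3.20.2
```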
I believe installing with CUDA 11.8 specifically does not work; I will try later with CUDA 11.7.
Hi hubbend, the link there doesn't work. Can you try to find the correct link for me, please? Thank you.
I updated the link, but here it is: Re: ALPR INSTALLATION, NO MODULE NAME "PADDLE"
Thanks for the link. ALPR with CUDA 11.8 started OK after I installed PaddlePaddle manually per the instructions.
However, ALPR was stuck at CPU only, and I also had to run "pip install PaddlePaddle-gpu" to switch to GPU (CUDA).
Have you tried Command Prompt in Admin mode?

I tried the above and got the following error:
ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'c:\\program files\\codeproject\\ai\\modules\\alpr\\bin\\windows\\python39\\venv\\Lib\\site-packages\\paddle\\base\\libpaddle.pyd'
Check the permissions.
My install of the non-GPU Paddle worked well.
Definitely.

Steps I used to install PaddlePaddle-gpu on my system (similar to the instructions in the link, except PaddlePaddle-gpu instead of PaddlePaddle):
My CodeProject and BI are on a Proxmox VM running Windows 11.
---------------------------------------------
1. Open Command Prompt in Admin mode
2. Run the below commands:
cd \Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\Scripts
activate
cd C:\Program Files\CodeProject\AI\modules\ALPR
pip install PaddlePaddle-gpu
--------------------------------------
After that I noticed that there were two folders, "paddlepaddle_gpu-2.6.1.dist-info" and "paddlepaddle-2.6.1.dist-info", under C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\Lib\site-packages
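A quick way to check which Paddle build actually loads after the swap, run from the same activated venv. (paddle.device.is_compiled_with_cuda() and paddle.utils.run_check() are part of PaddlePaddle's public API; the exact output wording varies by version.)

```shell
# Prints True if the installed Paddle wheel was built with CUDA support,
# then runs Paddle's own self-check, which reports the devices it can use.
python -c "import paddle; print(paddle.device.is_compiled_with_cuda()); paddle.utils.run_check()"
```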
Can you explain what the * symbol represents? If I just wanted animals, people, and/or LPR, how can I filter out the rest? I get bombarded with flying-insect frisbee alerts.
I just installed a Tesla P4 8GB in my Unraid server for CP.AI. Pulled this tag: codeproject/ai-server:cuda12_2-2.6.5. Which version of YOLO should I be using for that, and is ALPR able to use this GPU?
Whenever I try to enable GPU for the LPR, it keeps going back to CPU.
Here is the LPR Info where it shows GPU libraries are not installed:
Code:
Module 'License Plate Reader' 3.2.2 (ID: ALPR)
Valid: True
Module Path: <root>/modules/ALPR
Module Location: Internal
AutoStart: True
Queue: alpr_queue
Runtime: python3.8
Runtime Location: Local
FilePath: ALPR_adapter.py
Start pause: 3 sec
Parallelism: 0
LogVerbosity:
Platforms: all,!windows-arm64
GPU Libraries: not installed
GPU: use if supported
Accelerator:
Half Precision: enable
Environment Variables
AUTO_PLATE_ROTATE = True
CROPPED_PLATE_DIR = <root>/Server/wwwroot
MIN_COMPUTE_CAPABILITY = 6
MIN_CUDNN_VERSION = 7
OCR_OPTIMAL_CHARACTER_HEIGHT = 60
OCR_OPTIMAL_CHARACTER_WIDTH = 30
OCR_OPTIMIZATION = True
PLATE_CONFIDENCE = 0.7
PLATE_RESCALE_FACTOR = 2
PLATE_ROTATE_DEG = 0
REMOVE_SPACES = False
ROOT_PATH = <root>
SAVE_CROPPED_PLATE = False
Status Data: {
"inferenceDevice": "CPU",
"inferenceLibrary": "",
"canUseGPU": "false",
"successfulInferences": 19,
"failedInferences": 0,
"numInferences": 19,
"averageInferenceMs": 138.0
}
Started: 28 Jul 2024 12:52:06 PM Pacific Standard Time
LastSeen: 28 Jul 2024 12:52:37 PM Pacific Standard Time
Status: Started
Requests: 16112 (includes status calls)
Here is the system info:
Code:
Server version: 2.6.5
System: Docker (b942b79acfaf)
Operating System: Linux (Ubuntu 22.04)
CPUs: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz (Intel)
1 CPU x 4 cores. 8 logical processors (x64)
GPU (Primary): Tesla P4 (8 GiB) (NVIDIA)
Driver: 550.40.07, CUDA: 11.5.119 (up to: 12.4), Compute: 6.1, cuDNN: 8.9.6
System RAM: 16 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Docker
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.19
.NET SDK: Not found
Default Python: 3.10.12
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
System GPU info:
GPU 3D Usage 13%
GPU RAM Usage 1.3 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
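One thing worth ruling out on the Docker side: the container can only see the P4 if it was started with GPU access. A hedged example using the tag from the post above (standard docker/NVIDIA runtime flags; the container name is illustrative):

```shell
# Start the server container with all host GPUs passed through.
# Requires the NVIDIA Container Toolkit on the Docker host.
docker run -d --name codeproject-ai \
  --gpus all \
  -p 32168:32168 \
  codeproject/ai-server:cuda12_2-2.6.5
```

If the container was created without --gpus (or the equivalent Unraid "Extra Parameters" setting), modules inside it will fall back to CPU no matter what the module settings say.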