cyberwolf_uk
I have a GT 1030 running in my box and it works fine with the GPU running YOLOv5 6.2 on 2.0.8, 2.1.6 and now 2.1.8. I must admit it had all sorts of issues with detection etc. on my test box running the same setup until I did a clean install of everything; I've mentioned it in a thread somewhere on here.
Here are the versions:
View attachment 161952
View attachment 161953
Have you tried YOLOv5 .NET?

Having huge issues myself.
Removed an old 1.5.x version of CPAI today and installed the 2.1.6 Beta from scratch, as I had an old Nvidia GPU knocking about and I want to offload the AI to that.
I cannot get CUDA working for love nor money.
19:36:14:ObjectDetectionYolo: Installing CodeProject.AI Analysis Module
19:36:14:ObjectDetectionYolo: ========================================================================
19:36:14:ObjectDetectionYolo: CodeProject.AI Installer
19:36:14:ObjectDetectionYolo: ========================================================================
19:36:14:ObjectDetectionYolo: CUDA Present...True
19:36:14:ObjectDetectionYolo: Allowing GPU Support: Yes
19:36:15:ObjectDetectionYolo: Allowing CUDA Support: Yes
19:36:15:ObjectDetectionYolo: General CodeProject.AI setup
19:36:15:ObjectDetectionYolo: Creating Directories...Done
Server version: 2.1.6-Beta
Operating System: Windows (Microsoft Windows 11 version 10.0.22621)
CPUs: Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz
1 CPU x 6 cores. 12 logical processors (x64)
GPU: NVIDIA GeForce RTX 2080 SUPER (8 GiB) (NVidia)
Driver: 531.79 CUDA: 12.1 Compute: 7.5
System RAM: 32 GiB
Target: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
.NET framework: .NET 7.0.5
System GPU info:
GPU 3D Usage 3%
GPU RAM Usage 464 MiB
Video adapter info:
NVIDIA GeForce RTX 2080 SUPER:
Driver Version 31.0.15.3179
Video Processor NVIDIA GeForce RTX 2080 SUPER
Intel(R) UHD Graphics 630:
Driver Version 31.0.101.2111
Video Processor Intel(R) UHD Graphics Family
Global Environment variables:
CPAI_APPROOTPATH = C:\Program Files\CodeProject\AI
CPAI_PORT = 32168
I notice it's also picked up the CPU's integrated display adapter (the Intel UHD Graphics 630 above), but everything seems to point at the GPU being enabled for processing, yet starting YOLO always results in it using the CPU.
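For what it's worth, a `USE_CUDA = True` setting like the one in the module log below only expresses intent: the module still falls back to CPU when its Python runtime can't actually use CUDA. A minimal sketch of that split (not the actual adapter code; the function name is made up for illustration):

```python
# Hedged sketch, NOT the actual adapter code: USE_CUDA=True only expresses
# intent; the module still has to fall back to CPU if CUDA can't be used.
import os

def pick_device(cuda_usable: bool) -> str:
    """Mimic the intent/capability split suggested by the module log."""
    want_cuda = os.environ.get("USE_CUDA", "False").lower() == "true"
    return "GPU" if want_cuda and cuda_usable else "CPU"

os.environ["USE_CUDA"] = "True"
print(pick_device(cuda_usable=False))  # "CPU" despite USE_CUDA=True
```

So the env var being True while the log still says 'CPU' points at the runtime, not the config.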
19:39:46:Module 'Object Detection (YOLOv5 6.2)' (ID: ObjectDetectionYolo)
19:39:46:Module Path: C:\Program Files\CodeProject\AI\modules\ObjectDetectionYolo
19:39:46:AutoStart: True
19:39:46:Queue: objectdetection_queue
19:39:46:Platforms: all
19:39:46:GPU: Support enabled
19:39:46:Parallelism: 0
19:39:46:Accelerator:
19:39:46:Half Precis.: enable
19:39:46:Runtime: python37
19:39:46:Runtime Loc: Shared
19:39:46:FilePath: detect_adapter.py
19:39:46:Pre installed: False
19:39:46:Start pause: 1 sec
19:39:46:LogVerbosity:
19:39:46:Valid: True
19:39:46:Environment Variables
19:39:46:APPDIR = %CURRENT_MODULE_PATH%
19:39:46:CUSTOM_MODELS_DIR = %CURRENT_MODULE_PATH%/custom-models
19:39:46:MODELS_DIR = %CURRENT_MODULE_PATH%/assets
19:39:46:MODEL_SIZE = Medium
19:39:46:USE_CUDA = True
19:46:46:detect_adapter.py: Inference processing will occur on device 'CPU'
19:46:46:detect_adapter.py: Inference processing will occur on device 'CPU'
19:46:46:detect_adapter.py: Inference processing will occur on device 'CPU'
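One thing worth checking from the module's own Python environment is whether its PyTorch build can see CUDA at all; a CPU-only wheel would explain the 'CPU' device lines above. A hedged diagnostic (run it with the shared python37 runtime CPAI installed, not the system interpreter):

```python
# Hedged diagnostic -- run with the module's own Python (the shared
# python37 runtime CPAI installs), not the system interpreter.
import importlib.util

spec = importlib.util.find_spec("torch")
if spec is None:
    print("torch is not installed in this environment")
else:
    import torch
    # A version string ending in "+cpu" means a CPU-only PyTorch wheel,
    # which would force exactly the 'CPU' device lines seen above.
    print("torch", torch.__version__, "| cuda available:", torch.cuda.is_available())
```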
View attachment 161976
Anyone got any ideas?
Thanks,
Craig
Have you tried YOLOv5 .NET?
Nope, exactly the same.

Hi David,
I haven't since that specifically says in the description of the module 'MIT Provides Object Detection using YOLOv5 ONNX models with DirectML. This module is best for those on Windows and Linux without CUDA enabled GPUs '
I'll try anything at this point now.
I rolled back CUDA 12.1 to 11.7 and ran CUDnn - still nothing.
I'll give it a go!
Thanks
So I apologize ahead of time to anyone I misled, but I was told to try .NET and use ONNX for the custom models. Also be sure to turn off if doing so. But I am not on the latest Beta.
@Craig G did you use the cuDNN install script?

Hi Cyber, I did indeed.
You deleted both CodeProject folders, in Program Files and ProgramData?

Hi Cyber, I did indeed.
I just removed CPAI, deleted folder, rebooted and re-installed.
Installed the .NET module this time but still stuck on CPU.
this is going to eat my bank holiday weekend
Although....
I removed all CUDA 12.1 components and installed 11.7 and ran the cuDNN bat file but CPAI still reports.....
GPU: NVIDIA GeForce RTX 2080 SUPER (8 GiB) (NVidia)
Driver: 531.79 CUDA: 12.1 Compute: 7.5
Didn't, but have now.

You deleted both CodeProject folders in:
Program Files
ProgramData
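If it helps anyone following along, a quick way to confirm both folders really are gone before reinstalling (paths as given above):

```python
# Quick check that both CodeProject folders are really gone before a reinstall.
from pathlib import Path

folders = (Path(r"C:\Program Files\CodeProject"), Path(r"C:\ProgramData\CodeProject"))
leftovers = [p for p in folders if p.exists()]
print("leftover folders:", leftovers or "none")
```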
Damn, Nvidia want you to sign up to their Dev network in order to download the latest cuDNN.
I've removed CUDA 11.7 and I'm trying 12.x again, as there's a cuDNN for that too, since CPAI thought I was on that version.
So going for full CPAI reinstall 'after' CUDA 12.1 and cuDNN
Make sure you run the cuDNN install script; I had to run the script again when I upgraded to 2.1.6, despite my GPU working in previous versions.
File Download - CodeProject
www.codeproject.com
Maybe a dumb suggestion, but have you tried clicking the three dots and enabling GPU?
Many times.
Why are you not installing 2.1.8?
So the latest cuDNN is just a zip file (after I signed up) with the libraries in it.
There's no BAT file to alter system variables. I suppose I could edit the scripts in the original BAT file but it's 9PM on Friday night, so.... NO
Going to install 11.7 again and cuDNN BAT before even installing CPAI (after deleting both folder locations)
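For anyone hitting the same wall: since the newer cuDNN zips ship without the install script, the key thing the old BAT did was put the cuDNN bin folder on PATH and copy the DLLs where CUDA can find them. A hedged check that the DLLs are actually findable ("cudnn64_8" matches cuDNN 8.x on Windows; the exact name is an assumption, adjust for your version):

```python
# Hedged check: is a CUDA/cuDNN bin directory on PATH, and can the cuDNN
# DLL be located? "cudnn64_8" is the cuDNN 8.x name on Windows -- an
# assumption here; adjust for the version you actually installed.
import os
import ctypes.util

cuda_path_entries = [d for d in os.environ.get("PATH", "").split(os.pathsep)
                     if "cuda" in d.lower() or "cudnn" in d.lower()]
print("CUDA-looking PATH entries:", cuda_path_entries or "none")
print("cuDNN library resolved to:", ctypes.util.find_library("cudnn64_8"))
```

If the library resolves to None after an install, the bin folder most likely never made it onto PATH.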