I've added a 3060 12GB to my BI PC, installed CUDA 12.6.2, and ran the install script for cuDNN. Immediately I could see that the existing YOLOv5 .NET module (using DirectML) was indeed using the 3060 GPU instead of the integrated one. Similarly, if I switch to YOLOv8 it also starts and uses the GPU via CUDA, but that module has limited models at present, so I wanted to try YOLOv5.
However, YOLOv5 6.2 starts but uses only the CPU, even though its info says to use CUDA. I've seen a couple of posts from last year on various forums saying it doesn't work with CUDA v12; is that still the case, or have I missed something?
Info for reference (using CPAI 2.6.5):
Code:
Module 'Object Detection (YOLOv5 6.2)' 1.9.2 (ID: ObjectDetectionYOLOv5-6.2)
Valid: True
Module Path: <root>\modules\ObjectDetectionYOLOv5-6.2
Module Location: Internal
AutoStart: True
Queue: objectdetection_queue
Runtime: python3.7
Runtime Location: Shared
FilePath: detect_adapter.py
Start pause: 1 sec
Parallelism: 0
LogVerbosity:
Platforms: all,!raspberrypi,!jetson
GPU Libraries: installed if available
GPU: use if supported
Accelerator:
Half Precision: enable
Environment Variables
APPDIR = <root>\modules\ObjectDetectionYOLOv5-6.2
CUSTOM_MODELS_DIR = <root>\modules\ObjectDetectionYOLOv5-6.2\custom-models
MODELS_DIR = <root>\modules\ObjectDetectionYOLOv5-6.2\assets
MODEL_SIZE = Medium
USE_CUDA = True
YOLOv5_AUTOINSTALL = false
YOLOv5_VERBOSE = false
Status Data: {
    "inferenceDevice": "CPU",
    "inferenceLibrary": "",
    "canUseGPU": "false",
    "successfulInferences": 13,
    "failedInferences": 0,
    "numInferences": 13,
    "averageInferenceMs": 131.3846153846154
}
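
For reference, a quick sanity check like the one below (run with the module's bundled Python; the venv path in the comment is an assumption based on a default CPAI install, so adjust to your setup) shows whether the PyTorch build inside the module's venv can see CUDA at all. A version string ending in "+cpu" would mean a CPU-only wheel was installed:

Code:
# Sanity check: can the module's PyTorch build see CUDA?
# Run with the Python from the module's shared runtime, e.g.
# (assumed default CPAI path; adjust to your install):
#   <root>\runtimes\bin\windows\python37\venv\Scripts\python.exe check_cuda.py
import torch

print("torch version: ", torch.__version__)          # a "+cpu" suffix means a CPU-only wheel
print("CUDA available:", torch.cuda.is_available())  # False matches the canUseGPU status above
if torch.cuda.is_available():
    print("built for CUDA:", torch.version.cuda)     # CUDA version the wheel was built against
    print("device:        ", torch.cuda.get_device_name(0))

(Worth noting: a python3.7 runtime caps PyTorch at 1.13.1, whose official wheels were built against CUDA 11.x rather than 12.x, so the installed CUDA 12.6.2 toolkit version may matter less than which torch wheel the module's setup script ended up installing.)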