I was just going to send you the configuration file with AutoStart changed to "true". There is a bug where settings changed from the defaults are not being saved.
That maxes out the clock and memory speeds and adds heat and power consumption. P-states are what you want to change. Nvidia has P0-P15 states, with P0 being the fastest. I run at P5 (810,810), which is basically idle wattage, with detection times within 5 to 10 ms of the card running full throttle. Using nvidia-smi to set the core and memory speeds is more efficient in my testing. If anyone wants the exact commands I use, I'll post them.
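In the meantime, here's a rough sketch of the kind of nvidia-smi commands involved. Treat it as a sketch rather than the exact commands referenced above: the supported clock pairs vary by card and driver, setting clocks needs root (and with Docker should be done on the host), and the 810,810 figure is just the value mentioned above, so check it against the supported-clocks query first.

# keep the driver loaded so clock settings persist (needs root)
sudo nvidia-smi -pm 1

# list the memory,graphics clock pairs the card actually supports
nvidia-smi -q -d SUPPORTED_CLOCKS

# pin the application clocks to a low supported pair (format: memory,graphics in MHz)
sudo nvidia-smi -ac 810,810

# confirm the current clocks, P-state, and power draw
nvidia-smi -q -d CLOCK,PERFORMANCE,POWER

# reset the application clocks when you want full speed back
sudo nvidia-smi -rac

On newer GPUs/drivers there is also nvidia-smi -lgc <min,max> (and -rgc to reset) to lock the graphics clock range directly, which may be what the (810,810) above maps to; either way the idea is to cap the card at a low P-state instead of letting it sit at P0.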
Module 'License Plate Reader' 3.2.2 (ID: ALPR)
Valid: True
Module Path: <root>/modules/ALPR
Module Location: Internal
AutoStart: True
Queue: alpr_queue
Runtime: python3.8
Runtime Location: Local
FilePath: ALPR_adapter.py
Start pause: 3 sec
Parallelism: 0
LogVerbosity:
Platforms: all,!windows-arm64
GPU Libraries: not installed
GPU: use if supported
Accelerator:
Half Precision: enable
Environment Variables
AUTO_PLATE_ROTATE = True
CROPPED_PLATE_DIR = <root>/Server/wwwroot
MIN_COMPUTE_CAPABILITY = 6
MIN_CUDNN_VERSION = 7
OCR_OPTIMAL_CHARACTER_HEIGHT = 60
OCR_OPTIMAL_CHARACTER_WIDTH = 30
OCR_OPTIMIZATION = True
PLATE_CONFIDENCE = 0.7
PLATE_RESCALE_FACTOR = 2
PLATE_ROTATE_DEG = 0
REMOVE_SPACES = False
ROOT_PATH = <root>
SAVE_CROPPED_PLATE = False
Status Data: {
"inferenceDevice": "CPU",
"inferenceLibrary": "",
"canUseGPU": "false",
"successfulInferences": 19,
"failedInferences": 0,
"numInferences": 19,
"averageInferenceMs": 138.0
}
Started: 28 Jul 2024 12:52:06 PM Pacific Standard Time
LastSeen: 28 Jul 2024 12:52:37 PM Pacific Standard Time
Status: Started
Requests: 16112 (includes status calls)
Server version: 2.6.5
System: Docker (b942b79acfaf)
Operating System: Linux (Ubuntu 22.04)
CPUs: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz (Intel)
1 CPU x 4 cores. 8 logical processors (x64)
GPU (Primary): Tesla P4 (8 GiB) (NVIDIA)
Driver: 550.40.07, CUDA: 11.5.119 (up to: 12.4), Compute: 6.1, cuDNN: 8.9.6
System RAM: 16 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Docker
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.19
.NET SDK: Not found
Default Python: 3.10.12
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
System GPU info:
GPU 3D Usage 13%
GPU RAM Usage 1.3 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
15:00:37:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "/app/preinstalled-modules/ObjectDetectionYOLOv5-6.2/detect.py", line 141, in do_detection
det = detector(img, size=640)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 121, in _forward_once
x = m(x) # run
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 75, in forward
wh = (wh * 2) ** 2 * self.anchor_grid[i] # wh
RuntimeError: The size of tensor a (48) must match the size of tensor b (36) at non-singleton dimension 2
There is another thread for a Coral AI problem (still CPAI, but a different module), and we are having a similar issue where it keeps reverting to the CPU, so maybe the root cause is the same regardless of module. I see you are using Linux while I am using Windows, so again, maybe it's the same piece of underlying code.
I also noticed that if I reboot the PC the module does not start automatically (even though it was started when I rebooted).
Another thing I noticed is that it's not saving the Model or Model Size I chose after a reboot.
I've uninstalled, deleted the folders, and reinstalled at least a dozen times with the same outcome.
I'm going to try to revert to 2.6.2 since that was more stable for me. 2.5.1 was the most stable version for me, but unfortunately it won't install correctly since they updated the installer scripts. I only upgraded because the newer version enabled multiple TPUs.
"@AlwaysSomething I have an Optiplex SFF; I bought this card because it's half height. Good performance for CPAI, including LPR."
Thanks for the info. When I went looking for your card I found this for a small amount more, a 4060 with 8 GB of memory...
"I have an Optiplex SFF; I bought this card because it's half height."
Are you powering this with the original power supply?
"Are you powering this with the original power supply?"
@Vettester Yes, I am running this card with the original SFF power supply, which also powers two 3.5" HDDs and one SSD.
@actran @David L Thank you for the recommendations. The only problem with the cards you both mentioned is that they take up two slots, and I only have two PCIe slots. Unfortunately, I already have a second NIC in one of the slots; I should have mentioned that, sorry. I'll have to think about whether I want to remove the second NIC and switch to VLANs or something else. Mike Lud made a list a while back for GPUs and LPR. I'll have to find that and spend some time going through it.
@mwilky I tried reverting to 2.5.1 (the last version that worked well for me) but got 404 errors from the installer. I have some more installers for prior versions, but I have to see if they will work. I don't have the one for 2.2.2, so I may ask you for it.
I don't have the reverting-to-CPU problem anymore. I reinstalled the Coral driver, which seems to have fixed it (for now).
The problem I have is that I select the Efficient-Lite model, and the logs say it's using it, but it is really using the MobileNetSSD model. If I go into the CPAI Explorer to test images I can prove it's using MobileNetSSD and not Efficient-Lite as selected. I posted this months ago as a bug in 2.6.2. Prior to 2.6.2 I had 2.5.1, and the model selector worked, but it either didn't support dual (multiple) TPUs or was buggy with them (I can't remember which). That was why I upgraded to 2.6.2 when it came out; otherwise I don't upgrade just to upgrade. I can find that post, which shows the steps I used to prove it (keeping this post from getting too long).
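For anyone who wants to double-check this outside the Explorer, the same test can be scripted against the server's detection endpoint. This is just a sketch, assuming the default port 32168 from the settings above and the DeepStack-compatible /v1/vision/detection route, with test.jpg standing in for one of your own snapshots:

# send a test image to the object detection endpoint and print the JSON predictions
curl -s -X POST -F "image=@test.jpg" http://localhost:32168/v1/vision/detection

If the labels, confidences, and inference times come back identical before and after switching from MobileNetSSD to Efficient-Lite (and restarting the module), that's a good sign the selected model isn't actually being loaded.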
The biggest problem I have is that the default model, MobileNetSSD, is not good (it's just OK) at identifying/classifying objects. I honestly think this is what may be turning a lot of people away from the Coral TPU, especially if they are selecting different models and getting the same results. The inference times for MobileNetSSD are quick (30 ms), but the results are not accurate. Efficient-Lite (medium size) takes a little longer (100-120 ms with a single TPU) but was very accurate for me. For example, with MobileNetSSD, every time a car drives by it gets tagged as a person, so if I'm searching for people I get every car that drives by (which is 100 times more than the people walking by). A few more examples: it tags all vehicles as cars, so if I'm searching for Bus or Truck I don't find them, and it also misses my dog, which is close to the camera and perfectly positioned (sideways) for identification. 2.5.1 was the last version that correctly used the selected model (Efficient-Lite) instead of the default MobileNetSSD.
I looked at some of the .dat files to see if it was just trained so that when it sees a car it outputs "person", but it wasn't. For some reason the glare on the windows is what gets boxed as the person. Like I said, I spent a LOT of time testing/researching this.
I know I posted it here months ago and discussed it with Seth (a CPAI dev), but it probably got forgotten. I thought I posted it on the CPAI site as well, but I'll have to check. IMO their site is not as user-friendly as IPCamTalk, so I try here first. I don't think I can attach images there either, which deterred me, since I would have had to type a lot more info (a picture is worth a thousand words).
Thanks for hearing me vent. LOL
I have an SFF (small form factor) PC that can only take low-profile (half-height) PCIe cards. Can anyone recommend a low-power GPU that would fit and still be suitable for CPAI? Currently I just need it for Object Detection, and probably LPR in the near future.