Blue Iris and CodeProject.AI ALPR

Finally got it working in a VM on VMware ESXi, using YOLOv5 6.2 set to GPU (though it reports CPU): 145 - 215 ms. Not too bad considering the limited resources I assigned to the VM just for testing.
I got GPU passthrough working, but I don't think the card works with CPAI - it's an old AMD FirePro 4100.


The idea was to install the VMware ESXi 7 hypervisor on my main Windows server, since TrueNAS runs much better in a VM there, and then run Windows with BI on it as a VM.
Not sure yet; it's quite a tricky migration.

 
I've also had the LPR module die on me since going to 2.5. I just tried reinstalling cuDNN with the updated files and the new script, but no luck. It did produce new errors, though; not sure if these mean anything to anybody:


14:45:00:Connection id "0HMSQJ9IH6JB4", Request id "0HMSQJ9IH6JB4:00000059": An unhandled exception was thrown by the application.
14:45:00:Connection id "0HMSQJ9IH6KCF", Request id "0HMSQJ9IH6KCF:0000003C": An unhandled exception was thrown by the application.
14:45:26:Connection id "0HMSQJ9IH6KLC", Request id "0HMSQJ9IH6KLC:00000003": An unhandled exception was thrown by the application.
14:45:26:Connection id "0HMSQJ9IH6KBE", Request id "0HMSQJ9IH6KBE:00000059": An unhandled exception was thrown by the application.
14:45:29:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYolo\detect.py", line 162, in do_detection
det = detector(img, size=640)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 121, in _forward_once
x = m(x) # run
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 75, in forward
wh = (wh * 2) ** 2 * self.anchor_grid # wh
RuntimeError: The size of tensor a (36) must match the size of tensor b (48) at non-singleton dimension 2
14:45:32:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYolo\detect.py", line 162, in do_detection
det = detector(img, size=640)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 121, in _forward_once
x = m(x) # run
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 75, in forward
wh = (wh * 2) ** 2 * self.anchor_grid # wh
RuntimeError: The size of tensor a (36) must match the size of tensor b (48) at non-singleton dimension 2
14:49:48:Unknown response from server
14:49:56:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYolo\detect.py", line 162, in do_detection
det = detector(img, size=640)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 121, in _forward_once
x = m(x) # run
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 75, in forward
wh = (wh * 2) ** 2 * self.anchor_grid # wh
RuntimeError: The size of tensor a (9) must match the size of tensor b (12) at non-singleton dimension 2
14:54:52:Connection id "0HMSQJ9IH6NQI", Request id "0HMSQJ9IH6NQI:00000124": An unhandled exception was thrown by the application.
14:54:52:Connection id "0HMSQJ9IH6KSK", Request id "0HMSQJ9IH6KSK:00000E11": An unhandled exception was thrown by the application.
14:54:52:Connection id "0HMSQJ9IH6O62", Request id "0HMSQJ9IH6O62:00000397": An unhandled exception was thrown by the application.
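For what it's worth, the mismatched tensor sizes above (36 vs 48, then 9 vs 12) line up with YOLOv5 grid sizes for two different input resolutions, as if the anchor grid was cached for one image size while the actual image was scaled to another. A rough illustration (the 288/384 figures are my guess, not taken from the logs):

```python
# Rough illustration, not from the logs: YOLOv5 predicts on a grid of
# input_size / stride cells per detection layer (strides 8, 16, 32).
# An image scaled to 288 px run against an anchor grid built for 384 px
# would produce exactly the 36-vs-48 and 9-vs-12 mismatches above.
def grid_cells(input_size, strides=(8, 16, 32)):
    """Grid cells along one axis for each YOLOv5 detection stride."""
    return [input_size // stride for stride in strides]

print(grid_cells(288))  # [36, 18, 9]  -> "tensor a" sizes 36 and 9
print(grid_cells(384))  # [48, 24, 12] -> "tensor b" sizes 48 and 12
```

If that reading is right, it points at a stale cached model/anchor grid rather than a CUDA install problem.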
 
Open a command prompt, run nvcc --version, and post a screenshot of the results.
 
Did something change between 2.0.8 and the newer 2.1.X versions, e.g. 2.1.9 or 2.1.10? My Nvidia T400 no longer detects objects. I had to change the recognizer to YOLOv5 .NET for it to work, whereas previously I would just use the default object recognizer.
 
This looks good. Which GPU do you have?

It's running on an NVIDIA GeForce GT 1030. It was working fine until something updated; now the returns all look like the attached screenshot. It's got me stumped.
 

Attachments

  • Capture4.PNG (7.6 KB)
I got two readings on this alert, but only one was sent via MQTT. I received the "NOW HIRING" alert, not the plate. Any way to fix/avoid this?

Cam121.20230814_133154.2648340.3-1.jpg
 
This is my MQTT payload:
{ "plate":"&PLATE", "AlertImgPath":"&ALERT_PATH", "Alert_AI":"&MEMO", "Date":"%Y-%m-%d %H:%M:%S","Camera":"&NAME" }

The received message only has "NOW HIRING" as the plate. I am not saving messages on my MQTT listener, so there is the possibility that two were sent and the one with the correct plate arrived first and was then overwritten. But that's not what's showing in the BI log.
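One way to keep junk OCR reads like "NOW HIRING" out of the pipeline is a sanity check on the listener side before acting on the message. A minimal sketch; the "plate" key matches the &PLATE field in the payload template above, but the plate pattern is an illustrative assumption, since real plate formats vary by region:

```python
import json
import re

# Illustrative plate pattern: 2-8 uppercase letters/digits, no spaces.
# Real formats vary by region; adjust to your plates.
PLATE_RE = re.compile(r"^[A-Z0-9]{2,8}$")

def extract_plate(payload: str):
    """Return the plate from the MQTT payload, or None for junk OCR reads."""
    plate = json.loads(payload).get("plate", "").replace(" ", "").upper()
    return plate if PLATE_RE.match(plate) else None

print(extract_plate('{"plate": "ABC1234"}'))     # ABC1234
print(extract_plate('{"plate": "NOW HIRING"}'))  # None ("NOWHIRING" is 9 chars)
```

It won't stop BI from alerting on the sign, but it keeps the bad read from reaching whatever consumes the MQTT message.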
 
It looks like version 2.1.11 beta was posted to Docker Hub....

I don't see an ARM version though.

I did a clean install of 2.1.10 (deleted config and data) and am trying to install the RKNN object detector and remove the other object detectors....

ai-server | Infor
ai-server | Error Error trying to start Object Detection (YOLOv5 RKNN) (objectdetection_fd_rknn_adapter.py)
ai-server | Error An error occurred trying to start process '/app/modules/ObjectDetectionYoloRKNN/bin/linux/python39/venv/bin/python3' with working directory '/app/modules/ObjectDetectionYoloRKNN'. No such file or directory
ai-server | Error at System.Diagnostics.Process.ForkAndExecProcess(ProcessStartInfo startInfo, String resolvedFilename, String[] argv, String[] envp, String cwd, Boolean setCredentials, UInt32 userId, UInt32 groupId, UInt32[] groups, Int32& stdinFd, Int32& stdoutFd, Int32& stderrFd, Boolean usesTerminal, Boolean throwOnNoExec)
ai-server | at System.Diagnostics.Process.StartCore(ProcessStartInfo startInfo)
ai-server | at CodeProject.AI.API.Server.Frontend.ModuleProcessServices.StartProcess(ModuleConfig module)
ai-server | Error *** Please check the CodeProject.AI installation completed successfully
ai-server | Infor Module ObjectDetectionYoloRKNN started successfully.
ai-server | Infor Installer exited with code 0
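The "No such file or directory" line suggests the module's venv interpreter was never created, even though the installer reported success. A quick check (the path is copied from the log above; run it inside the container):

```python
import os

# Path copied from the error message above; adjust for your install.
VENV_PYTHON = "/app/modules/ObjectDetectionYoloRKNN/bin/linux/python39/venv/bin/python3"

def missing_interpreter(path: str) -> bool:
    """True if the interpreter the server tried to exec is absent."""
    return not os.path.exists(path)

print(missing_interpreter(VENV_PYTHON))
```

If it prints True, the module setup script likely failed silently and the venv needs to be rebuilt.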
 
This is my MQTT payload:
{ "plate":"&PLATE", "AlertImgPath":"&ALERT_PATH", "Alert_AI":"&MEMO", "Date":"%Y-%m-%d %H:%M:%S","Camera":"&NAME" }

The received message only has "NOW HIRING" as the plate. I am not saving messages on my MQTT listener, so there is the possibility that two were sent and the one with the correct plate arrived first and was then overwritten. But that's not what's showing in the BI log.

I get stuff like that occasionally, like it flagging "School bus" or something. I have to think that would need to be corrected in the AI detection... it shouldn't think that is a plate...

Another note: I have two zones, A and B, and when a car passes, the motion alert now reports A->B or B->A in my MQTT message. You can infer the car's direction in that case...
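Turning that zone-crossing text into a direction on the listener side is trivial; a sketch using the A/B zone names from this post (the "towards" labels are just placeholders for whatever your zones mean):

```python
# Map the Blue Iris zone-crossing text ("A->B" / "B->A", per this post's
# zone names) to a travel direction on the MQTT listener side.
def direction(zone_crossing: str) -> str:
    return {"A->B": "towards B", "B->A": "towards A"}.get(zone_crossing, "unknown")

print(direction("A->B"))  # towards B
```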
 
Another note: I have two zones, A and B, and when a car passes, the motion alert now reports A->B or B->A in my MQTT message. You can infer the car's direction in that case...

That is cool; I did not know about that directional feature. I have been experimenting with motion detection both on the camera and in BI and have not been happy with the results so far. I have not tried BI motion zones yet; will have to give it a go. My driveway is at a 45-degree angle, which makes it a bit challenging.
 
Mike, do you have the most up-to-date settings? And for ALPR, am I supposed to be using the main stream?
 
I've been trying to troubleshoot ALPR not returning plate numbers for a few weeks now, since I first attempted to upgrade from CPAI 2.1.9 to 2.1.11. I initially had issues installing the update, but got those sorted. Some things I've tried: deleting all CPAI folders and reinstalling CPAI 2.1.11, updating CUDA to 11.8 (I was using 11.7 successfully before), and running the latest cuDNN batch file. CPAI OCR apparently is not returning results as expected, so of course I can't expect BI to return OCR either. Any help is much appreciated. I think I got all the relevant screen grabs to convey the details of my setup, but let me know if additional info might help.

Screenshots: 1.png, 2.png, 3.png, 4.png

FWIW, AI is otherwise working great with BI. I'm getting accurate alerts for people, packages, deliveries, cars, etc.; just the ALPR OCR function seems broken on my instance. Any advice to help get it working again is very much appreciated.