Blue Iris and CodeProject.AI ALPR

Looking for a second set of fresh eyes on my settings. I must be missing something, because I'm not getting any interaction between BI and CPAI as far as the license-plate model goes. Let me know if you see anything that's not correct. These settings are for my Mailbox cam, and license-plate isn't showing up in the log. Thanks!

View attachment 153330

View attachment 153325
View attachment 153327
View attachment 153331
View attachment 153329
Also, what GPU do you have?
 
I only have two models running in ALPR. Do I need to remove dark, combined and actionnetv2 somehow?
In the screenshot below I see three models. What I think is happening is that your GPU can't fit all of the models in GPU memory, so it has to keep reloading each model back into GPU memory, and that takes time.
Please confirm whether you see better times if you run only one model.

1675699322961.png
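One way to test the reload theory directly is to time requests against the models yourself. The sketch below assumes the default CodeProject.AI Server port 32168, the /v1/vision/custom/<model> endpoint that Blue Iris uses, and a test image saved as snapshot.jpg (all of those are placeholders for your own setup). It times repeated calls to one model, then calls that alternate between two models; if the alternating calls stay slow after the first pass, the models are most likely being swapped in and out of GPU memory.

# Rough timing sketch: repeated single-model calls vs. alternating two models.
# Assumes CPAI on localhost:32168 and a sample image saved as snapshot.jpg.
import time
import requests

CPAI = "http://localhost:32168/v1/vision/custom/{}"

def time_call(model, image_path="snapshot.jpg"):
    with open(image_path, "rb") as f:
        start = time.perf_counter()
        r = requests.post(CPAI.format(model), files={"image": f}, data={"min_confidence": 0.4})
    elapsed = (time.perf_counter() - start) * 1000
    labels = [p["label"] for p in r.json().get("predictions", [])]
    print(f"{model:15s} {elapsed:6.0f} ms  {labels}")

# Same model five times: after the first (cold) call these should be quick.
for _ in range(5):
    time_call("license-plate")

# Alternating models: if every call stays slow, each request is likely
# reloading its model into GPU memory.
for _ in range(5):
    time_call("license-plate")
    time_call("ipcam-combined")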
 

View attachment 153337
Would this explain why I'm only seeing one model referenced in the log for that camera? For all of my other cameras, I use ipcam-general and ipcam-animal together without issue and with decent times of around 300 ms. For my mailbox camera, I will limit it to license-plate only and see how that works.
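If it helps to double-check which custom models the server actually has installed, here is a minimal sketch. It assumes the default CodeProject.AI Server port 32168 and the custom/list endpoint that Blue Iris polls for its model list; both are assumptions about your setup, and the reply's field names vary by CPAI version, so it just dumps the whole JSON.

# List the custom models the server reports (default port 32168 assumed).
import requests

resp = requests.get("http://localhost:32168/v1/vision/custom/list", timeout=10)
print(resp.json())  # field names differ between CPAI versions, so print everything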
 
How are your ALPR times if you run the test below?

1675700307727.png
 
Hmmm....

11:48:26:Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command
11:48:27:Object Detection (YOLOv5 6.2): Detecting using license-plate
11:48:27:Response received (...b9ad6c)
11:48:27:Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (...b9ad6c) took 78ms
11:48:27:ALPR_adapter.py: [2023/02/06 11:48:27] ppocr WARNING: Since the angle classifier is not initialized, the angle classifier will not be uesd during the forward process
11:48:27:ALPR_adapter.py: [2023/02/06 11:48:27] ppocr DEBUG: dt_boxes num : 1, elapse : 0.025984525680541992
11:48:27:License Plate Reader: [Exception] : Traceback (most recent call last):

1675701561543.png
 
Changed License Plate to CPU and it works. Must be the GPU can't handle it.

View attachment 153348
View attachment 153347
It does look to be a GPU memory issue. The OCR part of the ALPR module loads two models into GPU memory: one for finding text in the image and one for reading the text. The developer and I are going to work on a lite version of the ALPR module that should help with memory issues; I have no ETA on this.
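For anyone who wants to see the headroom for themselves, here is a minimal sketch that watches dedicated VRAM while the cameras trigger. It assumes an Nvidia card and the nvidia-ml-py (pynvml) package; if used memory sits near the total whenever ALPR fires, the OCR models plus the detection models simply don't fit.

# Watch dedicated VRAM usage; assumes an Nvidia GPU and "pip install nvidia-ml-py".
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"VRAM used: {mem.used / 1024**2:6.0f} MiB of {mem.total / 1024**2:.0f} MiB")
        time.sleep(2)  # sample every couple of seconds while triggering cameras
finally:
    pynvml.nvmlShutdown()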
 
The low memory idea has come up before when others reported issues using the same card I am using. Here is another data point:

Untitled2.jpg

This is with the GPU...
Untitled.jpg

I'm not saying a lighter version won't be of some use, but perhaps it won't fix the issues these folks are having with their GPU configs.
 
Can you post the same System Info as in the post below?
 
I am running license-plate, ipcam-combined, package, and delivery across various cameras... so four models (in High mode). As sort of a stress test along these lines, I can even load up default object detection on top of those four, and it still works consistently (just slower make times, as expected with the actionnet model). I suspect it may be a driver mismatch. I've seen some folks with issues who were running Docker images with different driver versions (vs. those recommended on the CPAI install page).
 
I notice you have 32 GB of system memory, and normally Windows allocates 50% of it as shared GPU memory. I am curious what @105437 has.

1675704723889.png
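If anyone wants to read those numbers without opening Task Manager, a rough sketch is below. It assumes Windows 10/11 and its "GPU Adapter Memory" performance counters (the counter names are my assumption; Task Manager's GPU tab shows the same data), called from Python via PowerShell.

# Print dedicated vs. shared GPU memory from Windows performance counters.
# Counter names are an assumption; Task Manager's GPU tab shows the same values.
import subprocess

PS = (
    "(Get-Counter '\\GPU Adapter Memory(*)\\Dedicated Usage','"
    "\\GPU Adapter Memory(*)\\Shared Usage').CounterSamples | "
    "ForEach-Object { '{0} = {1:N0} bytes' -f $_.Path, $_.CookedValue }"
)

out = subprocess.run(["powershell", "-NoProfile", "-Command", PS],
                     capture_output=True, text=True)
print(out.stdout)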
 
Good observation, and that could be a factor. I was running just 8 GB before. AI worked consistently; however, there were times when multiple simultaneous triggers bogged things down, and then BI would occasionally miss a confirm. Since going to 32 GB I haven't had slowdowns, with very little variance in make times. That said, I haven't seen Windows make it past 16 GB of RAM usage, so I think 16 GB is enough for four models plus LPR at High anyway.
 
How much shared GPU memory is being used?

1675707087443.png
 
15.9 GB... and the same numbers for the UHD GPU:

Untitled3.jpg
On a side note, I'm using H.265, and neither GPU shows any video decode activity (all cams do direct-to-disc BVR + continuous recording). The Intel GPU stays at 4-5% all the time, which I presume is for BI motion detection. Also, that somewhat high CPU usage is mostly from Remote Desktop... ~7% due to Task Manager, and another ~5% for Desktop Window Manager. BI itself averages about 5% CPU when I'm not logged in and messing around in Windows.
 
@MikeLud1 Here's my CPAI System Info.
1675709369524.png
 
One thing I see is that you are using a newer Nvidia driver than @truglo. When CUDA 11.7 was released, the matching Nvidia GPU driver was 516.94; maybe try downgrading to that version. The link is below.
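In the meantime, a quick sketch for checking what driver the box is actually running (it assumes nvidia-smi is on the PATH; the 516.94 pairing with CUDA 11.7 is taken from the post above):

# Print the installed Nvidia driver version and GPU name; compare the driver
# against the 516.94 release that shipped alongside CUDA 11.7 (per the post above).
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version,name", "--format=csv,noheader"],
    capture_output=True, text=True,
)
print(out.stdout.strip())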

 
I just noticed some interesting behavior with LPR... you don't even need to use the license-plate model for it to work!

I noticed my LPR cam having some relatively bad make times, and yesterday I learned about using alpr:0 on my non-LPR cams. I also noticed that the way BI logs plates appears distinct from the way it logs makes from license-plate. AI analysis showed the ipcam-combined, license-plate, and Plates models being used, and plate numbers were shown attached to all three models. That gave me insight into the inner workings of BI's AI and made me think to run the following experiment on my LPR cam.

My LPR cam normally uses the ipcam-combined,license-plate models, with "person,bicycle,car,truck,bus,van,dog,DayPlate,NightPlate" to confirm and "car,truck,van,DayPlate,NightPlate" marked as vehicles. I changed this to just ipcam-combined, with "person,car,truck,bus,van,dog" to confirm and "car,truck,van" marked as vehicles. The next thing that drove by got properly marked on the BI alert and in the BI logs! That the plate was read is business as usual, but that it happened in less than half the time, because license-plate didn't have to run, was a fortunate discovery.

Untitled4.jpg

Now, the problem I can see with this as-is: at night I have noticed that ipcam-combined is not as consistent at marking vehicles as license-plate is at marking plates. I already have a day/night schedule change with different trigger zones... night is currently set up with just license-plate, with "DayPlate,NightPlate" for both confirm and mark-as-vehicle. I am curious which one will turn plates around faster... just license-plate or just ipcam-combined (I'm guessing license-plate will be faster, but hoping combined is a close second, since it's more useful during the daytime). I am interested in running an apples-to-apples experiment and comparing data from both day and night for each model. At any rate, this discovery will certainly help my LPR during the daytime... I'm also using it to log foot traffic, so it needed the help.
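For that apples-to-apples run, something like this rough sketch could collect the numbers. It assumes the default CodeProject.AI port 32168, the /v1/vision/custom/<model> endpoint that BI itself calls, and a handful of saved snapshots in day/ and night/ folders; the folder names and models are placeholders for whatever you actually test.

# Compare license-plate vs. ipcam-combined on the same saved day/night snapshots.
# Assumes CPAI on localhost:32168 and test images in ./day and ./night.
import glob
import time
import requests

URL = "http://localhost:32168/v1/vision/custom/{}"

def run(model, image_path):
    with open(image_path, "rb") as f:
        start = time.perf_counter()
        preds = requests.post(URL.format(model), files={"image": f}).json().get("predictions", [])
    ms = (time.perf_counter() - start) * 1000
    return ms, [p["label"] for p in preds]

for folder in ("day", "night"):
    for img in glob.glob(f"{folder}/*.jpg"):
        for model in ("license-plate", "ipcam-combined"):
            ms, labels = run(model, img)
            print(f"{folder:5s} {model:15s} {ms:6.0f} ms  {labels}  ({img})")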

[edit: Note the make times are still not that great for a P400 GPU... the reason is self-inflicted... a 20' strip of the road shown can trigger three models at one time. So when a car drives by at ~25 mph as expected, and there happen to be reflections in the windows of the cars parked in my driveways (as in this case), I can get up to three models plus plates being run simultaneously (LPR + combined + plates, driveway + combined, and driveway clone + delivery). Dealing with this was my main reason for getting the GPU to begin with... but I probably should do some more contrast tuning on the driveway cam.]
 