How to capture multiple plate reads in quick succession?

I completely agree with Mike here. You are masking out your image by not having a full zone.

A couple of other things...

1. You are triggering A>B, so you will only get cars going left to right in your image. If you want both directions, try A-B. You might not even want to use zone crossing at all, to catch more plates. I do A-B to capture motion direction, and I have a cloned camera that just does simple motion so I don't miss any.
2. Your pre-trigger is set to one image and your post-trigger to twenty. You might want to set that to something like 10/10 instead of 1/20; you might be missing some good-quality frames. You are also sending a frame every 100 ms. That is what I do, and my camera frame rate is set to 10 fps so it matches. I'd recommend setting your camera frame rate to 10 fps as well. Motion detection seems to be more reliable at a lower frame rate.
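The arithmetic behind matching the 100 ms send interval to a 10 fps camera works out like this (a quick sketch; the numbers are the ones from this thread, nothing camera-specific is assumed):

```python
# How many camera frames elapse between each frame BI sends to AI.
# When this equals 1.0, every captured frame is a candidate for analysis.
def frames_per_request(fps: float, send_interval_ms: float) -> float:
    return fps * (send_interval_ms / 1000.0)

# Camera at 10 fps, sending every 100 ms: the rates line up exactly.
print(frames_per_request(10, 100))  # 1.0

# Camera at 30 fps with the same 100 ms interval: 3 frames arrive per
# request, so 2 of every 3 frames are never considered by AI.
print(frames_per_request(30, 100))  # 3.0
```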

I am not sure how multiple plates in one image will work with BI. I know CP.AI will capture them. As for the ALPR database, no idea there either.
 
Lots of rain to start the day and everyone was slow to get out, but I finally encountered a two-car group. After changing all the settings per everyone's recommendations (which really boosted plate capture and identification for single passing cars), my system was still not able to grab the two plates. One pic shows how the cars were spaced when they passed the camera; the other shows that the AI inspector was given the shot too late for the lead vehicle to be analyzed. Your thoughts?


(Attachments: EmpireTechALPR1.20250330_133010358.13.jpg, Screenshot 2025-03-30 133949.jpg)
 
Try this in the To Confirm field. I am not sure if it will work, but it may.

View attachment 218059
I did update "To Confirm", but as I was monitoring traffic, something happened that might offer a clue. A car passed by at 3:21:35 PM and its plate was identified, but 12 seconds later, at 3:21:47 PM, another car passed and was not picked up, analyzed, or listed in the "Alerts" column. The record symbol and red frame around the camera shot seemed to still be active from the first car. Typically BI keeps recording 6-14 seconds after the cars leave the frame. Do you think this means anything?
 
Are there any detection examples listed that could be removed to simplify AI's processing time? Is that even a thing? Just asking... it seems like a lot of extraneous things.
(Attachment: image_2025-03-30_150652569.png)
 
Are there any detection examples listed that could be removed to simplify AI's processing time? Is that even a thing? Just asking... it seems like a lot of extraneous things.
View attachment 218062

I could be wrong, but I think those are for BI to match up with the CPAI results. It should not increase CPAI processing time; it might add a few microseconds of processing on the BI side, but I don't think that would make any difference. Object:0 disables object processing.
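If that's right, the matching on the BI side would be little more than a label comparison against the camera's configured object list. A rough sketch of the idea (label names and confidences here are made up for illustration, not BI's actual internals):

```python
# Illustrative only: BI compares the labels CPAI returns for an image
# against the objects configured for the camera, keeping matches above
# a confidence floor. This is a guess at the logic, not BI's real code.
configured_objects = {"car", "truck", "motorcycle"}  # hypothetical list
confidence_floor = 0.5

cpai_results = [
    {"label": "car", "confidence": 0.88},
    {"label": "potted plant", "confidence": 0.41},  # ignored: not configured
]

matches = [d for d in cpai_results
           if d["label"] in configured_objects
           and d["confidence"] >= confidence_floor]
print(matches)  # only the car detection survives
```

A set membership test like this is effectively free, which is why trimming the list shouldn't change CPAI's detection time at all.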
 
If you cloned the camera-- and one "camera" was on the left-- and one was on the right, I wonder if you would have more success...

Basically, have a Zone A on each which covers just that side of the image.
 
FYI, ALPR Database handles multiple plates in a single image properly. It will create two different database entries using the same image. The location of the Blue Bird logo confused CPAI into thinking it was a plate.

(Attachment: Screenshot from 2025-03-30 14-09-00.png)
 
If you cloned the camera-- and one "camera" was on the left-- and one was on the right, I wonder if you would have more success...

Basically, have a Zone A on each which covers just that side of the image.
I will soon have another camera online to handle traffic in the other direction, but until then I was hoping to figure out the two- and three-cars-in-a-row situation. Any front-mounted plate captures are just a bonus for now.
 
FYI, ALPR Database handles multiple plates in a single image properly. It will create two different database entries using the same image. The location of the Blue Bird logo confused CPAI into thinking it was a plate.

View attachment 218063
Just curious, have you captured two or three vehicles' plates in a row, fairly close together? If so, can you post an image or two of that?
 
Just curious, have you captured two or three vehicles' plates in a row, fairly close together? If so, can you post an image or two of that?

Not really, unless you would consider this as 2 vehicles:

(Attachment: Screenshot from 2025-03-30 15-46-05.png)

This camera has two physical lenses, one color and the other IR, and I think there is a 50 ms difference in time between them. BI, CPAI, and ALPR DB all handle it properly. My break time is 2.0 seconds, which is fine for my situation since traffic flow is not super high, but it also means I will miss vehicles that are following one right after another. I have a second camera pointing in the opposite direction to help solve that issue.
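A minimal sketch of why a 2.0 s break time merges close passes, assuming BI simply starts a new alert only when the gap since the last trigger exceeds the break time (an assumption about the behavior, not BI's actual code):

```python
BREAK_TIME = 2.0  # seconds, as configured in the post above

def group_events(timestamps, break_time=BREAK_TIME):
    """Group trigger timestamps into alert events: a new event starts
    only when the gap since the previous trigger exceeds break_time."""
    events = []
    for t in sorted(timestamps):
        if events and t - events[-1][-1] <= break_time:
            events[-1].append(t)   # still the same alert
        else:
            events.append([t])     # gap was long enough: new alert
    return events

# Two cars 1.5 s apart merge into one event (the second plate can be
# missed); a car 12 s later correctly becomes a separate event.
print(group_events([0.0, 1.5, 12.0]))  # [[0.0, 1.5], [12.0]]
```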
 
Here are a few of my thoughts.
  1. You are using a CPU for all your detections. I'm seeing typical times in the 500ms+ range for each call, from looking at your screenshots, sometimes up towards 900ms which is close to 1 full second.
  2. You are asking for 20 frames to be tested, which is quicker than CPAI can run its detections.
  3. You are telling Blue Iris not to cancel ("DoNotCancel" in the Cancel field), so it will dutifully keep running images in that timeframe and pick out what it considers the best one.
  4. You are using the standard object model which is slower and possibly adding the CPAI load from other cameras.
  5. You are using the YOLOv5 6.2 module but the YOLOv5 .NET module is generally faster.
  6. In your call to the ALPR Database, you are still using the MEMO field. Check the new method passing in the "ai_dump" from the &JSON directly.
My suggestions: lower the number of frames you are requesting to no more than 5, and give them more time in between. Get rid of "DoNotCancel". Unless you need to catch a giraffe riding a skateboard while eating pizza next to a hydrant, I'd turn off the standard model and use only a single custom model for any single camera. If you haven't already, switch to the YOLOv5 .NET module. Finally, if you have the capability to run CodeProject.AI with a GPU (perhaps on another computer on the network), that will definitely speed up your detections.
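On point 6, the difference between the old MEMO-style call and passing the full CPAI JSON is roughly the following sketch. The field names and structure here are illustrative guesses, not the ALPR Database project's actual schema; check its docs for the real payload:

```python
import json

# Old style: BI's &MEMO macro carries only the plate text and confidence
# as a flat string, so the database loses the rest of the detection data.
memo_style = "ABC1234:91%"  # hypothetical plate, for illustration only

# New style: post the full CPAI detection objects from &JSON, so the
# database gets confidences and bounding boxes for every plate in frame.
ai_dump_style = {
    "ai_dump": [
        {"label": "plate", "plate": "ABC1234", "confidence": 0.91,
         "x_min": 820, "y_min": 440, "x_max": 1020, "y_max": 505},
    ]
}

payload = json.dumps(ai_dump_style)
print(payload)
```

The practical upside is that a multi-plate frame can produce one entry per detection instead of one flattened string.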
 
^+1 this - I agree, you just seem to have too much going on. And is it really a NUC you are using? That could certainly contribute to it regardless of generation, as they are just not designed for 24/7 operation. Are your drives 2.5" or 3.5", and USB or SATA?

Both Mike and I have shown in his other thread rapid fire reads of cars going by close together (3 cars in 4 seconds).

Blue Iris and CodeProject.AI ALPR
 
Here are a few of my thoughts.
  1. You are using a CPU for all your detections. I'm seeing typical times in the 500ms+ range for each call, from looking at your screenshots, sometimes up towards 900ms which is close to 1 full second.
  2. You are asking for 20 frames to be tested, which is quicker than CPAI can run its detections.
  3. You are telling Blue Iris not to cancel ("DoNotCancel" in the Cancel field), so it will dutifully keep running images in that timeframe and pick out what it considers the best one.
  4. You are using the standard object model which is slower and possibly adding the CPAI load from other cameras.
  5. You are using the YOLOv5 6.2 module but the YOLOv5 .NET module is generally faster.
  6. In your call to the ALPR Database, you are still using the MEMO field. Check the new method passing in the "ai_dump" from the &JSON directly.
My suggestions: lower the number of frames you are requesting to no more than 5, and give them more time in between. Get rid of "DoNotCancel". Unless you need to catch a giraffe riding a skateboard while eating pizza next to a hydrant, I'd turn off the standard model and use only a single custom model for any single camera. If you haven't already, switch to the YOLOv5 .NET module. Finally, if you have the capability to run CodeProject.AI with a GPU (perhaps on another computer on the network), that will definitely speed up your detections.
 
I use a CPU for detection. It takes maybe 1/4 to 1/2 a second to read a plate. If I analyze 30 images whenever a plate comes by, that takes, worst case, 15 seconds of CPU time. My street is pretty slow; maybe 25 houses on it. If you just want to add plates to a database and not make any quick decisions, it's not a problem. I'd love to use a Google TPU for this.
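The worst-case arithmetic above works out like this (just the numbers from the paragraph, nothing assumed about the hardware):

```python
# Total CPU time to analyze a burst of frames at a given per-read cost.
def burst_cpu_seconds(images: int, seconds_per_read: float) -> float:
    return images * seconds_per_read

print(burst_cpu_seconds(30, 0.5))   # 15.0 -- worst case at 1/2 s per read
print(burst_cpu_seconds(30, 0.25))  # 7.5  -- best case at 1/4 s per read
```

Either way the burst is far longer than the 1-2 seconds between closely following cars, which is why a slow per-read time matters even when average traffic is light.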
 
Here are a few of my thoughts.
  1. You are using a CPU for all your detections. I'm seeing typical times in the 500ms+ range for each call, from looking at your screenshots, sometimes up towards 900ms which is close to 1 full second.
  2. You are asking for 20 frames to be tested, which is quicker than CPAI can run its detections.
  3. You are telling Blue Iris not to cancel ("DoNotCancel" in the Cancel field), so it will dutifully keep running images in that timeframe and pick out what it considers the best one.
  4. You are using the standard object model which is slower and possibly adding the CPAI load from other cameras.
  5. You are using the YOLOv5 6.2 module but the YOLOv5 .NET module is generally faster.
  6. In your call to the ALPR Database, you are still using the MEMO field. Check the new method passing in the "ai_dump" from the &JSON directly.
My suggestions: lower the number of frames you are requesting to no more than 5, and give them more time in between. Get rid of "DoNotCancel". Unless you need to catch a giraffe riding a skateboard while eating pizza next to a hydrant, I'd turn off the standard model and use only a single custom model for any single camera. If you haven't already, switch to the YOLOv5 .NET module. Finally, if you have the capability to run CodeProject.AI with a GPU (perhaps on another computer on the network), that will definitely speed up your detections.
So, here are my current settings: lowered frames to 5, got rid of DoNotCancel, added pizza. I'm using YOLOv5 .NET. I didn't understand where/how to "turn off the standard model and only use a single custom model for any single camera". Please advise. I still need to upgrade to ALPR Database 0.1.8 to use "ai_dump". Currently a GPU isn't an option (see my attached NUC specs). The CodeProject screenshot is for one lone car, and the AI screenshots are current for both BI and the camera in question. What do you think?

(Attachments: C3.JPG, C1.JPG, C7.jpg, C5.jpg, C4.jpg)
 
Putting anything in the To Cancel field that isn't an expected object means it will continue processing through all the frames. I was suggesting it be left blank, but maybe that's not necessary now that you've lowered the frame count.

The AI settings screen has the "Default object detection" turned on. If you are only using ipcam custom models, the default objects are unnecessary.

All of this can be checked by looking at more of your .dat files to see what is taking the most time during the processing of a plate.