Planning : ALPR

@nayr: Thanks for sharing your ALPR experiences in these threads. I'm curious what fraction of the traffic you see gives you a readable plate? I did some casual tests using some of my frame grabs at http://www.openalpr.com/demo-image.html but only got a few to work. Probably I have too much of an angle to the road.

It almost never misses a plate; I'd say 99% or better from my observations, presuming a plate is there. Any time a car went by and it did not register a plate, I reviewed the footage and found the car had no visible plate, likely a temporary tag or something non-reflective. I typically get 3-4 captures from most traffic, and a lot more from really slow traffic.

OpenALPR has minimum and maximum plate-size filters you can tune so it ignores objects that are clearly not plates; those parameters need to be tuned for your environment.

I am feeding it 4MP video, though, so those images on that page are not realistic. Use these ones they provide:
ea7the.jpg

h786poj.jpg
 
It almost never misses a plate; I'd say 99% or better from my observations
Wow! That's much better than I expected. I thought that "consumer" LPR would never be very solid, but this is inspiring news. So if I have a good enough camera, lens, light, and viewing angle, this really can be practical. Good to know.
 
I stacked all the odds I could in my favor: great angle, great camera, great resolution, great optics, great IR, great hardware capabilities. It works better than I had expected, but YMMV. I'd like to add another camera covering the other direction as a backup, so I can get every car even the out-of-state ones with only a rear plate, but the odds in that direction are not in my favor and I'm not quite willing to make the investment for poor results.

It was not cheap, but commercial setups cost even more and would not have performed in my environment; they are usually designed to be mounted on a pole next to or over the road with lots of illuminators, not working at any great distance.

As an extra bonus, in the daytime I get nice shots of all the sidewalk traffic.
 
Yes, looking at your hardware list, I'd have to agree with all of the above. One unusual thing I tried is a narrowband interference filter on the camera that passes only 850 nm, letting my illuminator's light through while reducing the broadband headlight glare. For newer cars with LED taillights, the filter makes those lights almost disappear. The downside is that this filter works like a mirror, so I get stray multiple reflections from the headlight energy around 850 nm that does get through. It's also fixed in place, so my image is monochrome day and night. However, reading the threads here, it seems a fast shutter speed alone does just as well, so long as your IR illuminator is bright enough.

I also felt bad about wasting almost all the IR energy with a short shutter, so instead I tried the maximum shutter time with just a short IR pulse (strobe) running at the camera frame rate. That actually produces clear images of motion (only at night, of course), but the CMOS rolling shutter causes some funny artifacts, and other cameras with overlapping views running at different frame rates see the strobe as motion and constantly trigger on it.
 
I have done some experiments with the online demo at http://www.openalpr.com/demo-image.html and also the standalone program 'alpr' version 2.3.0 installed on an Acer C720 running Ubuntu 14.04. I feed a .mp4 clip directly into the alpr program, which generates a report for every frame of the video. With 1920x1080 input, it processes 2 frames per second, or 3.25 fps after editing /usr/share/openalpr/config/openalpr.defaults.conf to limit the maximum allowed plate size to 10% of the frame instead of 100% (a full-frame plate is impossible from my camera location, and yes, I need a longer lens). That is good enough to keep up with traffic here, since it only looks at the motion-capture clips, not all the empty frames.

I find that when the camera view is nearly square-on to the car, both programs work about the same. But when the view is at a steeper angle, the online demo still usually works while the local program almost never does. I'm guessing the local one needs me to generate the geometric de-skew parameters so the plate can be processed as if it were viewed straight-on, as explained at https://github.com/openalpr/openalpr/wiki/Camera-Calibration
 

Awesome. So you are running a separate computer for the task. What camera? What resolution? What is your setup?


I am thinking mailbox-mounted camera, with an underground PoE run to a standalone unit. A RasPi seems underpowered; I need to see if any of these Arduino/RasPi/micro computers can do the trick. Running it on my BI server seems to be advised against. I wonder if the i7-6700 could handle it...?

 
Mine is not optimized; for now I'm just using what I have on hand: a Raspberry Pi with a 3rd-party M12-mount camera and a "5MP 16mm lens". Cars are about 100 feet away, and I want to get the 25mm lens. It uses PiKrellCam to do motion detection and generate .mp4 clips of 1920x1080 at 24 fps. This model is the "normal" daylight camera with a fixed IR-cut filter, so it does not work at all at night. But hey, it cost $20.

I just noticed, using the -j option for JSON output, that frames where a plate is recognized take about 45 msec, while frames with no plate take around 310 msec. So you get up to a 7x speedup if you can limit your .mp4 clip to just the frames with actual motion detected and don't waste time on the rest. I'm sure more optimization is possible, like using lower resolution.
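As a sketch of how that JSON output can be consumed downstream (the field names "results", "plate", and "confidence" match what my alpr 2.3.0 emits, but verify against your own `alpr -j` run; the sample lines below are made up):

```python
import json

def best_read(json_line):
    """Return (plate, confidence) for the top result in one frame's
    alpr -j output line, or None if no plate was found in that frame."""
    frame = json.loads(json_line)
    results = frame.get("results", [])
    if not results:
        return None
    top = max(results, key=lambda r: r["confidence"])
    return top["plate"], top["confidence"]

# Hypothetical per-frame JSON lines in the shape alpr -j emits
lines = [
    '{"results": []}',
    '{"results": [{"plate": "4RYF539", "confidence": 85.64}]}',
]
reads = [best_read(line) for line in lines]
print(reads)  # [None, ('4RYF539', 85.64)]
```

Since no-plate frames dominate the runtime, a wrapper like this also makes it easy to measure how many frames are wasted on empty road.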
---
RPi software (PiKrellCam): http://billw2.github.io/pikrellcam/pikrellcam.html
RPi camera with M12 lens mount: http://www.ebay.com/itm/OV5647-Came...-for-Raspberry-Pi-3-B-2-Model-B-/272399560395
RPi camera without IR filter: http://www.ebay.com/itm/OV5647-NoIR...or-Raspberry-Pi-3-B-B-2-Model-B-/291404350105
M12 lens f=16mm, 5MP: http://www.ebay.com/itm/321917625469 (many other vendors also)
M12 lens f=25mm, 5MP: http://www.ebay.com/itm/282167088666 (I don't have this one yet)
M12 lock ring: http://www.ebay.com/itm/282153110180
plus a raspberry pi, of course. I am using the RPi only to run the camera, offloading ALPR to the Chromebook.

Example output as a car was driving away. In this case I knew the car appeared at 9 seconds into the clip, so I put in a seek offset to save time. Each line of text below comes from a separate frame, recorded at 24 fps; this 45-line output covers 50 input frames, or about 2 seconds of real time. You can see recognition drop off near the end as the plate gets smaller in the frame. You can sort the list and take the best result by confidence: in this case all the readings above confidence 82.6 are correct, which is 20 readings. The best in this clip was 85.64, but in other cases I have seen above 90% confidence. I may do even better with the 25mm lens, but we shall see.
Code:
alpr -n 1 --seek 9000 m1_2016-10-15_17.02.54_17.mp4 | grep confidence

    - 4RF539     confidence: 81.2365
    - 4RF539     confidence: 82.5857
    - 4RYF539    confidence: 83.9728
    - 4RYF539    confidence: 81.3316
    - 4RYF539    confidence: 84.1171
    - 4RYF539    confidence: 83.5318
    - 4RYF539    confidence: 83.8082
    - 4RYF539    confidence: 83.1123
    - 4RYF539    confidence: 82.2545
    - 4RF539     confidence: 81.1399
    - 4RYF539    confidence: 83.5976
    - 4RYF539    confidence: 84.0121
    - 4RYF539    confidence: 84.5939
    - 4RYF53     confidence: 82.3739
    - RYF539     confidence: 82.1314
    - 4RYF539    confidence: 84.4232
    - 4RYF539    confidence: 85.6433
    - 4RYF539    confidence: 84.0529
    - 4RYF539    confidence: 83.6499
    - 4RYF539    confidence: 83.7477
    - 4RYF539    confidence: 84.9798
    - 4RYF539    confidence: 84.9845
    - 4RYF539    confidence: 83.3394
    - 4RYF539    confidence: 82.5215
    - 4RYF539    confidence: 82.5771
    - 4YF539     confidence: 82.4017
    - 4RYF539    confidence: 83.2005
    - 4RYF539    confidence: 82.4861
    - 4RYF539    confidence: 81.9009
    - 4RYF59     confidence: 81.1114
    - 4RYF539    confidence: 82.6246
    - 4RYF539    confidence: 83.0617
    - 4RYF539    confidence: 81.848
    - RYF539     confidence: 81.0094
    - 4RYF539    confidence: 82.8815
    - 4RYF539    confidence: 81.3278
    - RF539      confidence: 81.0981
    - YF539      confidence: 80.7193
    - 4F539      confidence: 80.0927
    - YF539      confidence: 79.8389
    - RYF53      confidence: 80.9622
    - F539       confidence: 79.8027
    - 4RF5       confidence: 78.2911
    - BF539      confidence: 79.9442
    - RYF5       confidence: 79.0935
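A quick sketch of that sort-and-pick step, parsing the grep output format above (the majority-vote consensus is my own addition, not something alpr does; the sample is a subset of the listing):

```python
import re
from collections import Counter

def parse_reads(text):
    """Parse '- PLATE     confidence: NN.NN' lines from the grep output."""
    pattern = re.compile(r"-\s+(\S+)\s+confidence:\s+([\d.]+)")
    return [(m.group(1), float(m.group(2))) for m in pattern.finditer(text)]

sample = """
    - 4RF539     confidence: 81.2365
    - 4RYF539    confidence: 85.6433
    - 4RYF539    confidence: 84.0529
"""
reads = parse_reads(sample)

# Best single read by confidence
best = max(reads, key=lambda r: r[1])
print(best)       # ('4RYF539', 85.6433)

# Consensus: most frequent plate string across all frames
consensus = Counter(plate for plate, _ in reads).most_common(1)[0][0]
print(consensus)  # 4RYF539
```

With 20 correct readings per pass, the consensus vote should be quite robust against the occasional dropped or misread character.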
 
By the way, I didn't realize what size image was needed for alpr. Doing a test combining an image of a car and a plate, I got a correct read at 94% confidence level where the entire image is 560x420 pixels in size and the plate itself is only 84 pixels across. Of course these are perfectly clean images and the plate is exactly flat to the camera, but it gives an idea. If a plate is 12" wide, that means a real-world image scale of 7 pixels per inch. With a 720p camera of 1280 pixels across, that would mean a field of view 15 feet wide, or with a 1920x1080 camera it would be 23 feet across, which is nearly four car widths if a car is 6 feet wide.
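The arithmetic above, as a quick sanity check (12-inch plate, 84 pixels across, rounding as in the text):

```python
PLATE_WIDTH_IN = 12.0  # standard US plate width
PLATE_WIDTH_PX = 84    # measured plate width in the test image

px_per_inch = PLATE_WIDTH_PX / PLATE_WIDTH_IN  # 7.0 pixels per inch

def fov_feet(sensor_width_px):
    """Field-of-view width in feet at this image scale."""
    return sensor_width_px / px_per_inch / 12.0

print(round(fov_feet(1280), 1))  # ~15.2 ft for a 720p camera
print(round(fov_feet(1920), 1))  # ~22.9 ft for a 1080p camera
print(fov_feet(1920) / 6.0)      # ~3.8 car widths at 6 ft per car
```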

(Attached images: Toyota_plate-84px.jpg, Plate-Recognize.jpg)
 
There are minimum and maximum plate-size settings in the config that dramatically influence confidence levels and should be configured for the environment; this greatly helps its pattern recognition.
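For reference, the kind of plate-size settings being discussed look roughly like this (parameter names are from my recollection of openalpr.conf and may differ between versions, so verify against the comments in your installed openalpr.defaults.conf; the values below are placeholders, not recommendations):

```ini
; Maximum plate size as a percentage of the frame dimensions
; (lowering this from the default 100 also sped up processing, per the post above)
max_plate_width_percent = 15
max_plate_height_percent = 15

; Minimum plate size in pixels; smaller candidate regions are ignored
min_plate_size_width_px = 70
min_plate_size_height_px = 35
```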