Blue Iris and CodeProject.AI ALPR

I'm setting up a new Blue Iris system as I've moved... Loving the integration with CodeProject.AI.

My first camera had to be set up as a license plate reader, just because. I've got a restricted-space driveway that forces cars to follow a single-lane path through a fence, so it's the perfect spot for the camera. I've also highlighted only the areas that will contain a license plate as a zone, to reduce the AI system's processing time. It all works brilliantly (though I'll be getting a different camera so it works at night too). I'm getting 99% confidence on all daytime plates.

I'm having three issues though:

  1. I don't get the plate associated with the clip. From my review of the feature, the plate is supposed to be searchable in the clips area, but it isn't. I've captured plates with "Only for confirmed vehicle alerts" both checked and unchecked (AI doesn't always say there's a car in the image, since the zone is focused on where the plates are, so only sections of the car are visible). My AI configuration for this camera is:

    2024-01-04_13-15-48.png

    2024-01-04_12-46-34.png
  2. My understanding is that Blue Iris will essentially send each still to AI until one exceeds the confidence threshold (in my case 80%), after which AI will stop processing the rest of the images in the clip. This makes sense, since it minimizes processing time that isn't needed. In my case, it keeps processing every image - and on the current NVR computer I'm using, that blows out the processing time fairly significantly.

    2024-01-04_13-23-06.png

  3. Finally, as you can see in the image above, the first few images don't find a plate - the front of the car is all that's in the zone. However, once I hit 99% confidence, I'd have thought all the remaining images in the clip would be skipped. I suspect they aren't simply because the AI model has only found a "DayPlate" and not an "object", but I'm not entirely certain.
Any thoughts on these issues?

Also, don't laugh at the times - I'm using this computer to test and validate things before I move Blue Iris to its eventual computer, which hasn't been built yet.

OK - one other question. Today I had a vehicle trigger Blue Iris's tagging feature... I'm now even more convinced that the reason I'm not getting the license tagging is that object detection isn't finding a vehicle. Anyway, I've been able to search for the license plate when looking at alerts, but the same search on clips doesn't work - that seems odd, since the metadata is associated with the clip, and the clip is either identified as an alert or not (in my case, at the moment, all clips are alerts)...

It's not a problem, but it does seem strange - at least I've figured out how to find license plates if they exist...

@Wired6400 Your config does not look right.

If you review this thread, MikeLud1 previously shared his suggested config: Blue Iris and CodeProject.AI ALPR (assuming you're on a recent CP.AI release)

As a data point, I have been getting plate #'s on my side and can see them in the UI3 alert list. Using the search box in UI3 for a plate # also works.

@actran - thanks - that config looks different from most of the configs I've seen on YouTube and in other forums - I'll give that a shot.

Adding the CustomObjects of objects:0 is an interesting thought - essentially forcing CP.AI to ignore the CustomObjects module...

@actran @MikeLud1 - Bingo - Mike's settings worked a treat! Bonus: because the custom modules are not executing, the execution time, even on a somewhat underpowered system, is actually quite reasonable!

Thank you! I'd done a fair amount of searching the web and watched way too many YouTube videos trying to figure this out - first post to ipcamtalk in a while, and I got the help I needed in short order. I appreciate the pointer to the post I hadn't discovered yet (though there seem to be ~700 posts in this thread, so maybe Mike's post could be pinned to the top).

Hi all, I'm interested in logging the plates captured by CodeProject.AI, and it seems this Code Red is the only option at the moment? I managed to fumble my way through setting up a home network with OpenVPN access and configuring Blue Iris to reliably capture plates, but I have ZERO programming knowledge. I'm good at following instructions, but programming talk is always a couple of steps above me, with a lot of 'fill in the blanks' lol.

My question is: is this something I can fumble through with zero programming knowledge by reading this thread in detail, or is some knowledge required? I know it's a weird question, but I'm currently at the 'I don't know what I don't know' stage.

I was really hoping a solution would have been developed by now within BI.

I don't have any programming experience, so I'm patiently waiting - but compared to the AI and BI programming already done, it seems like this should be easier to incorporate into BI at some point? Maybe that will be next on @MikeLud1's list of goodies!

Even an Excel spreadsheet would do me for now. Some way to export the timestamp, the plate number, and preferably a link to the still capture image saved in the Alerts folder to a .csv file. Have you come across anything like that?

@Nidstang If you're on a recent BI5 release and have configured CP.AI correctly to capture license plates, no programming is required. See the screenshot below. You'll get a list of plates in the alert list, and if you put a partial plate # in the search box below it, the alert list will be limited to only matching plates. This search box is available in both UI3 and the BI console window.

plate search.png

P.S. I've redacted the license plate #'s in my screenshot where they would normally be visible. The CP.AI ALPR license plate extraction (OCR) is pretty good on USA plates if the camera itself is properly configured. You'll have to tell us if OCR on Australian license plates is of similar quality.

The search box is available in the v5.7 release or higher; see the help screenshot below: help file.png

@actran Thanks for the tip! I have a lot to learn with BI. I still eventually want to collect this info externally somehow, so I can log the movements of particular vehicles, but it's not a priority right now.

CP.AI is impressive so far at capturing the Australian plates, considering I've spent so little time configuring everything. I'd say it correctly reads the full plate about 50% of the time, and only misinterprets one or two digits the rest of the time - usually when it's not a clear capture. I'm using a Z12E at a distance of 35 m (115 ft), with an angle of 30-35 degrees and a focal length of 60 mm. Vehicles are travelling between 20 and 60 km/h (12 to 40 mph). I suspect the main issue is that the plates are not horizontal on screen, because I haven't physically rotated the camera yet. I'm planning on moving this camera to a better position, so I haven't spent much time configuring it properly.

@Nidstang Getting data out of BI5 is straightforward. For me, I configured BI5 to send data via MQTT to Home Assistant. I can then use automations in HA to do more complex actions. MQTT is a pretty standard mechanism for data sharing across a number of home automation platforms.

Below is a screenshot of my Home Assistant dashboard with events listed on the right: home assistant.png

The other way to get data out of BI5 is to use its JSON API. Here is the help screenshot on the alertlist endpoint for getting the list of plates or other alerts:
JSON API.png

Generally, one thing I found really helpful for getting clear images is to prioritize shutter speed on the camera, making images clearer and less smudgy even at the expense of contrast. Gain should be as low as possible. That's something you configure in the camera itself.

How do I set up ALPR alongside normal detection (person, car, cat, dog, ...) and facial recognition? So far I've only seen settings to get ALPR running on its own :)

That would be in this thread:

Has anybody considered storing images with unique plates in separate folders (where one could manually identify the vehicle type) and then training a model to identify vehicle make/model? Thoughts?

I was thinking something like this myself - rather than having it say a license plate is

12S 765 at 90% confidence...

I would like to put that particular plate in and have it tell me if it sees that plate.

I often see my plate come back as 125 765 at 90% confidence or 12S 76S at 95% confidence... Letters that look similar get mixed up...

I use this information to open my garage door. I suppose I could do it at the next level up.

@wpiman Yeah, with OCR an S may look like the number 5, or vice versa; there are other equivalent patterns as well.

I do home automation using Home Assistant. When I get a plate # from BI5 via MQTT, I use regular expressions to match it against several known plates.

Example code:
Code:
variables:
  vehicle: >-
    {% if  trigger.payload_json['plate'] | regex_search('(12S 765|12S 76S|125 765)', ignorecase=True) %}wpiman_car
    {% elif trigger.payload_json['plate'] | regex_search('(9ERR27)', ignorecase=True) %}sarah_car
    {% elif trigger.payload_json['plate'] | regex_search('(9HFY69)', ignorecase=True) %}justin_car
    {% else %}unknown_car{% endif %}

Then I have the automation trigger actions based on the vehicle value, such as wpiman_car, rather than on the raw plate #s.
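For anyone doing the same matching outside Home Assistant (e.g. feeding a script from MQTT), the template above translates to plain Python fairly directly - the plate strings and car names here are just the examples from that template:

```python
import re

# Known plates plus their common OCR misreads, mirroring the HA template above.
PLATE_PATTERNS = {
    "wpiman_car": re.compile(r"12S 765|12S 76S|125 765", re.IGNORECASE),
    "sarah_car": re.compile(r"9ERR27", re.IGNORECASE),
    "justin_car": re.compile(r"9HFY69", re.IGNORECASE),
}

def classify(plate: str) -> str:
    """Return the vehicle name for the first matching pattern, else unknown_car."""
    for name, pattern in PLATE_PATTERNS.items():
        if pattern.search(plate):
            return name
    return "unknown_car"
```

The design idea is the same either way: trigger automations on the stable vehicle name, and keep the messy OCR variants confined to the pattern table.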

I use MQTT to send my plate to HomeSeer and was thinking of doing something very similar.

I think our plates use 2, 4, 5, 6, 7, 8, B, J, S, and Z.

The 8s and Bs get confused... 7s and Zs... 5s and Ss... Js and 1s...

I also get spaces in the middle of some of them... The permutations can get large...

I'm sort of thinking that if I trained the model with a bunch of shots of my plate coming and going, it could somehow look for JUST that plate, as opposed to trying to decode each letter.
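Short of retraining a model on one plate, another way to tame those permutations is to fold the commonly confused characters into one canonical form before comparing - a sketch, where the confusion pairs and the known plates are just illustrative examples:

```python
# Fold characters that OCR commonly confuses (extend the map for your plates).
CONFUSABLE = str.maketrans({"5": "S", "8": "B", "7": "Z", "1": "J"})

def canonical(plate: str) -> str:
    """Uppercase, drop spaces, and collapse confusable characters."""
    return plate.upper().replace(" ", "").translate(CONFUSABLE)

# Example known plates -> vehicle names; both sides get canonicalized,
# so "125 765", "12S 76S" and "12S 765" all land on the same key.
KNOWN = {"12S 765": "wpiman_car", "9ERR27": "sarah_car"}
CANONICAL_KNOWN = {canonical(plate): name for plate, name in KNOWN.items()}

def identify(ocr_plate: str) -> str:
    return CANONICAL_KNOWN.get(canonical(ocr_plate), "unknown_car")
```

This handles the 5/S-style swaps (and the stray spaces) without enumerating every permutation; the trade-off is that two genuinely different plates can collide if they differ only in confusable characters, so keep the map small.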