Blue Iris and CodeProject.AI ALPR

Code:
{
  "Modules": {
    "FaceProcessing": {
      "EnvironmentVariables": {
        "USE_CUDA": "True",
        "CPAI_MODULE_SUPPORT_GPU": "True"
      },
      "Activate": false
    },
    "ObjectDetectionYolo": {
      "EnvironmentVariables": {
        "USE_CUDA": "True",
        "CPAI_MODULE_SUPPORT_GPU": "True",
        "MODEL_SIZE": "Large"
      },
      "Activate": true,
      "SupportGPU": true
    },
    "ObjectDetectionNet": {
      "EnvironmentVariables": {
        "USE_CUDA": "True",
        "CPAI_MODULE_SUPPORT_GPU": "True"
      },
      "Activate": false
    },
    "OCR": {
      "Activate": false
    },
    "ALPR": {
      "SupportGPU": true
    }
  }
}
 
Does the ALPR work, just without using CUDA?
Everything looks normal. Can you restart the CodeProject.AI service and test the ALPR again? After testing, post the log from the test.
 
Scattered through this thread and others, people say that Rekor or Plate Recognizer is more accurate than this one. Well, they should be, as they are paid services.

In case anyone is interested I am starting to deal with this downstream in my dashboarding database.

So far I have these 2 cases:

The second character in a UK plate cannot (I believe) be zero. On old-style plates there was a letter followed by numbers, but no leading zeros. On modern plates the first 2 characters are letters, so my script replaces a zero in position 2 with the letter O.

The second check looks for plates that have letters as the first 2 characters and also as the last 3 (assuming they were read correctly). In that case it replaces the letter O with zero in the middle characters.

Does that make sense? Can anyone else think of other cases that would work?

Here's my PHP code in case anyone is interested:

Code:
// char 2 of a UK plate can only ever be the letter O, never zero
if (substr($vehicle_plate, 1, 1) == '0') {
    $vehicle_plate = substr($vehicle_plate, 0, 1).'O'.substr($vehicle_plate, 2);
}

// replace letter O with zero in the middle of a modern (LLNNLLL) plate;
// the regex is anchored so only 7-character reads are touched
if (preg_match('/^[A-Za-z]{2}.{2}[A-Za-z]{3}$/', $vehicle_plate)) {
    $part1 = substr($vehicle_plate, 0, 2);
    $part2 = substr($vehicle_plate, 2, 2);
    $part2 = str_replace("O", "0", $part2);
    $part3 = substr($vehicle_plate, 4, 3);
    $vehicle_plate = $part1.$part2.$part3;
}
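
If you want to test the two rules quickly, here is the same logic wrapped in a small helper with a couple of made-up misreads (the helper name and example plates are just illustrative):

Code:
<?php
// Hypothetical wrapper around the two corrections above, handy for quick testing.
function normalise_uk_plate(string $vehicle_plate): string {
    // char 2 of a UK plate should be a letter, never zero
    if (substr($vehicle_plate, 1, 1) === '0') {
        $vehicle_plate = substr($vehicle_plate, 0, 1) . 'O' . substr($vehicle_plate, 2);
    }
    // modern (LLNNLLL) plate: swap letter O for zero in the middle pair
    if (preg_match('/^[A-Za-z]{2}.{2}[A-Za-z]{3}$/', $vehicle_plate)) {
        $vehicle_plate = substr($vehicle_plate, 0, 2)
            . str_replace('O', '0', substr($vehicle_plate, 2, 2))
            . substr($vehicle_plate, 4, 3);
    }
    return $vehicle_plate;
}

// Made-up misreads to illustrate both cases
echo normalise_uk_plate('A012BCD') . "\n"; // zero misread in position 2 -> AO12BCD
echo normalise_uk_plate('ABO2CDE') . "\n"; // letter O in the number pair -> AB02CDE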
 
I will test on one of my systems to see what happens when there is no internet later tonight.

Have you had a chance to try this without internet yet?

I did a clean install on my test computer (blew it out and reinstalled Win10 with the Media Creation Tool) and downloaded the latest 2.0.7 in case it fixed whatever issues I had with 2.0.6.

I am only testing CodeProject and not the integration with BI yet. I want to make sure it can read the new 3M plates consistently by feeding it images before I switch my main machine.

So I am just sending test images that CodeProject provides.

I reinstalled CodeProject last night and let the computer have internet all night.

This morning I try the test plate and it works:

[screenshot: successful test plate read]

Then I turn off wifi and try again, and even though it still shows as online it won't return anything and throws these errors in the log:

[screenshot: errors in the CodeProject.AI log]

Turn the wifi back on and it starts working again. I am stumped as to what could cause that.
 
@TheWaterbug, as others have suggested, you need a much higher shutter speed... 1/1000 minimum. Your FOV is also much wider than necessary and wastes pixels that could be better used on the plate. If you don't have varifocal, most fixed cams can be modified with different lenses for more magnification. I'd use a 25mm on that personally... and like wittaj mentioned, add a second cam if you still need an overview of the scene. The boobie cam modified with one 25mm lens would be an excellent fit if you needed to "add another cam".
 
@wittaj, the fact that I haven't tested this myself started to bug me... just disabled internet going to my BI pc, and will see how it goes. Right away I did get errors about not being able to find updates, but that's expected. FWIW, I'm doing it with iptables on my router since the PC still needs ethernet to talk to my lan (cameras etc). I will run the test 24hrs before concluding anything, after which I'll report back on this thread.

One thing that struck me as odd though...

Then I turn off wifi

Are you running wifi on your BI pc LAN connection?
 
Now I am probably at an advantage here since I had been using the integrated Plate Recognizer. But because the limit was 2,500 per month, I had to make sure that I only sent images that had a plate in them. It took a lot of trial and error to get this down.

Here is my setup, and you can see from the alert images that I am generally getting them with the plate in the center of the frame. In my case, I drew the zone where I did so that for cars going right to left I catch the front plate, and for cars going left to right I catch the back plate. It takes some trial and error to get the FOV and zone line drawn to where you get them, but you can see the idea of how to get it to trigger an alert with the vehicle in the frame. I do zoom in at night, as this FOV was just a little too wide for nighttime, but it is fine for daytime.

And then if you only wanted to catch vehicles going in one direction, simply add another zone to account for that.

[screenshots: camera FOV with the zone drawn and the resulting alert images]

So for those who are new to this, it is a lot easier to simply send every motion image from the LPR camera into the AI modules and let them do their thing, which is why a change in BI to picking the most-centered image could be a problem for some people; but taking a little bit of time to tweak zones and settings can also be of real benefit.
 
I just ran the exact same test and mine continues to work when it is offline so I'm baffled as well.

@wittaj, the fact that I haven't tested this myself started to bug me... just disabled internet going to my BI pc, and will see how it goes. Right away I did get errors about not being able to find updates, but that's expected. FWIW, I'm doing it with iptables on my router since the PC still needs ethernet to talk to my lan (cameras etc). One thing that struck me as odd though...

Are you running wifi on your BI pc LAN connection?

I am glad it is just me, as that means the solution is local, but it is baffling to me why this is happening because I cannot think of any setting that would cause it.

No this is not my BI computer. It is my test computer. My BI computer is hard-wired.

I am testing just the CodeProject portion without a camera or BI connected to it and trying just the sample images that CodeProject provides.

EDIT: I tried it hard-wired in case it was something with wifi. Same issue.
 
In case anyone is interested I am starting to deal with this downstream in my dashboarding database...

I saw the Node-RED thing, now this PHP thing, and there's my crappy batch script thing... I'm sensing that this is a very important application with many needles poking at it from different directions. This should be a hint to the BI devs at least.

I run Node-RED for all my Home Assistant automations, so I'm fairly familiar with it. I understand it has a less steep learning curve, but it is probably not the best tool for more advanced development going forward. Regardless of what language/compiler/etc. is used, I think it would be optimal if post-processing efforts were combined to make a standard yet powerful cross-platform tool to go with BI, should BI prove lackluster on this end. I'm no CSE, but I can help with beta testing/field reporting.
 
I would be really surprised to see any sort of post-processing built into Blue Iris, unless there is some way to crowd-source the algorithms that would be needed to identify which of the thousands of US license plate designs is in the image and then pattern match the letters/numbers based on what is valid on that design. Even the commercial LPR systems I've worked with that cost 5 figures per installation only try to identify tags from a few states at a time by loading region-specific firmware. The type of pattern matching that may work for UK and EU tags just seems impractical in the US where each state can have between dozens and hundreds of different tag designs with different letter/number patterns.

I'd love to be wrong about this, and maybe with enough input the AI can learn to mostly pick the correct option, but it will require a huge set of images and I'd still have very limited expectations when dealing with the night time black and white IR reflective images, since a lot of the information that lets you identify a tag's design is lost in those images.

Trying to differentiate between zero and the letter O on US plates seems like a waste of time. A few states use both. Many only use one or the other. For searching and matching purposes, the two should be considered to be a single character. I don't know if it's possible in the UK for two plates to exist that only differ by the fact that one has an O and the other has a 0 in the same position. I'm pretty sure that is impossible in the US, as the FBI's NCIC system converts all Os to zeros in queries, and you don't want to be driving around with the vanity plate "ST0LEN" after someone else reports their vanity plate "STOLEN" as stolen.

(Maryland did allow the vanity plates "NO TAG" and "NO TAGS" at one point. That was either a brilliant move that meant the owner never had to pay a parking ticket, or a dumb move meaning they got thousands of parking tickets a year for other people's unregistered cars. My guess is it's probably the latter.) :)

It's probably easiest to just normalize all the Os to zeros or zeros to Os in the database in the first place, but if you don't, it just pushes the issue to the query side of things. If I were looking through my data for the plate NOP102, I would, at a minimum, search for N (O,0) P 1 (O,0) 2. Even using Plate Recognizer as my ALPR engine, with its huge dataset and region-specific processing, I'd be searching (N,M,W,H) (O,0,Q,D) (P,B,R) (1,L,I) (O,0,Q) (2,Z) if I really wanted to do a thorough check for that tag in my database.
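
To make the query side a bit more concrete, here's a rough PHP sketch (PHP to match the earlier snippet in this thread) that expands a target plate into a regex using confusion groups; the groups and the sample data are illustrative assumptions, not an exhaustive or authoritative list:

Code:
<?php
// Rough sketch of query-side fuzzy matching: expand each character of a target
// plate into a set of characters the OCR commonly confuses it with, then build
// a regex to scan stored reads. The confusion groups below are illustrative only.
$confusions = [
    'O' => '[O0QD]', '0' => '[O0QD]', 'Q' => '[O0QD]', 'D' => '[O0QD]',
    'I' => '[I1L]',  '1' => '[I1L]',  'L' => '[I1L]',
    'Z' => '[Z2]',   '2' => '[Z2]',
    'S' => '[S5]',   '5' => '[S5]',
    'B' => '[B8]',   '8' => '[B8]',
];

function plate_search_regex(string $plate, array $confusions): string {
    $pattern = '';
    foreach (str_split(strtoupper($plate)) as $ch) {
        $pattern .= $confusions[$ch] ?? preg_quote($ch, '/');
    }
    return '/^' . $pattern . '$/';
}

// Hypothetical stored reads; in practice this would be a database query
$stored_plates = ['N0P1O2', 'NOP102', 'ABC123'];
$regex = plate_search_regex('NOP102', $confusions);
$hits  = array_filter($stored_plates, fn ($p) => preg_match($regex, $p));
print_r($hits); // matches both N0P1O2 and NOP102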
 
I would be really surprised to see any sort of post-processing built into Blue Iris

I would agree with this; BI in my mind should not be an ALPR system, it should just allow the calls to the AI systems and downstream processors. If someone wants to create a fully-fledged ALPR system with known plates, alerts, reports, etc. (I am actually doing this myself), then BI should just be flexible enough to integrate. The more that gets burned into BI, the less flexibility we get.

This is why it irked me that it has become a BI decision as to which image is classed as "best" to make decisions from. In my mind we tell BI to send an image, plus 4 others at 100ms increments, to the AI system. We should then be able to send the full JSON, including every 100ms variant, to the downstream system to decide how to interpret it to get the best result (in my instance the best "plate" confidence level, but in other people's examples the one closest to the middle of the frame). Having BI decide which is the best way just puts limitations in place.
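
For what it's worth, the downstream choice itself is simple. Here's a minimal sketch assuming the downstream system receives an array of decoded JSON responses, one per 100ms variant, each with a predictions list carrying a plate and a confidence (those field names are assumptions for illustration, not a documented schema):

Code:
<?php
// Minimal sketch: pick the single highest-confidence plate read out of the
// responses for all the 100ms variants. The array shape ('predictions',
// 'plate', 'confidence') is an assumption for illustration only.
function best_plate_read(array $responses): ?array {
    $best = null;
    foreach ($responses as $resp) {
        foreach ($resp['predictions'] ?? [] as $pred) {
            if ($best === null || $pred['confidence'] > $best['confidence']) {
                $best = $pred;
            }
        }
    }
    return $best;
}

// Example with two fake variants taken 100ms apart
$responses = [
    ['predictions' => [['plate' => 'AB12CDE', 'confidence' => 0.62]]],
    ['predictions' => [['plate' => 'AB12CDE', 'confidence' => 0.91]]],
];
$best = best_plate_read($responses);
echo $best ? $best['plate'] . ' @ ' . $best['confidence'] : 'no plate found';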
 
Thanks! Two very insightful replies... I very much appreciate such comments from folks who clearly have their heads wrapped around AI. The past few weeks I have gone from knowing nothing about AI to having a modest understanding of its use and the challenges that stand in the way. Most of that understanding has come from reading insightful responses like these.
 
This is why it irked me that it has become a BI decision as to which image is classed as "best" to make decisions from. In my mind we tell BI to send an image, plus 4 others at 100ms increments, to the AI system. We should then be able to send the full JSON, including every 100ms variant, to the downstream system to decide how to interpret it to get the best result (in my instance the best "plate" confidence level, but in other people's examples the one closest to the middle of the frame). Having BI decide which is the best way just puts limitations in place.

I don't disagree that Blue Iris should be flexible enough to accommodate different use cases. Since I'm using Plate Recognizer and don't have unlimited image analyses, I can't have (or at least don't want to pay more for) multiple images per alert analyzed, so I'll never have a 'highest confidence' on the read itself. I get one shot at each alert. I'm using about 8,000 to 9,000 of my 10,000 paid reads each month, which is why, for my use case, I need Blue Iris to pick the single best image from the local AI to send up for having the plate read.

I wonder if both systems can be used in tandem. CPAI first finds a DayPlate or NightPlate, then it looks at each candidate image with one of those objects to read the plate, then Blue Iris sends the image with the best plate (as rated by CPAI) to Plate Recognizer.
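
If anyone wants to experiment with that hand-off, here's a rough sketch of the upload step; the endpoint, header, and field names are my assumptions based on Plate Recognizer's published cloud API, so verify against your own account and docs before relying on it:

Code:
<?php
// Rough sketch of the tandem idea: only the frame CPAI rated best gets uploaded
// for a cloud read. Endpoint, header, and field names are assumptions based on
// Plate Recognizer's published cloud API; check your own account/docs first.
function send_to_plate_recognizer(string $image_path, string $api_token): array {
    $ch = curl_init('https://api.platerecognizer.com/v1/plate-reader/');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_HTTPHEADER     => ['Authorization: Token ' . $api_token],
        CURLOPT_POSTFIELDS     => ['upload' => new CURLFile($image_path)],
    ]);
    $body = curl_exec($ch);
    curl_close($ch);
    return $body !== false ? (json_decode($body, true) ?? []) : [];
}

// $best_frame would be whichever candidate image CPAI scored highest
// $result = send_to_plate_recognizer($best_frame, 'YOUR_API_TOKEN');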
 
I'm curious when the "always look for plates" bug will be fixed in BI. Other than that, the new LPR stuff has been working excellently for me.

[screenshot: AI usage across the cameras]

9 cams and one LPR... the bug wastes a lot of GPU cycles.
 
Can anybody explain how this could have ended up in the canceled alerts? Here's the plate and my settings: [attachments: plate image and settings screenshots]

I had some similar issues and in my case Blue Iris support said the AI return can be discarded if it comes back too late. The image with the license plate is at T+0, and the last image was at T+2000, which makes sense for your settings of 10 images @ 200ms. But the response time for that T+0 image is listed as 2287ms. I'm not sure if I'm reading the AI analysis right, but it looks like some of the responses from the AI took more than 6 seconds. If you're running CPU mode, you might need to add a GPU to your system to get adequate performance out of CPAI.

It also looks like you're running default object detection and license-plate at the same time. You might want to try only running license-plate to see if it speeds up the AI response.
 
Thanks. I'm running with a GPU. I was under the impression that with the new update we had to use the default object detection. I tried to mirror Mike's settings.