Blue Iris and DeepStack ALPR

MikeLud1

I took a break from my DeepStack Custom Model project and started work on a DeepStackALPR solution.

I made a lot of progress on the DeepStackALPR. The solution is an API that crops and rotates (if needed) the license plate. The API then reads all of the characters in the plate and logs the license plate details. The API will also save the DeepStack results in JSON format and YOLO format. This data can be used to improve the DeepStack Custom Model that does the OCR.
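For anyone curious how the flow works, below is a minimal sketch (not the actual API code) of the crop-then-OCR steps using DeepStack's custom model endpoint. The model names ("lplate" for plate detection, "lplate_char" for the character OCR), the DeepStack port, and the file names are assumptions for illustration only.

```python
# Sketch only: detect the plate, crop it, then OCR the crop with a second custom model.
# Model names, port, and file names below are placeholders, not the API's actual values.
import requests
from PIL import Image

DEEPSTACK = "http://localhost:82/v1/vision/custom"

def detect(image_path, model):
    """Send an image to a DeepStack custom model and return its predictions."""
    with open(image_path, "rb") as f:
        r = requests.post(f"{DEEPSTACK}/{model}", files={"image": f}, timeout=30)
    return r.json().get("predictions", [])

# 1. Find the plate in the full alert image, then crop it (the real API also rotates if needed).
plates = detect("alert.jpg", "lplate")
if plates:
    best = max(plates, key=lambda p: p["confidence"])
    box = (best["x_min"], best["y_min"], best["x_max"], best["y_max"])
    Image.open("alert.jpg").crop(box).save("LPR.jpg")

    # 2. Run the character model on the cropped plate and read it left to right.
    chars = detect("LPR.jpg", "lplate_char")
    plate = "".join(c["label"] for c in sorted(chars, key=lambda c: c["x_min"]))
    print("Plate:", plate)
```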

The model that does the OCR still needs some work; I would say it is about 85% accurate. The way the API is written, it will save a small text file that can be used to retrain the OCR model. Once everyone starts using the DeepStackALPR API, you can send me the alert images and the text files, and I will use them to retrain the OCR model. If we do this about two to three times we should have an accurate OCR model. I will post more details later.
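For reference, that small text file is a YOLO-format label file: one line per detected character, with the box normalized to the image size. Below is a hedged sketch of how such a file could be built from DeepStack predictions; the class list and function name are placeholders, not the API's actual code.

```python
# Sketch: convert DeepStack pixel boxes into normalized YOLO label lines for retraining.
# CLASSES is a placeholder; the real character class list is not shown in this post.
from PIL import Image

CLASSES = list("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")

def to_yolo_lines(predictions, image_path):
    """One YOLO label line per prediction: class_id x_center y_center width height (all 0-1)."""
    w, h = Image.open(image_path).size
    lines = []
    for p in predictions:
        cx = (p["x_min"] + p["x_max"]) / 2 / w   # box centre x, normalized
        cy = (p["y_min"] + p["y_max"]) / 2 / h   # box centre y, normalized
        bw = (p["x_max"] - p["x_min"]) / w       # box width, normalized
        bh = (p["y_max"] - p["y_min"]) / h       # box height, normalized
        lines.append(f"{CLASSES.index(p['label'])} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
    return lines

# Usage: write the lines next to the saved plate image so the pair can be used for retraining.
# with open("LPR.txt", "w") as f:
#     f.write("\n".join(to_yolo_lines(chars, "LPR.jpg")))
```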

Also, your antivirus might detect the API as a virus because the antivirus software has no signature reference for the API, since it is not mass-produced software. My Norton Antivirus did detect it as a virus and I had to let Norton Antivirus know it is OK to run.

Version 3.0 notes:
  • Version 3.0 is now an install exe file that will extract all the files and install the API as a service. It also adds DeepStackALPR to the Start Menu, where there are tools to control the DeepStackALPR service.
  • Added NSSM (the Non-Sucking Service Manager) to manage the DeepStackALPR service (see the sketch after this list)
  • Converted the Python script into an API. Since the script is now an API, you do not need to add any Actions on alerts in your LPR camera.
  • Only one Actions on alerts entry is needed, for the cropped camera (see instructions for details)
  • Fixed an issue where the first plate capture of the new day saved the log details, YOLO.txt, and the JSON file under the previous day.
  • Blue Iris Support (Ken) added the memo to the Blue Iris log.
  • V3.0 does not need some of the requested changes sent to Blue Iris Support (see below)
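If you want to check on or restart the service from a script instead of the Start Menu tools, here is a small sketch of driving NSSM from Python. It assumes nssm.exe is on the PATH and that the installer registered the service under the name "DeepStackALPR"; adjust to match your install.

```python
# Sketch only: control the DeepStackALPR Windows service through NSSM.
# Assumes nssm.exe is on the PATH and the service name is "DeepStackALPR".
import subprocess

def nssm(action, service="DeepStackALPR"):
    """Run an NSSM command such as start, stop, restart, or status."""
    return subprocess.run(["nssm", action, service]).returncode

nssm("restart")  # e.g. restart the service after swapping in a new model file
```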
Version 2.2 notes:
  • Fixed an issue where, if DeepStack identified two candidates for the first character in the license plate, the script might use the wrong character.
    • If you are on version 2.1, you only need to replace the "DeepStackALPR.exe" file to upgrade to version 2.2 (reboot the Blue Iris PC after replacing)
  • New OCR DeepStack Model (lplate_char.pt)
    • If you are on version 2.1, you only need to replace the "lplate_char.pt" file to upgrade to version 2.2 (reboot the Blue Iris PC after replacing)
Version 2.1 notes:
  • Corrected rotation error
  • Corrected trigger delay error
Version 2.0 notes:
  • Only one script is needed for V2.0. The script does the cropping and rotation (if needed). Once cropped, the script has DeepStack run the OCR on the cropped plate.
  • After the OCR, the script will save the DeepStack results in JSON format and YOLO format (for future retraining of the OCR custom model)
  • Since there is only one script, only one camera with the cropped license plate is needed.
  • Also, the new version is twice as fast as V1.1 since there is only one script.
Known issue:
  • If there is a parked car and a car drives by, the API might crop & OCR the plate of the parked car instead of the car that drove by.
Requested changes sent to Blue Iris Support (Ken) to improve the integration:
  • When sending Blue Iris a JSON trigger, can you have Blue Iris respond with the file name for the alert and add the memo to the log? Currently it only shows “external” (a sketch of the trigger in question follows this list)
  • When using the &JSON macro, can you include the full file path for the alert (F:\BlueIris\Alerts\ALPR\20220105\ALPR.20220105_235240.0.16-0.0.0.jpg)
  • If you turn off "Burn label mark-up onto alert images", the alert image that is saved is the motion-triggered image, not the image DeepStack confirmed. Can you please change this to the confirmed image?
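For context, this is roughly what the JSON trigger looks like. The sketch below is illustrative only: the host, port, credentials, and camera short name are placeholders, and whether your Blue Iris build accepts a "memo" field on the trigger command depends on your version.

```python
# Hedged sketch of sending Blue Iris a JSON trigger with the plate text as the memo.
# Host, port, credentials, and camera short name below are placeholders.
import hashlib
import requests

BI = "http://127.0.0.1:81/json"
USER, PASSWORD, CAMERA = "admin", "password", "ALPR"

def bi_login():
    """Blue Iris two-step JSON login; returns the session id."""
    session = requests.post(BI, json={"cmd": "login"}).json()["session"]
    answer = hashlib.md5(f"{USER}:{session}:{PASSWORD}".encode()).hexdigest()
    requests.post(BI, json={"cmd": "login", "session": session, "response": answer})
    return session

session = bi_login()
requests.post(BI, json={"cmd": "trigger", "camera": CAMERA,
                        "session": session, "memo": "ABC1234"})  # memo support varies by BI version
```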
 


I just posted V1.0 in the first post. I still need to write up the instructions on how to set it up. To start, you can do the first step, which is to set up your LPR camera to use the DeepStack custom model below. I am hoping to finish the instructions sometime today.

Attached is a DeepStack custom model that can be used to confirm whether a license plate is in the FOV of your LPR camera. The DeepStack labels for the model are "DayPlate" and "NightPlate".

Let me know how the custom model works for you. If you are having poor results and want to add your LPR images to the model, let me know and I will post instructions on how to label the images so I can add them to the custom model.
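If you want to test the model against a snapshot outside of Blue Iris, a quick way is to call DeepStack's custom model endpoint directly. In the sketch below the model name ("plate"), port, and confidence threshold are assumptions; "DayPlate" and "NightPlate" are the labels described above.

```python
# Sketch: query a DeepStack custom model directly to check for a plate in a snapshot.
# Model name, port, and min_confidence are assumptions for illustration.
import requests

with open("lpr_snapshot.jpg", "rb") as f:
    reply = requests.post(
        "http://localhost:82/v1/vision/custom/plate",
        files={"image": f},
        data={"min_confidence": 0.6},
    ).json()

for p in reply.get("predictions", []):
    if p["label"] in ("DayPlate", "NightPlate"):
        print(p["label"], round(p["confidence"], 2))
```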

This model was created with the help of @aesterling. Below are the states where the plates were captured and the number of images used to train the model.
State Where Plates Were Captured | Day Plate Count Used For Training the Custom Model | Night Plate Count Used For Training the Custom Model
New York | 530 | 530
Minnesota | 291 | 103

I took a break from my DeepStack Custom Model project and started work on a DeepStack LPR solution.

I made a lot of progress on the DeepStack LPR. The solution is made up of two Python scripts: the first script crops and rotates (if needed) the license plate, and the second script reads all of the characters in the plate and logs the license plate details. The second script also saves the YOLO details for the labels, so that the saved plate image and YOLO details can be used to improve the DeepStack Custom Model that does the OCR.

 
I’ll have to give this a try.

Thank you for working on this as I know many people are looking for this.
 
Dang I was just getting ready to try it!

Are you running the substream to mainstream for your LPR or just mainstream?
 
I just posted V1.1 and the instructions on how to set up the DeepStack LPR. The model that does the OCR needs some work; I would say it is about 75% accurate. The way the OCR script is written, it will save a small text file that can be used to retrain the OCR model. Once everyone has been using the DeepStack LPR models and scripts for a week or two, you can send me the alert images and the text files, and I will use them to retrain the OCR model. If we do this about two to three times we should have an accurate OCR model. I will post more details later in the week.

Also, your antivirus might detect the scripts as a virus because the antivirus software has no signature reference for the Python scripts, since they are not mass-produced software. My Norton Antivirus did detect them as a virus and I had to let Norton Antivirus know it is OK to run.
 
Got the crop part to work. The OCR is not working for me, perhaps due to bad hardware (90-100% CPU); I get "no signal" on the 2 newly created cameras.
How will the OCR part work? And is it possible to have the license plate mailed or pushed, to get a notification when away?

But very good work.

BR
 
What are your DeepStack times for the crop plate?
Can you send screenshots of the two new cameras' Network IP configuration and of the LPR folder you created? If the crop part is working correctly, there should be a file named LPR.jpg in the folder.
 
Also for now just do up to and including step 5 and see if you can get that much working.

 
[Screenshots attached: cam1.PNG, cam2.PNG, folder.PNG]

The cropped image is in the folder, so this part works. I am still lost regarding the OCR part.

Do you have any suggestions for the trigger part? BI doesn't trigger a snapshot from the camera; yesterday I made a manual trigger to get an image from the camera.

BR.
 
Make sure you have the trigger set up in the LPR camera that you manually triggered. Below is a post with my settings.




 