Hello,
This could be useful to any tinkerers with cars passing their camera (either with or without a full LPR solution already in place, such as OpenALPR or a plate reader service). I did this because my setup struggled to catch every plate that went by in a single image (yes, it probably could be done reliably, but I didn't want to compromise the other benefits of my setup configuration), so instead I save 10 images in rapid succession on a motion-detection trigger (on the road), and then filter those images for plates with the method below.
It also saves space, which helps for long-term storage (years and years), and makes the images quicker to look through later. Or if you already have a plate-reading setup, but it has a subscription with a limited number of hits per month, I expect this would dramatically cut down on wasted (no-plate) inference calls. For instance, in my dataset, fewer than 15% of images contain a plate.
1. Set up Blue Iris (BI) to capture images on motion (use a zone where the car passes) and save them to a folder (I will call this the input_directory).
2. Train a custom model with your images (draw boxes around the plates in your images, save to a zip, train in Colab for free, download the best.pt file) - see the Custom Models guide at docs.deepstack.cc. [Or use mine if I can figure out how to attach the .pt file, although be warned I didn't put much effort into it.]
3. Install the latest DeepStack for your platform (such as the DeepStack Windows Version CPU and GPU Beta).
4. Start the DeepStack server in PowerShell:
deepstack --MODELSTORE-DETECTION "C:/path-to-detection-models" --PORT 80
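Before pointing the full script at the server, you can sanity-check that something is actually listening on that port. A minimal sketch using only the standard library (the helper name and timeout are my own; adjust the URL/port to match how you started DeepStack above):

```python
import urllib.request
import urllib.error

def server_ready(url, timeout=2):
    """Return True if an HTTP server answers at url (any status code), False otherwise."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded, just with an error status code - it is up.
        return True
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out - the server is not reachable.
        return False

if __name__ == "__main__":
    # Change the port to whatever your DeepStack custom server is on.
    print(server_ready("http://localhost:80/"))
```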
5. Download and install Python 3.9, then from a cmd prompt run:
pip install requests
pip install pillow
6. Download Visual Studio Code and its Python extension.
7. Adjust the Python script below to suit your specific folder names (just change the input_directory and output_directory variables), and
adjust the port to match your DeepStack server (as in step 4).
Double-check the code formatting after pasting, particularly the indentation of the 'for' loops and 'if' blocks (I haven't uploaded a .py file before).
The script below scans the input_directory, sends each image to the DeepStack server running your custom model, and, if a plate is detected, crops the plate and saves it as a new image in the output_directory.
Send these crops to your plate-reader service, store them long term, or work out an OCR solution for yourself!
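For reference, DeepStack's custom detection endpoint returns JSON with a "predictions" list, where each entry carries a label, a confidence, and pixel bounding-box corners. A small sketch of picking out the best plate candidate from that structure (the sample numbers here are made up for illustration):

```python
# Example of the JSON shape returned by /v1/vision/custom/<model> (values are made up)
sample_response = {
    "success": True,
    "predictions": [
        {"label": "plate", "confidence": 0.87,
         "x_min": 410, "y_min": 620, "x_max": 540, "y_max": 665},
        {"label": "plate", "confidence": 0.15,
         "x_min": 30, "y_min": 10, "x_max": 90, "y_max": 40},
    ],
}

def best_plate(response, threshold=0.2):
    """Return the highest-confidence prediction above threshold, or None."""
    candidates = [p for p in response.get("predictions", [])
                  if p["confidence"] > threshold]
    return max(candidates, key=lambda p: p["confidence"]) if candidates else None

best = best_plate(sample_response)
print(best["confidence"] if best else "no plate")  # -> 0.87
```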
Python:
###python file - Custom_LPR_detection_v20201229###
import os

import requests
# Importing the Image class from the PIL module
from PIL import Image

input_directory = 'J:\\BlueIris\\LPR_Alerts'  # where Blue Iris saves jpgs from the motion trigger
output_directory = 'J:\\BlueIris\\Deepstack_LPR_dataset\\results'  # where python will save the cropped plates

# go through the entire input directory and process each file
for filename in os.listdir(input_directory):
    filepath = os.path.join(input_directory, filename)
    with open(filepath, "rb") as f:
        image_data = f.read()
    # post to the deepstack custom server; change port 80 to whatever your deepstack custom server is on
    response = requests.post("http://localhost:80/v1/vision/custom/best",
                             files={"image": image_data}).json()
    # log the full result for debugging
    print(response)
    plate_found = False
    for detection in response["predictions"]:
        label = detection["label"]
        confidence = detection["confidence"]
        print(label)
        # when a plate is found, perform an action (crop it and save to output_directory)
        if confidence > 0.2:
            plate_found = True
            # log details of the match
            print("Positive match: a plate was found in " + filename +
                  " with " + str(confidence) + " confidence.")
            # open the image in RGB mode
            im = Image.open(filepath)
            # set the points for the cropped image, with an extra margin of ~20 pixels
            left = detection["x_min"] - 20
            top = detection["y_min"] - 20
            right = detection["x_max"] + 20
            bottom = detection["y_max"] + 20
            # ensure the margin does not extend beyond the image limits - turn this on if it helps
            #left = max(left, 0)
            #top = max(top, 0)
            #right = min(right, im.width)
            #bottom = min(bottom, im.height)
            # avoid false positives due to the timestamp by filtering out that image corner -
            # commented out since detections under the timestamp were usually actual plates
            ##if detection["x_min"] > 100 or detection["y_min"] > 40:
            # crop to the dimensions above (this does not change the original image)
            im1 = im.crop((left, top, right, bottom))
            # show the crop in an image viewer
            ##im1.show()
            # save the cropped image (the extension comes from the filename)
            im1.save(output_directory + "\\cropped_" + filename)
            # log the action for debugging
            print("image saved as " + output_directory + "\\cropped_" + filename)
            ##else:
            ##    print("No image saved: likely a false positive due to timestamp")
    if not plate_found:
        print("No plate was found in image " + filename)
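The commented-out bounds check in the script hard-codes 1080; a version that clamps the padded box to whatever size the image actually is might look like this (the helper name is mine, not part of the script above):

```python
def padded_box(detection, image_width, image_height, margin=20):
    """Expand a DeepStack bounding box by `margin` pixels, clamped to the image edges."""
    left = max(detection["x_min"] - margin, 0)
    top = max(detection["y_min"] - margin, 0)
    right = min(detection["x_max"] + margin, image_width)
    bottom = min(detection["y_max"] + margin, image_height)
    return (left, top, right, bottom)

# In the script above this would be used as:
#   im1 = im.crop(padded_box(detection, im.width, im.height))
print(padded_box({"x_min": 5, "y_min": 30, "x_max": 1910, "y_max": 1070}, 1920, 1080))
# -> (0, 10, 1920, 1080)
```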