License plate detection, crop, and save

cjowers

Hello,

This could be useful to any tinkerers with cars passing by their camera (with or without a full LPR solution already, such as OpenALPR / a license plate reader). I did this because with my setup it was challenging to catch every plate that went by in a single image (yes, it probably could be done reliably, but I didn't want to compromise on other benefits of my setup configuration), so I instead decided to save 10 images in rapid succession on the motion detection trigger (zone on the road), and then filter those images for plates with the method below.

It also saves space, which could be useful for long-term storage (years and years), and makes the images quicker to look through later. Or if you already have a plate reading setup with a subscription limited to a certain number of hits per month, I expect this would dramatically cut down on wasted (no-plate) inference calls; for instance, in my dataset, less than 15% of images contain a plate.

1. Setup BI to capture images on motion (use a zone where cars pass) and save them to a file folder (I will call this the input_directory).

2. Train a custom model with your images (draw boxes around the plates in your images, save to a zip, run the training in Colab for free, and download the best.pt file - a sketch of the Colab training cell is below this list [or use mine if I can figure out how to attach the .pt file, although be warned I didn't put much effort into it]).

3. Install the latest DeepStack for your platform (such as DeepStack Windows Version CPU and GPU Beta).

4. Start the DeepStack server in PowerShell:
deepstack --MODELSTORE-DETECTION "C:/path-to-detection-models" --PORT 80

5. Download and install Python 3.9, then:

pip install requests (in cmd)
pip install pillow (in cmd)

6. Download Visual Studio Code and the Python extension for Visual Studio Code.

7. Adjust the Python script below to suit your specific file folder names (just change the input_directory and output_directory variables), adjust the port to match your DeepStack server (as in step 4), and check that the code formatting is correct, particularly the indentation of the 'for' and 'if' blocks (I haven't uploaded a .py file before). A quick sanity check of the server is also sketched below.
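
For step 2, the Colab training run usually boils down to a cell like the one below. This is a sketch only - it assumes the ultralytics yolov5 repo, which is what most of the guides use, and the dataset.yaml name / epoch count are placeholders you'd change:

Python:
#Colab cell - train a yolov5 custom model (assumes the ultralytics yolov5 repo)
!git clone https://github.com/ultralytics/yolov5
%cd yolov5
!pip install -r requirements.txt

#dataset.yaml points at your train/test image folders and lists the plate class
!python train.py --img 640 --batch 16 --epochs 300 --data dataset.yaml --weights yolov5m.pt
#when it finishes, download runs/train/exp/weights/best.pt

And for step 7, before running the full script you can sanity-check that the DeepStack server and model respond with a one-off request (minimal sketch - assumes the model file is named best.pt and port 80, as in step 4):

Python:
#quick one-off test of the deepstack custom endpoint
import requests

with open("test_image.jpg", "rb") as f:     #any capture from your input_directory
    response = requests.post("http://localhost:80/v1/vision/custom/best",
                             files={"image": f.read()}).json()

print(response)     #expect a JSON dict with a "predictions" list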

The script below will scan the input_directory, send each image to the DeepStack server running your custom AI model, and, if a plate is detected, crop the plate and save it as a new image in the output_directory.
Send these to your plate reader service, store them for the long term, or work out an OCR solution for yourself!





Python:
###python file - Custom_LPR_detection_v20201229###

import os

import requests

# Importing the Image class from the PIL module
from PIL import Image

input_directory = 'J:\\BlueIris\\LPR_Alerts'                         #where BlueIris saves jpgs from the motion trigger
output_directory = 'J:\\BlueIris\\Deepstack_LPR_dataset\\results'    #where python will save the cropped plates

#goes through the entire directory and processes each file
for filename in os.listdir(input_directory):

    filepath = os.path.join(input_directory, filename)

    with open(filepath, "rb") as f:
        image_data = f.read()

    #posts to the deepstack custom server and logs the result
    #(change port 80 to whatever your deepstack custom server is on)
    response = requests.post("http://localhost:80/v1/vision/custom/best",
                             files={"image": image_data}).json()

    #log result for debugging
    print(response)

    plate_found = False
    for detection in response["predictions"]:
        label = detection["label"]
        confidence = detection["confidence"]
        print(label)

        #when a plate is found, perform an action (crop and save the image to output_directory)
        if confidence > 0.2:
            plate_found = True
            #log details of the match
            print("yes, it is a positive match.  A plate was found in " + filename +
                  " with " + str(confidence) + " confidence.")

            # Opens the image in RGB mode
            im = Image.open(filepath)

            # Setting the points for the cropped image,
            # including an extra margin of ~20 pixels
            left = detection["x_min"]   - 20
            top = detection["y_min"]    - 20
            right = detection["x_max"]  + 20
            bottom = detection["y_max"] + 20

            #ensure the margin does not extend beyond the image limits - turn this on if it helps
            #if left < 0:             left   = 0
            #if top < 0:              top    = 0
            #if right > im.width:     right  = im.width
            #if bottom > im.height:   bottom = im.height

            #avoid false positives due to the timestamp by filtering out the image corner -
            #commented out since it was usually actual plates under the timestamp
            #if detection["x_min"] > 100 or detection["y_min"] > 40:

            # Cropped image of the above dimensions
            # (it will not change the original image)
            im1 = im.crop((left, top, right, bottom))

            # Shows the image in an image viewer
            ##im1.show()

            # save the cropped image
            im1.save(output_directory + "\\cropped_" + filename)

            #log the action for debugging
            print("image saved as " + output_directory + "\\cropped_" + filename)

    if not plate_found:
        print("No Plate was found in image " + filename)
 

aesterling

@cjowers this is excellent and thanks for taking the time to post! I mentioned in another thread that I followed your instructions to annotate and train my own license plate model a couple of weeks ago.

Since then, I've gathered an additional 500 images to train a more accurate model, but I'd like to speed up the annotation step by using my original license plate model to automatically annotate the new images. I believe this is called "model-assisted annotation" or "automatic annotation"?

I'm wondering if your python script could be modified to generate and save an XML file (like the one pasted below) for each input image, just like what LabelIMG generates? I don't know Visual Studio, but it seems like all the pieces are there. Or if you know of an existing free or easy way to do this, let me know. :)

Thanks again!

XML:
<annotation>
    <folder>test</folder>
    <filename>LRP.20210602_162142003.109.jpg</filename>
    <path>C:\BlueIris\license_plate_stills\test\LRP.20210602_162142003.109.jpg</path>
    <source>
        <database>Unknown</database>
    </source>
    <size>
        <width>3840</width>
        <height>2160</height>
        <depth>3</depth>
    </size>
    <segmented>0</segmented>
    <object>
        <name>license_plate</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>2398</xmin>
            <ymin>1122</ymin>
            <xmax>2828</xmax>
            <ymax>1400</ymax>
        </bndbox>
    </object>
</annotation>
 

cjowers

(quoting @aesterling's post above about using the original model to auto-annotate new images and save a LabelIMG-style XML for each one)
That's great! I don't know of an easy way to do this, but I know it can be done... For instance, I have seen people use modeFRONTIER to automate image manipulation for machine learning datasets / training models. It is not really designed for this, but it does work.

But you are correct, I think: you could modify this python script to create the XML based on the output received from the original deepstack model (x/y min/max). From memory, the LabelIMG XML stores the class name plus pixel xmin/ymin/xmax/ymax for each detected object in the image, while the YOLO txt flavour is one row per object of class, x-center, y-center, width, height (normalised to the image size - the OCR script further down does exactly that conversion). So it should be easy to write the file.

You'd definitely want to check through all the results to ensure it is not introducing errors into your model, so how much you will gain in terms of time and an improved model I am not sure. But if you do this for every new plate image (adding it to your model dataset) and periodically retrain the model (you'd want to version track, in case things go bad :)), that could be interesting / maybe worth it in the long run.

I might try this soon (in a few weeks), as I am doing an OCR model with a lot of labeling. Please let me know if you beat me to it!
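
Actually, here's a rough, untested sketch of the XML idea to get you started. It reuses the detection call from my first script and fills a LabelIMG-style template copied from your example - the paths, port, and 0.2 confidence cutoff are placeholders to adjust:

Python:
#sketch only (untested): auto-write a LabelIMG-style Pascal VOC .xml per image
#reuses the deepstack call from the first script; template copied from the example above
import os
import requests
from PIL import Image

input_directory = 'J:\\BlueIris\\LPR_Alerts'    #change to your folder

voc_template = """<annotation>
    <folder>{folder}</folder>
    <filename>{filename}</filename>
    <path>{path}</path>
    <source>
        <database>Unknown</database>
    </source>
    <size>
        <width>{width}</width>
        <height>{height}</height>
        <depth>3</depth>
    </size>
    <segmented>0</segmented>
{objects}</annotation>
"""

object_template = """    <object>
        <name>license_plate</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>{xmin}</xmin>
            <ymin>{ymin}</ymin>
            <xmax>{xmax}</xmax>
            <ymax>{ymax}</ymax>
        </bndbox>
    </object>
"""

for filename in os.listdir(input_directory):
    if not filename.lower().endswith('.jpg'):
        continue
    filepath = os.path.join(input_directory, filename)
    with open(filepath, "rb") as f:
        response = requests.post("http://localhost:80/v1/vision/custom/best",
                                 files={"image": f.read()}).json()

    #one <object> block per confident detection
    objects = ""
    for detection in response.get("predictions", []):
        if detection["confidence"] > 0.2:
            objects += object_template.format(
                xmin=detection["x_min"], ymin=detection["y_min"],
                xmax=detection["x_max"], ymax=detection["y_max"])
    if not objects:
        continue    #no plate found, so no xml written

    width, height = Image.open(filepath).size
    with open(filepath.rsplit('.', 1)[0] + '.xml', 'w') as f:
        f.write(voc_template.format(folder=os.path.basename(input_directory),
                                    filename=filename, path=filepath,
                                    width=width, height=height, objects=objects))

You'd still want to spot-check the generated boxes in LabelIMG before training on them, per the caveats above.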
 

cjowers

Happy to share my best.pt model weights, but the forum doesn't allow that filetype to be uploaded. Anyone can message me with a OneDrive link or email or something and I can drop it in (~14MB, based on 200 labeled images, color and IR B&W, in a slightly overexposed environment which makes the plates not very high contrast).
 

cjowers

(again quoting @aesterling's auto-annotation question above)
I managed to write the YOLO automated-labeling annotations into a python script that identifies the characters in plate images. It may take you some time to get it running / configured with your file system, but it should help anyone who wants to do something similar (either automated labeling, or running detection with a custom-made OCR model on a large dataset of cropped images).
I used YOLOv5m with ~200 images to train. Results are good, but not perfect.

***Edited to update the code massively - v20210712. Way, way better now!
*****Edited with small updates - v20210713


Python:
#Custom_LPR_OCR_detection_v20210713_wAutoYoloLabeling.py
#created by u\shallowstack for the purpose of either:  autolabeling for YOLO deepstack custom model for OCR, 
# or for logging the detected characters in images within a file directory, with a number of filtering capabilities.

# before running this code, first run the below call in Powershell (not CMD) to start Deepstack custom model:
#PS C:\Windows\system32> deepstack --MODELSTORE-DETECTION "C:\AI_models\lpr_ocr\" --PORT 97
#change the port and dir to wherever your custom (character recognition) AI model is...

#major changes to v20210712 - 
# deals with overlapping detected characters, as I think these are common with custom OCR models.
# if there are too many detections, it now filters down to the expected number of characters.
# now skips any left over yolo text files in the directory without error.  note: deletes txt file only when it needs to write a new one of the same name.
# better debugging info and code comments for usability

import requests
import os
import PIL
import sys
from PIL import Image

#CHANGE THESE GLOBAL VARIABLES for customized control
overlapping_threshold = 4           #how close the characters can be before the inferior one is ignored
min_conf_thres = 0.6                #if the worst confidence is less than this, the result won't be written to output_logfile
min_len_thres = 3                   #if no more than this number of characters is detected on the plate, the result won't be written to output_logfile
plate_len_threshold = 6             #if more characters than this are detected, the program will cut the weakest-confidence characters until this value is met

#CHANGE THESE DIRECTORIES TO MATCH YOUR FILESYSTEM
input_directory = 'J:\\BlueIris\\Deepstack_LPR_dataset\\results_2021'                          #where large dataset of cropped plates live
output_logfile = 'C:\\Users\\BlueIrisServer\\Desktop\\logfile.txt'            #where python will save the log file

#change this file pointer to wherever your YOLO class file is.  it will sync up the character labels with the class numbering scheme
classYoloFilename = 'J:\\BlueIris\\Deepstack_LPR_dataset\\OCR_testing\\QLD Model\\train\\classes.txt'

#reads 'class' txt file, and puts into a list to be searched every time a label is given, with the index value of the list returned so it can be written to the YOLO file.
classLabels = tuple(open(classYoloFilename).read().split('\n')) 
print(classLabels)        

#get # of x pixels for yolo file conversion from pixel # to %
def get_xnum_pixels(filepath):
    width, height = Image.open(filepath).size
    return width
#get # of y pixels for yolo file conversion from pixel # to %
def get_ynum_pixels(filepath):
    width, height = Image.open(filepath).size
    return height

#cleanup variables before run, just in case
resp_label = [0]
resp_conf = [0]
resp_pos = [0]
resp_data2write = [""]
resp_label.clear()
resp_conf.clear()
resp_pos.clear()
resp_data2write.clear()
YoloLabelFilepath = ""
data2write = ""

i = 0

#goes through entire input_directory and processes each file to look for alphanumeric characters
for filename in os.listdir(input_directory):
    #store full filepath of current file under inspection
    filepath = os.path.join(input_directory, filename)
    
    #skip file if it is a txt file
    if filename.lower().endswith('.txt') :    #endswith is safer than rsplit if a filename has no extension
        print(filename + " is a txt file, SKIPPED OCR DETECTION ON THIS FILE.")
        #os.remove(filepath)   #uncomment this if you want it to delete the text file from directory.  (untested)
        continue
    else :
        print(filename + " is not a txt file, sending to OCR detection AI...")

    #get # of x pixels for yolo file conversion from pixel # to %
    xSize = get_xnum_pixels(filepath)
    #get # of y pixels for yolo file conversion from pixel # to %
    ySize = get_ynum_pixels(filepath)

    #create new text file to store label annotations
    YoloLabelFilepath = filepath.rsplit('.',1)[0] + '.txt'
    image_data = open(filepath,"rb").read()

    #clear variables from last image data
    resp_label.clear()
    resp_conf.clear()
    resp_pos.clear()
    resp_data2write.clear()

    #posts to deepstack custom server and logs result
    response = requests.post("http://localhost:97/v1/vision/custom/yolo5m_best_20210623",files={"image":image_data}).json()     #change port 97 to whatever your deepstack custom server is on, and custom model name
    #print(response)

    #go through all detections in image and store to temporary lists for comparisons
    for detection in response["predictions"]:
        resp_label.append(detection["label"]) 
        resp_conf.append(round(detection["confidence"],2))
        resp_pos.append(detection["x_min"])
      
        #print annotation results for assisted labeling - for future incorporation into model
        #this will print a YOLO type txt file for each image processed, according to classes.txt file
        
        #some maths to get the xy pixel values into YOLO format:  x center, y center, width, height (all as a fraction of total image size)
        xCenter = float(detection["x_min"] + detection["x_max"]) / float(2) 
        yCenter = float(detection["y_min"] + detection["y_max"]) / float(2)
        xWidth = detection["x_max"] - detection["x_min"] 
        yWidth = detection["y_max"] - detection["y_min"] 

        xCenter = format(round(xCenter / xSize, 6), '.6f')
        yCenter = format(round(yCenter / ySize, 6), '.6f')
        xWidth = format(round(xWidth / xSize, 6), '.6f')
        yWidth = format(round(yWidth / ySize, 6), '.6f')

        #check the class list to see what index value the label is that was returned from detection
        ClassValue = classLabels.index(detection["label"])
        #format is :   class (not label) xcenter% ycenter% xwidth% ywidth%
        data2write = str(ClassValue) + " " + str(xCenter) + " " + str(yCenter) + " " + str(xWidth) + " " + str(yWidth) + " "

        # resp_xCenter.append(xCenter)
        # resp_yCenter.append(yCenter)
        # resp_xWidth.append(xWidth)
        # resp_yWidth.append(yWidth)
        # resp_ClassValue.append(ClassValue)
        resp_data2write.append(data2write)

    #sort all stored arrays (label, confidence, YoloData) according to x_min position array  (so it reads left to right like we see it)
    B1=resp_pos
    B2=resp_pos
    B3=resp_pos
    A=resp_label
    C=resp_conf
    D=resp_data2write

    #if NO detections were made, skip to the next file (with debug info)
    if len(C) == 0: 
        print("No Char was found in image " + filename)
        continue
    else:
        pass

    #sort resp_label array
    zipped_lists1 = zip(B1,A)
    sorted_pairs1 = sorted(zipped_lists1)
    tuples = zip(*sorted_pairs1)
    B1, A = [list(t) for t in tuples]    #avoid shadowing the built-in 'tuple'
    #sort resp_conf array
    zipped_lists2 = zip(B2,C)
    sorted_pairs2 = sorted(zipped_lists2)
    tuples = zip(*sorted_pairs2)
    B2, C = [list(t) for t in tuples]
    #sort resp_data2write array
    zipped_lists2 = zip(B3,D)
    sorted_pairs2 = sorted(zipped_lists2)
    tuples = zip(*sorted_pairs2)
    B3, D = [list(t) for t in tuples]

    #debug info only
    print(str(A) + ", " + str(B1) + ", " + str(C) + ", " + str(D))

   
    k=0
    #go through each position in the lists
    for m in B1 :
        i = 0
        if k >= len(B1) :    #>= because the deletions below can shrink the lists past k
            print("max of list reached already.")
            continue
        #check if any CHARs are overlapping according to their xmin value
        upperLim = m + overlapping_threshold
        lowerLim = m - overlapping_threshold
        for n in B1 :
            if i >= len(B1) :    #>= because the deletions below can shrink the lists past i
                print("max of list reached already.")
                continue
            if n < upperLim and n > lowerLim and i != k :
                if C[i] > C[k] :
                    #debug info only
                    print("deleted an overlapping CHAR: '" + str(A[k]) + "', " + str(B1[k]) + ", " + str(C[k]) + ", with yolo info: " + str(D[k]))
                    #remove detection # [n] from all lists
                    del B1[k]
                    del B2[k]
                    del B3[k]
                    del A[k]
                    del C[k]
                    del D[k]

                else:
                    #remove detection # [m] from all lists
                    #debug info only
                    print("deleted an overlapping CHAR: '" + str(A[i]) + "', " + str(B1[i]) + ", " + str(C[i]) + ", with yolo info: " + str(D[i]))
                    del B1[i]
                    del B2[i]
                    del B3[i]
                    del A[i]
                    del C[i]
                    del D[i]              
            else:
                pass
            i=i+1
        k=k+1
    
    #if the results still contain more CHARs than expected, begin deleting the lowest-confidence items
    check_len = len(C)
    while check_len > plate_len_threshold :
        #remove detection # [i] from all lists
        i = C.index(min(C))    
        #debug info only
        print("more detections than expected (plate_len_threhold = " + str(plate_len_threshold) + "), so the following (low conf) CHAR was deleted : '" + str(A[i]) + "', " + str(B1[i]) + ", " + str(C[i]) + ", with yolo info: " + str(D[i]))
        #delete index i
        del B1[i]
        del B2[i]
        del B3[i]
        del A[i]
        del C[i]
        del D[i]
        check_len = len(C)


    #delete any existing yolo text file by that name
    if os.path.exists(YoloLabelFilepath) :
        os.remove(YoloLabelFilepath)
        print(YoloLabelFilepath + " file removed, so a new yolo txt file can be written.")
    else:
        pass
    #write to yolo text file        
    text_file = open(YoloLabelFilepath,"a")
    for line_out in D :
        text_file.write(line_out + '\n')
    text_file.close()
    
    #debug
    print("I wrote all lines to Yolo .txt : " + str(D))
    
    #print the results to the log file (if they look OK) - change output_logfile above to wherever you like, or add info
    if C != [] and len(C)>min_len_thres:
        if min(C) > min_conf_thres :
            text_file = open(output_logfile,"a")
            text_file.write(filename)
            text_file.write("\n")
            text_file.write(str(A))
            text_file.write("\n")
            text_file.write(str(B1))
            text_file.write("\n")
            text_file.write(str(C))
            text_file.write("\n")
            text_file.write(str(D))
            text_file.write("\n")
            text_file.close()
        else:
            pass
    else :
        pass
    
    #if best found character really sucks, notify
    if max(C) < 0.1: 
        print("No Char was found in image " + filename)
    else:
        print("COMPLETED : " + filename + " *** " + str(A) + " *** " + str(B1) + " *** " + str(C) + " *** " + str(D))
 

cjowers

Here's another script to help filter out the bad results. You can then either manually label those bad results, or just edit the yolo files to include the proper character class # (if it is just a character swap).
You can feed them all back into the custom model for better accuracy, but I think it is most valuable to correct the bad ones and feed those in (since the model already detects the good ones fine).


Python:
#QC_AutoYoloResults_v20210713.py
#created by u\shallowstack for the purpose of:  checking the YOLO autolabeling performed using a deepstack custom model for OCR. 
#the program displays the results and allows for human checking, before adding any (good) results to a directory (incorporate with original model training dataset)

# before running this code, adjust the directory variables in the code to match your filesystem.
# There should also be a collection of cropped images (license plates) and yolo files (.txt) with the same filenames inside the input_directory to process.

import requests
import os
import PIL
import sys
import shutil
#import pywinauto
#import win32gui

# Importing Image class from PIL module
from PIL import Image, ImageDraw, ImageFont, ImageShow

#from pywinauto import Application

#get # of x pixels for yolo file conversion from pixel # to %
def get_xnum_pixels(filepath):
    width, height = Image.open(filepath).size
    return width
#get # of y pixels for yolo file conversion from pixel # to %
def get_ynum_pixels(filepath):
    width, height = Image.open(filepath).size
    return height

input_directory = 'C:\\Users\\BlueIrisServer\\Desktop\\AutoTrain'                          #where large dataset of cropped plates live
output_directory_GOOD = 'C:\\Users\\BlueIrisServer\\Desktop\\AutoTrain\\GOOD'               #where we will save the GOOD yolo results to for future training
output_directory_BAD = 'C:\\Users\\BlueIrisServer\\Desktop\\AutoTrain\\BAD'                 #where we will save the BAD yolo results to dispose of or adjust
output_logfile = 'C:\\Users\\BlueIrisServer\\Desktop\\logfile.txt'            #where python will save the log file
anno_output_directory_GOOD = 'C:\\Users\\BlueIrisServer\\Desktop\\AutoTrain\\Annotations_GOOD'               #where we will save the GOOD yolo results to for future training
anno_output_directory_BAD = 'C:\\Users\\BlueIrisServer\\Desktop\\AutoTrain\\Annotations_BAD'

#change this file pointer to wherever your YOLO class file is.  it will sync up the character labels with the class numbering scheme
classYoloFilename = 'C:\\Users\\BlueIrisServer\\Desktop\\classes.txt'

#reads class file, and puts into a list to be searched every time a label is given, with the index value of the list returned so it can be written to the YOLO file.
classLabels = tuple(open(classYoloFilename).read().split('\n'))
print(classLabels)       

#cleanup variables before run, just in case
pos = [0]
pos.clear()
ALL_CHAR = [""]
ALL_CHAR.clear()
YoloLabelFilepath = ""
text_line_list = [0]

i = 0

#goes through entire directory and processes each file
for filename in os.listdir(input_directory):
    #pid_terminal = os.getpid()

    if filename.endswith(".jpg"):
        filepath = os.path.join(input_directory, filename)
    
        #get # of x pixels for yolo file conversion from pixel # to %
        xSize = get_xnum_pixels(filepath)
        #get # of y pixels for yolo file conversion from pixel # to %
        ySize = get_ynum_pixels(filepath)

        #new text file to store label annotations
        YoloLabelFilepath = filepath.rsplit('.',1)[0] + '.txt'
        AnnotationFilepath = filepath.rsplit('.',1)[0] + "_annotation.jpg"

        #create object
        im = Image.open(filepath)
        
        i = 0

        #make sure a Yolo file exists, otherwise skip image file
        if os.path.isfile(YoloLabelFilepath) is False :
            continue
        else:
            pass
        #format of yolo file lines should already be :   class (not label) xcenter% ycenter% xwidth% ywidth%
        #read from file       
        text_file = tuple(open(YoloLabelFilepath).read().split('\n'))
        
        #for every line in the yolo file, inspect and annotate onto image
        for line in text_file :
            #cleanup any previous lines
            text_line_list.clear()
            #inspect and store yolo file lines
            text_line = str.split(line)
            print(text_line)
            text_line_list = list(map(float, text_line))

            #go through each line, and convert to character (from class integer)
            if text_line != [] :
                #store character information
                CHAR = classLabels[int(text_line[0])]
                ALL_CHAR.append(CHAR)
                #store x position for further checking
                pos.append(text_line_list[1])
                #draw a small red tick just below each character position (yolo stores centres, so this is not a full box)
                im1 = ImageDraw.Draw(im)
                im1.rectangle([(text_line_list[1]*xSize,text_line_list[2]*ySize+9.0), (text_line_list[1]*xSize, text_line_list[2]*ySize+11.0)], fill = None, outline = "red")
                #draw text annotations onto image
                im2 = ImageDraw.Draw(im)
                im2.text((text_line_list[1]*xSize, text_line_list[2]*ySize+12.0), CHAR, fill = (34,139,34))
            else:
                pass
        #show image annotated with characters   
        im.resize((xSize*8,ySize*8), Image.LANCZOS).show()    #Image.LANCZOS replaces Image.ANTIALIAS in newer Pillow versions
        #im3 = im.resize((xSize*4,ySize*4))
        #im3 = im.open(im,"r",None)
        #im.thumbnail((xSize*4,ySize*4))
        
        #save and close
        #im3 = ImageShow.show(im, filename)
        
        im.save(AnnotationFilepath, quality=95)         #dont use 100
        #im3.close()
        im.close()
        #show main terminal for user input
        #app = Application().connect(process=pid_terminal)
        #app.top_window().set_focus()
        
        #print result for debugging
        print(pos)
        print(text_file)
        
        #ask for human QC
        resultQC = input("Check if result is good - 1, or bad - 0...   (or any other key to exit)")
        #if bad
        if resultQC == "0" :
            #move to the BAD folders
            print("its bad, so moved file to BAD directory.")
            os.rename(YoloLabelFilepath,output_directory_BAD + "\\" + filename.rsplit('.',1)[0] + ".txt")
            os.rename(filepath,output_directory_BAD + "\\" + filename)
            os.rename(AnnotationFilepath,anno_output_directory_BAD + "\\" + filename.rsplit('.',1)[0] + "_annotation.jpg")
        else:
            #if good
            if resultQC == "1" :
                #move to the GOOD folders
                print("its good, so moved file to GOOD directory.")
                os.rename(YoloLabelFilepath,output_directory_GOOD + "\\" + filename.rsplit('.',1)[0] + ".txt")
                os.rename(filepath,output_directory_GOOD + "\\" + filename)
                os.rename(AnnotationFilepath,anno_output_directory_GOOD + "\\" + filename.rsplit('.',1)[0] + "_annotation.jpg")
            else:
                print("Something went wrong")
                break
        
        #close the shown plate image manually

        print(ALL_CHAR)

        #clean up variables
        pos.clear() 
        ALL_CHAR.clear()   

        i = i + 1
        #break  #used to debug for only 1 cycle
    else:
        pass
    
    # keeping these txt manipulation calls below for my reference... for future editing of single characters in the yolo file
    # if X != [] and len(X)>3:
    #     if min(confidence) > 0.6 :
    #         text_file = open(output_logfile,"a")
    #         text_file.write(filename)
    #         text_file.write("\n")
    #         text_file.write(str(X))
    #         text_file.write("\n")
    #         text_file.write(str(confidence))
    #         text_file.write("\n")
    #         text_file.close()
    #     else:
    #         pass
    # else :
    #     pass
    

    #cleanup variables for next image processed
    pos.clear()
 

bignose3

I hate to ask, as I guess I should be able to work this out for myself, but it takes so long to load & train that hopefully someone can advise quickly.

Slowly implementing the ALPR code above - great work, it would have taken me a year to get that far.

Test & Train folders: are they meant to be the same? I read Train e.g. 300 & Test 30, but the video tutorial seems to show the same files, and the docs say to apply the boxes to the images in train & then casually say to do the same for test. If you had drawn boxes in 300 images, why do basically the same for the test folder - surely just duplicate/copy? Are they meant to be different?
Also, when uploading, the classes.txt is always in the Train folder but needs to be in the root to run. I guess that's correct, but it seemed odd that nowhere mentions you have to copy it to the root.

Do you have to use the clone each time? I have, because it won't run otherwise, but I wondered if I was doing something wrong; once trained I guess it doesn't have to be done often anyway.

Not done it since, but it was about 45 minutes of training. I came back to the PC & none of the folders were anywhere to be seen - the message said complete & in the Train-... folder, but no folders. Anyway, just in case this is something I can avoid in the future; I would hate to put in many hours and find it gone again.

Thanks for all your hard work, I think I am close, just need to sort the OCR solution.
 

cjowers

Test & Train folders? are they meant to be the same, I read Train e.g. 300 & Test 30
As far as I understand it, the test and train folders should be different and contain different files (I just labeled everything and then moved 10% of the files into a test folder). The train folder contains the 'inputs' for the model. For each iteration of the training process an algorithm is created, and that algorithm is run on the files in the test folder (these shouldn't have been in the train folder, otherwise your model will think it is much better than it is). The algorithm's result is compared with your manual labeling of the test folder, and from that information a new algorithm is created for the next iteration (probably better, possibly worse), and so on for hundreds of iterations, until the iteration's result is as close as possible to your labeled results in the test folder. Whenever an iteration surpasses the others in accuracy, its weights are written to the best.pt file (last.pt is the latest iteration's weights, which may not be the best). You need to download the best.pt file shortly after training completes (see below).
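
If it helps, the 'move 10% into the test folder' step is easy to script. A rough sketch (it assumes each labeled .jpg has a matching YOLO .txt beside it, and the folder names are just examples):

Python:
#rough sketch: move a random ~10% of labeled image/label pairs from train to test
import os
import random
import shutil

train_dir = 'J:\\BlueIris\\dataset\\train'    #example paths - change to suit
test_dir = 'J:\\BlueIris\\dataset\\test'

images = [f for f in os.listdir(train_dir) if f.lower().endswith('.jpg')]
random.shuffle(images)

for img in images[:len(images) // 10]:               #~10% of the set
    label = img.rsplit('.', 1)[0] + '.txt'           #matching YOLO label file
    shutil.move(os.path.join(train_dir, img), os.path.join(test_dir, img))
    if os.path.exists(os.path.join(train_dir, label)):
        shutil.move(os.path.join(train_dir, label), os.path.join(test_dir, label))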

Do you have to use the clone each time,
Not sure I understand which clone you are referring to - can you explain more?

Not done it since but was about 45 minutes training, I came back to PC & none of the folders were anywhere to be seen, the message saying complete & in Train-..
If you are referring to the Google Colab training notebook doing this, I am fairly sure it is due to a timeout, as I experienced this often. Without a paid subscription you need to set up your epochs to end before the session expires (either 12 or 24 hours), and you also need to download the completed folders shortly after training finishes (<1 hr); otherwise it disconnects from the runtime and all data is lost (which sounds like what happened here, since it at least finished). It is very frustrating and it happened to me several times. The best option is to run when you can connect to a GPU runtime (not often, it seems, but sometimes you can, and I think you have); then you at least know it will complete before the expiration. Then look at how long an epoch takes and give yourself an estimated time to be back at the PC.
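
One habit that saved me a few times: as soon as training finishes, zip up the results and pull them down before the runtime recycles. A sketch of the Colab cell (runs/train is the usual yolov5 output location - adjust if your notebook differs):

Python:
#Colab cell - grab the trained weights before the runtime disconnects
!zip -r results.zip runs/train
from google.colab import files
files.download('results.zip')     #or just download runs/train/exp/weights/best.pt directly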

Hope this helps.
 

bignose3

Excellent explanation - I now understand the Test & Train, and of course it makes sense.

Your code
###python file - Custom_LPR_detection_v20201229###
worked great, even with my poor Test & Train dataset.

Yes, sorry, I did not explain myself very well - I was referring to the Google Colab, and again a great explanation.
Seems you have done all the hard yards - I imagine lots of frustrating hours finding all that stuff out - for a freeloader like me to come along.

Just struggling with the OCR, but it's early days; I've done a lot of Visual Basic programming but am struggling to get on top of Python, even when I have example code to start with.

Thanks again.
 

cjowers

(quoting @bignose3's reply above)
Nice work! Glad to hear it is progressing well.
No shame in freeloading - I've done the same for this (just got an earlier start than you)! And getting some info down on forums, or in code, as we work through it just makes it easier for anyone who comes along next.

Let us know when/where you get stuck - happy to take a look at code, images, etc.
 