Dedicated License Plate Cam project

Thanks, that's good to know! I tried two cheap illuminators I had, but they were too wide angle to be effective at the distance I have. On the other hand I have a 15 degree G-Series 6x2W illuminator from Scene on its way. I was being silly and ordered the 940nm, just because I wanted to see a 940nm in action, so I figure overall it's no more powerful than yours. I saw some interesting models on their website. The S-H301E-10-IR with its 10 degree beam looks interesting, but I assume just the shipping is going to be expensive for something that heavy.
 
@SyconsciousAu, you nailed it earlier asking me about ABF. At night I have to bring the focus way in to re-focus plates; the drift is too high.

So, when my automation system switches it to night mode and turns the IR on, it's also going to have to change focus. I can't use auto-focus at night without a test plate out there, so this is what I found:

http://192.168.42.26/cgi-bin/devVideoInput.cgi?action=getFocusStatus
Code:
status.Focus=0.475207
status.FocusMotorSteps=968
status.Status=Normal
status.Zoom=0.000000
status.ZoomMotorSteps=0

and http://192.168.42.26/cgi-bin/devVideoInput.cgi?action=adjustFocus&focus=0.75&zoom=0

moved it way out of focus, so I sent:

http://192.168.42.26/cgi-bin/devVideoInput.cgi?action=adjustFocus&focus=0.475207&zoom=0

and it came right back to where it was!!!
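If you want to script that round-trip, here is a minimal sketch (assumptions: Python requests with digest auth, which comes up later in this thread, and a naive key=value parse of the getFocusStatus output):

Code:
import requests
from requests.auth import HTTPDigestAuth

CAM = 'http://192.168.42.26'
AUTH = HTTPDigestAuth('admin', 'password')  # substitute your credentials

# Read the current focus/zoom values from the camera
r = requests.get(CAM + '/cgi-bin/devVideoInput.cgi?action=getFocusStatus', auth=AUTH)
status = dict(line.split('=', 1) for line in r.text.splitlines() if '=' in line)
focus = status['status.Focus']
zoom = status['status.Zoom']
print('saved focus', focus, 'zoom', zoom)

# ...later, re-apply the saved values
requests.get(CAM + '/cgi-bin/devVideoInput.cgi?action=adjustFocus&focus=%s&zoom=%s' % (focus, zoom), auth=AUTH)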

G'day Nayr.

I'm hoping you, or someone, can shed some light on where I am going wrong here, and how you authenticate to the cameras.

If I put http://USER:pASSWORD@<IP>/cgi-bin/devVideoInput.cgi?action=getFocusStatus into the Firefox URL bar I get a warning that I'm about to log in as "user", and clicking OK gives me my output.


If I do the same with http://USER:pASSWORD@<IP>/cgi-bin/devVideoInput.cgi?action=adjustFocus&focus=0.475207&zoom=0 the browser asks me for my username and password. I type them in and then the camera moves as told. How do I authenticate in the URL?

From the command line, using curl -u USER:pASSWORD http://<IP>/cgi-bin/devVideoInput.cgi?action=adjustFocus&focus=0.475207&zoom=0 I get

'focus' is not recognized as an internal or external command, operable program or batch file
'zoom' is not recognized as an internal or external command, operable program or batch file

and I have no idea why.

OS is Windows 10 and camera is the IPC-5431-Z5
 
You need to put quotes around the URL as ampersand is a special character used to separate multiple commands on the same line.
 
So I tried

curl -u user:password " http://<IP>/cgi-bin/devVideoInput.cgi?action=adjustFocus&focus=0.475207&zoom=0"

That eliminates the

'focus' is not recognized as an internal or external command, operable program or batch file
'zoom' is not recognized as an internal or external command, operable program or batch file


response but nothing actually happens at the camera end.

In the browser I get an error when I include the quotes.
 
Quotes would only apply to curl, not the browser. Also, may not matter, but you have an extra space after the first quote.

Looking at the man page for curl, there's a --digest option you may need to specify as Dahua requires digest authentication with recent firmware AFAIK. If that doesn't help, add -v -i to get verbose output and HTTP headers from the camera.
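Put together, the command would look something like this (USER, PASSWORD and <IP> are placeholders):

Code:
curl -v -i --digest -u USER:PASSWORD "http://<IP>/cgi-bin/devVideoInput.cgi?action=adjustFocus&focus=0.475207&zoom=0"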
 
Winner winner chicken dinner!

curl -u user:password --digest "http://<IP>/cgi-bin/devVideoInput.cgi?action=adjustFocus&focus=0.475207&zoom=0" is the go.
 
All of us will run into the issue of non-reflective number plates at night from time to time, which looks something like this.
View attachment 17968

When I built my new number plate recognition setup I wanted to eliminate the unreadable plates, so I put a starlight on the job running a 1/250s shutter at night. The result is this:

View attachment 17969

Combining the IPC-HFW5431E-Z5 with the IPC-HFW5231E-Z5 has worked very well for me.

@SyconsciousAu Nice! So the starlight cam sounds like it would be a better LPR. Are you running both as LPR cameras?
 
here is how I do my switching:
Code:
-- ALPR Day/Night Video Profile Switching
if (mins == timeofday['SunsetInMinutes'] + 60) then
        print("Switching ALPR Camera to Night Profile.")
        commandArray[1]={ ['OpenURL'] = uservariables['camLogin'] .. '@alpr/cgi-bin/configManager.cgi?action=setConfig&VideoInMode[0].Config[0]=2' }
        commandArray[2]={ ['OpenURL'] = uservariables['camLogin'] .. '@alpr/cgi-bin/devVideoInput.cgi?action=adjustFocus&focus=0.495868&zoom=0' }
        commandArray[3]={ ['OpenURL'] = uservariables['camLogin'] .. '@alpr/cgi-bin/configManager.cgi?action=setConfig&VideoInExposure[0][0].Backlight=1' }
        commandArray[4]={ ['OpenURL'] = uservariables['camLogin'] .. '@alpr/cgi-bin/configManager.cgi?action=setConfig&AlarmOut[0].Mode=1' }
elseif (mins == timeofday['SunriseInMinutes'] - 60) then
        print("Switching ALPR Camera to Day Profile.")
        commandArray[1]={ ['OpenURL'] = uservariables['camLogin'] .. '@alpr/cgi-bin/configManager.cgi?action=setConfig&VideoInMode[0].Config[0]=1' }
        commandArray[2]={ ['OpenURL'] = uservariables['camLogin'] .. '@alpr/cgi-bin/devVideoInput.cgi?action=adjustFocus&focus=0.475207&zoom=0' }
        commandArray[3]={ ['OpenURL'] = uservariables['camLogin'] .. '@alpr/cgi-bin/configManager.cgi?action=setConfig&VideoInExposure[0][0].Backlight=0' }
        commandArray[4]={ ['OpenURL'] = uservariables['camLogin'] .. '@alpr/cgi-bin/configManager.cgi?action=setConfig&AlarmOut[0].Mode=0' }
end

here is where you'll find the latest version of my profile switching script: domoticz-scripts/script_time_ipc.lua at master · nayrnet/domoticz-scripts · GitHub

Thanks for sharing this @nayr. I had my LPR dialed in last night and couldn't figure out what changed. I'll be setting up the same sort of thing in Python. I also need to do like you did and manually switch between Day/Night to enhance dusk/dawn plates. I have the 5431 Dahua that goes to 35mm. Hoping to use it for a 90 ft application.

I am going to hide my camera in a false electrical box near the sidewalk. There is a gang of telecomm/electrical boxes there already.
 
OK, I am a believer in the Dahua 5431 4MP with 35mm zoom. This is at around 90-100ft. Camera is at a 20 degree angle to the target.

Shutter: 1/500
Iris: 50
Gain: 0-25
3D NR: 20

And here is the Python code, using the requests package, to adjust it.

Sun Up

Code:
import requests
from requests.auth import HTTPDigestAuth

#Set Profile to 'Day'
url = 'http://192.168.2.205/cgi-bin/configManager.cgi?action=setConfig&VideoInMode[0].Config[0]=0'
r = requests.get(url, auth=HTTPDigestAuth('admin', 'password'))
print r.status_code

#Set Focus & Zoom
url = 'http://192.168.2.205/cgi-bin/devVideoInput.cgi?action=adjustFocus&focus=0.161491&zoom=1'
r = requests.get(url, auth=HTTPDigestAuth('admin', 'password'))
print r.status_code

Sun Down

Code:
#Set Profile to 'Night'
url = 'http://192.168.2.205/cgi-bin/configManager.cgi?action=setConfig&VideoInMode[0].Config[0]=1'
r = requests.get(url, auth=HTTPDigestAuth('admin', 'password'))
print r.status_code

#Set Focus & Zoom
url = 'http://192.168.2.205/cgi-bin/devVideoInput.cgi?action=adjustFocus&focus=0.155280&zoom=1'
r = requests.get(url, auth=HTTPDigestAuth('admin', 'password'))
print r.status_code


upload_2017-5-10_21-48-33.png
 
One thing I noticed when having an automation script change the Day/Night profile and refocus/zoom: you need to add a delay between the profile change and the focus/zoom change. Apparently the IR/profile switch triggers an auto-focus or something, and it overwrites the focus setting sent immediately after it. With a delay, problem solved.
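For anyone scripting this, a minimal sketch of that ordering (same Dahua CGI endpoints and requests/HTTPDigestAuth used elsewhere in this thread; the 5-second delay is just an illustrative value to tune):

Code:
import time
import requests
from requests.auth import HTTPDigestAuth

CAM = 'http://192.168.2.205'           # camera IP used in the examples above
AUTH = HTTPDigestAuth('admin', 'password')

# 1. Switch the video profile (this can kick off the camera's own auto-focus)
requests.get(CAM + '/cgi-bin/configManager.cgi?action=setConfig&VideoInMode[0].Config[0]=1', auth=AUTH)

# 2. Wait for the profile change / auto-focus to settle before overriding focus
time.sleep(5)  # illustrative; tune for your camera

# 3. Now apply the calibrated night focus so it is not overwritten
requests.get(CAM + '/cgi-bin/devVideoInput.cgi?action=adjustFocus&focus=0.155280&zoom=1', auth=AUTH)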

@kevkmartin I will
 
I'm still planning to put some type of cam right on my new mailbox. I have conduit run through it to the back.
A lot of the discussion here is about capturing plates from quite a long range. In this situation I'll be right on top of them and almost parallel to the road, so just a few feet away.
Which camera choices may be best suited for this?
I don't think I'd need a lot of zoom to narrow the FOV to just the plate/car area this close, but I assume I'd want to use some type of varifocal, right?
Most here seem to be using the bullet cams. Any reason something like a 5321 turret would not work well here?
Thanks
 
From my limited experience, I'd get the 4MP Dahua that can do varifocal. Once you have your target spot set, it's nice to be able to refocus it between night and day. It's big, though, compared to some of the smaller bullets. Depending on how you are oriented, you need to be at around a 15-20 degree angle to the plate. So even on a mailbox that could end up being further away than you expected.
 
Even though you won't need a lot of zoom, with more zoom you can move your capture zone further down the road and decrease the effective angle of the plate to the camera, which will reduce motion blur and improve accuracy of ANPR if you choose to run it.
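To put rough numbers on that effective angle, a quick sketch using simple trigonometry (the 15 ft lateral offset is just an illustrative figure, not from this thread):

Code:
import math

offset_ft = 15.0                       # hypothetical lateral distance from the camera to the lane
for distance_ft in (30, 60, 90, 120):  # how far down the road the capture zone sits
    angle = math.degrees(math.atan(offset_ft / distance_ft))
    print('%4d ft -> %4.1f degrees off head-on' % (distance_ft, angle))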

Playing around with the history now. I decided to store the top 10 matches for a single capture. The photo shows each guess, along with whether that guess has been seen before and when.

  • Database: Logs data to SQLite Database
  • Faster Images: Added a Python script that pulls images off the camera at 250ms intervals. Blue Iris runs the script when motion is detected.
  • Reduce Duplicates: A single capture is around 13 images. 5 from Blue Iris and 8 from the Python grabs. The analyzer discards any images that end up with the same best result.
  • Disk Housekeeping: Analyzer does ongoing housekeeping and deletes any files 2+ hours old.
  • Alerts: Added a text file of important plates to track. If there is a match to anything in this file an email alert goes out.

View attachment 18356

That's a brilliant idea. I've been trying something similar with batch files, but using the output from openalpr itself, so that when a plate is matched, a snapshot is pulled from the camera.

It has worked OK with the 4 megapixel .mp4 test files I was using. I was going to run it on the 720P substream but I am having trouble working out what the URL is to pull that stream off the camera. http://USER:pASS@<IP>/cam/realmonitor?action=getStream&channel=1&subtype=1 that I found in the API documentation gives me a 404 error. If anyone knows the URL for substream 2 on a Dahua I'm all ears.

Test1.bat, which looked for the results from alpr, looked like this:

Code:
set _alpr_cmd=alpr -c au -p nsw C:\openalpr_64\samples\anprtest.mp4

FOR /F "tokens=3" %%G IN ('%_alpr_cmd%^|find "results"') DO test2

and test2.bat looked like this:

Code:
For /f "tokens=1-4 delims=/:." %%a in ("%TIME%") do (
    SET HH24=%%a
    SET MI=%%b
    SET SS=%%c
    SET FF=%%d
)
curl -o C:\openalpr_64\samples\ANPR_%date%_%HH24%-%MI%-%SS%.jpg -u USER:PASS --digest "http://<IP>/cgi-bin/snapshot.cgi?"

Which grabbed a time-stamped snapshot.

Your way seems like a much less processor-intensive way of doing it than trying to analyse the stream in real time. I'd love to see, and blatantly plagiarise, your code if you are willing to share.
 
Here is the python code for the three files I use. One analyzes, one does the image pulls from the cam, and the other adjusts the focus at sunrise/sunset. Hopefully this helps some, I am not a professional programmer so you can probably make it a lot better.

----ANALYZER----

Code:
from openalpr import Alpr
from PIL import ImageFont
from PIL import ImageDraw
from PIL import Image
from PIL import ImageChops
import sys
import time
import os
import csv
import getpass
import sqlite3 as lite
import datetime

sqlite3file = 'c:\\sql\\plates.db'

con = lite.connect(sqlite3file)
cur = con.cursor()

EMAILpassword = password = getpass.getpass("Email pass: ")

#Directory to watch for new files to analyze in
watchdir = 'c:/sql/snaps/'
contents = os.listdir(watchdir)
count = len(watchdir)
dirmtime = os.stat(watchdir).st_mtime

alpr = Alpr("us", "C:/OpenALPR/openalpr_32bit/openalpr.conf", "C:/OpenALPR/openalpr_32bit/runtime_data")
if not alpr.is_loaded():
    print("Error loading OpenALPR")
    sys.exit(1)

#Only show top 10 best guesses when analyzing a plate.
alpr.set_top_n(10)


#Email
def send_email(alertmessage, EMAILpassword):
    import smtplib
    gmail_user = "FROM ACCOUNT"
    gmail_pwd = EMAILpassword
    FROM = 'FROM ACCOUNT'
    BCC = ['TO ACCOUNT',]
    SUBJECT = "LPR ALERT" + ' ' + alertmessage
    TEXT = alertmessage

    message = """\From: %s\nBCC: %s\nSubject: %s\n\n%s
    """ % (FROM, ", ".join(BCC), SUBJECT, TEXT)
    try:
        server = smtplib.SMTP("smtp.gmail.com", 587)
        server.ehlo()
        server.starttls()
        server.login(gmail_user, gmail_pwd)
        server.sendmail(FROM, BCC, message)
        server.close()
        print 'successfully sent the mail'
    except:
        print "failed to send mail"


PLATEID = []
OWNERID = []

#Loads plates to get email alerts on from a CSV file that is PLATEID, DESCRIPTION
with open('hotlist.csv') as csvfile:
    readCSV = csv.reader(csvfile, delimiter=',')
    for row in readCSV:
        print row[0] + ' ' + row[1]
        PLATEID.append(row[0])
        OWNERID.append(row[1])


while True:
    print 'Analyzing Files'
    This_Run = []
    #Monitor folder for changes.
    newmtime = os.stat(watchdir).st_mtime
    if newmtime != dirmtime:
        dirmtime = newmtime
        newcontents = os.listdir(watchdir)
        added = set(newcontents).difference(contents)
        #If added is from monitoring the folder for new images to analyze.
        if added:
            for item in added:
                #Analyzes the file for plates.
                results = alpr.recognize_file("c:/sql/snaps/" + item)
                DrawPlates = []
                #Makes sure there are plate(s) detected before proceeding.
                if len(results['results']) > 0:
                    print str(item)
                    #Check to see if there has already been a match on this same plate ID.
                    if str(results['results'][0]['candidates'][0]['plate']) not in This_Run:
                        #This_Run is used to de-duplicate files that have the same best plate match.
                        This_Run.append(str(results['results'][0]['candidates'][0]['plate']))

                        #File name made of the plate guess and file name.
                        saveName = results['results'][0]['candidates'][0]['plate'] + '-' + item

                        #Inserts all the plates into the database.
                        for cand in results['results'][0]['candidates']:
                            DrawPlates.append(str(cand['plate']))
                            cur.execute("INSERT INTO plate_trans (plate_id,photo,seen) VALUES(?,?,?)", (str(cand['plate']), saveName, datetime.datetime.now()))
                            con.commit()

                        #Box adjusts the image. I take off the bottom and top because of other wording in them such as a YIELD sign. I also remove the camera's time overlay from the photo and stamp one on. I leave the overlay on for the Blue Iris recording.
                        box = (-750, 250, 2688, 1320)

                        #Opens up the image for editing.
                        picTxt = Image.open("c:/sql/snaps/" + item).crop(box)
                        draw = ImageDraw.Draw(picTxt)
                        font = ImageFont.truetype("impact.ttf", 34)

                        #drawcount is used to set the starting point and increment line spacing.
                        drawcount = 10
                        #first_seen is the first time the plate was seen in the database.
                        first_seen = ''
                        #Adds the header line and spaces down.
                        draw.text((30,drawcount), 'Plate Recognition Top Matches', font=font)
                        drawcount = drawcount + 50

                        #Adds all the guesses to the image then saves it.
                        for plates2draw in DrawPlates:
                            cur.execute(u"""select * from plate_trans where plate_id = '{0:s}'""".format(plates2draw))
                            first_seen = cur.fetchone()[3]
                            draw.text((30,drawcount), plates2draw + ' First Seen:' + str(first_seen), font=font,)
                            drawcount = drawcount + 50
                        drawcount = drawcount + 200
                        draw.text((120,drawcount), str(datetime.datetime.now()), font=font)
                        picTxt.save("history/" + saveName, quality=100)
                        count = 0

                        #Alert criteria. Looks through all the guesses for a match in the hotlist file. PLATEID and OWNERID are from the CSV earlier up in the code.
                        for row in PLATEID:
                            for plates2draw in DrawPlates:
                                if plates2draw == row:
                                    alertmessage = PLATEID[count] + ' ' + OWNERID[count]
                                    send_email(alertmessage, EMAILpassword)
                            count = count + 1

        contents = newcontents
    #Cleanup Work - Remove Older Files. Change os.remove to print curpath to test run it.
    dir_to_search = 'C:/sql/snaps/'
    for dirpath, dirnames, filenames in os.walk(dir_to_search):
        for file in filenames:
            curpath = os.path.join(dirpath, file)
            file_modified = datetime.datetime.fromtimestamp(os.path.getmtime(curpath))
            if datetime.datetime.now() - file_modified > datetime.timedelta(hours=2):
                os.remove(curpath)
    time.sleep(30)


alpr.unload()


---Python SNAPSHOT File--- (My cars are going 10-15mph so it may not be quick enough for fast cars)
Code:
import cv2
import time

cap = cv2.VideoCapture("rtsp://admin:pASSWORD@192.168.2.205:554/cam/realmonitor?channel=1&subtype=0")
count = 0
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
    # This is the folder the analyzer is monitoring
    cv2.imwrite('c:/sql/snaps/' + str(time.time()) + '.png', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    # This is how many photos to take.
    if count == 8:
        break
    count = count + 1
    # This is the delay between photos.
    time.sleep(0.25)

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()


----Zoom/Focus Script @ Sunset & Sunrise----
Code:
import datetime
import requests
import time
from requests.auth import HTTPDigestAuth

# Hour of local sunrise/sunset (24h clock). Placeholders -- the original post does not show
# where these come from; set them for your location or pull them from a sunrise/sunset lookup.
sunrise_string = '6'
sunset_string = '20'

lastchange = ''

while True:
    try:
        # Current hour of the day
        now = str(datetime.datetime.now()).split(' ')[1].split(':')[0]

        if int(now) > (int(sunrise_string) + 1) and int(now) < (int(sunset_string) - 1):
            if lastchange != 'up':
                #Switches cam to the day profile settings
                url = 'http://192.168.2.205/cgi-bin/configManager.cgi?action=setConfig&VideoInMode[0].Config[0]=0'
                r = requests.get(url, auth=HTTPDigestAuth('admin', 'PASSWORD'))
                print r.status_code
                #Delay for auto focus to sort itself out
                time.sleep(30)
                #Resets the focus to my calibrated setting for day.
                url = 'http://192.168.2.205/cgi-bin/devVideoInput.cgi?action=adjustFocus&focus=0.161491&zoom=1'
                r = requests.get(url, auth=HTTPDigestAuth('admin', 'PASSWORD'))
                print r.status_code
                lastchange = 'up'
                print str(datetime.datetime.now()) + ' sun is up'
        else:
            if lastchange != 'down':
                #Switches the profile in the cam to the Night settings
                url = 'http://192.168.2.205/cgi-bin/configManager.cgi?action=setConfig&VideoInMode[0].Config[0]=1'
                r = requests.get(url, auth=HTTPDigestAuth('admin', 'PASSWORD'))
                print r.status_code
                #Gives time for the auto-focus to stop
                time.sleep(30)
                #Resets the focus to my calibrated setting for night.
                url = 'http://192.168.2.205/cgi-bin/devVideoInput.cgi?action=adjustFocus&focus=0.155280&zoom=1'
                r = requests.get(url, auth=HTTPDigestAuth('admin', 'PASSWORD'))
                print r.status_code
                lastchange = 'down'
                print str(datetime.datetime.now()) + ' sun is down'

        time.sleep(600)
    except:
        print 'crashed'
        time.sleep(600)
 
I haven't wanted to run the screaming beast of a machine capable of real-time ALPR. So far, I've got a simple motion-detecting Python / OpenCV program that takes a 1920x1080 stream at 15 fps and downsamples it to a tiny frame (just 88 pixels across) to do "Optical Flow" motion detection. My camera has a field of view of about 1.5 car lengths, viewing at about 45 degrees to the road. The program keeps a 25-frame buffer of the full-res images, and when a large moving object is detected, it saves three full-res images from each event to disk: one where the car is centered, and one each where it is still nearly all visible but at the right or left frame edge. If the car is moving slowly enough that it is still in the frame after 25 frames, the process repeats and saves more images. Almost always, two out of the three frames from each event contain a usable plate image. Now ALPR can take its time with these selected images, and a tiny "bookshelf-size" fanless PC drawing about 12 watts is already good enough.
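For anyone curious, here is a minimal sketch of that flow under some assumptions: OpenCV's Farneback optical flow on the downsampled frames, a deque as the 25-frame ring buffer, and simply saving the first/middle/last buffered frames rather than the centered/edge selection described above. The RTSP URL and threshold are placeholders, not the poster's actual code.

Code:
import collections
import cv2
import time

cap = cv2.VideoCapture('rtsp://user:pass@<IP>:554/cam/realmonitor?channel=1&subtype=0')
ring = collections.deque(maxlen=25)    # ring buffer of full-res frames
prev_small = None
FLOW_THRESHOLD = 2.0                   # illustrative; tune to your scene

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ring.append(frame)
    # ~88 px wide greyscale copy keeps the optical flow cheap
    small = cv2.cvtColor(cv2.resize(frame, (88, 50)), cv2.COLOR_BGR2GRAY)
    if prev_small is not None:
        flow = cv2.calcOpticalFlowFarneback(prev_small, small, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        if mag.mean() > FLOW_THRESHOLD and len(ring) == ring.maxlen:
            # dump first / middle / last full-res frames from the buffered event
            stamp = int(time.time())
            for tag, idx in (('a', 0), ('b', len(ring) // 2), ('c', -1)):
                cv2.imwrite('event_%d_%s.jpg' % (stamp, tag), ring[idx])
            ring.clear()
    prev_small = small

cap.release()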

With only 3 frames per event to process, I can also play around with slower stuff like YOLO: Real-Time Object Detection ("real-time" only with a GPU; it takes about 25 seconds per image on a single core of a low-power CPU, for the 1000-category classifier). This will pretty reliably detect and accurately localize a bounding rectangle around cars, trucks, and people if they're separate and not overlapping, and often works even if they do overlap. It has recognized a car with as little as 1/8 of the car visible. Maybe half the time, it will also find and recognize bicycles and dogs. Sometimes it will recognize a backpack separately from the person wearing it. The bounding boxes let you program up a nice "thumbnail page" showing just the items of interest at a glance. (The program wasn't trained on the recycling bin, so it doesn't detect it, or sometimes thinks it's a fire hydrant.)
h2_2017-05-16_11.01.52_01_083.jpg
Code:
object    probability   Xmin Xmax Ymin Ymax
------------------------------------------------
car       0.840221     1447  1893   43  394
handbag   0.530781      727   962  623  981
person    0.891188      759  1051  468  1078
car       0.839791      905  1636   22  358
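
This is not the poster's setup (he runs the darknet tool itself), but as a rough sketch, one way to get that kind of per-object output and alert on "person" detections is YOLO through OpenCV's dnn module. The config/weights/class-name file paths and the 0.5 threshold are placeholders, and it assumes a reasonably recent OpenCV build; the input image is the example frame attached above.

Code:
import cv2
import numpy as np

# Placeholder model files: a YOLO .cfg/.weights pair plus the matching class-name list
net = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')
with open('coco.names') as f:
    classes = [line.strip() for line in f]

img = cv2.imread('h2_2017-05-16_11.01.52_01_083.jpg')
h, w = img.shape[:2]
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

for out in outputs:
    for det in out:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5 and classes[class_id] == 'person':
            # det[0:4] are centre x/y and width/height, normalised to the image size
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            print('person %.2f at x=%d..%d y=%d..%d' %
                  (confidence, cx - bw / 2, cx + bw / 2, cy - bh / 2, cy + bh / 2))
            # hook an email/notification here, like the hotlist alert earlier in the thread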
 
That is pretty cool. That could be useful to cut out just the car for analysis.

Or just do motion alerts on "Person" for other cameras.
 
Do you have a Python example of that? I'd like to try to trigger alerts on person detections.