Blue Iris and CodeProject.AI ALPR

hopalong

Getting the hang of it
Joined
Apr 19, 2021
Messages
70
Reaction score
34
Location
California
@MikeLud1 Is there a guide or write-up I can reference, besides the BI manual's Camera --> AI section and the CP.AI API reference, that details what I can put in the highlighted fields below for the AI camera settings? I'm trying to get a better understanding of the options and features and make sure things are configured correctly.

1709928609626.png
 

mpl

n3wb
Joined
Nov 6, 2023
Messages
11
Reaction score
3
Location
Germany
I'm running CodeProject.AI on this host with a Dual Edge TPU:

Server version: 2.5.6
System: Windows
Operating System: Windows (Microsoft Windows 11 version 10.0.22631)
CPUs: Intel(R) Core(TM) i3-8109U CPU @ 3.00GHz (Intel)
1 CPU x 2 cores. 4 logical processors (x64)
GPU (Primary): Microsoft Remote Display Adapter (Microsoft)
Driver: 10.0.22621.3085
System RAM: 8 GiB
Platform: Windows


Module 'Object Detection (Coral)' 2.1.4 (ID: ObjectDetectionCoral)
Valid: True
Module Path: <root>\modules\ObjectDetectionCoral
AutoStart: True
Queue: objectdetection_queue
Runtime: python3.9
Runtime Loc: Local
FilePath: objectdetection_coral_adapter.py
Pre installed: False
Start pause: 1 sec
Parallelism: 1
LogVerbosity:
Platforms: all
GPU Libraries: installed if available
GPU Enabled: enabled
Accelerator:
Half Precis.: enable
Environment Variables
CPAI_CORAL_MODEL_NAME = YOLOv8
CPAI_CORAL_MULTI_TPU = True
MODELS_DIR = <root>\modules\ObjectDetectionCoral\assets
MODEL_SIZE = large


Object detection works, but I'm detected as a dog, my wife as an airplane, my child as a teddy bear, and my car as trash. Face processing doesn't work. Why?

1709930969385.png

BI:

1709931006477.png


1709931034977.png

My unknown faces folder is empty.

What's wrong with my setup?
 

morikplay

n3wb
Joined
Mar 8, 2024
Messages
4
Reaction score
0
Location
california
@Nidstang Getting data out of BI5 is straightforward. For me, I configured BI5 to send data via MQTT to Home Assistant. I can then use automations in HA to do more complex actions. MQTT is a pretty standard mechanism for data sharing across a number of home automation platforms.

Below is a screenshot of my Home Assistant dashboard with events listed on the right: View attachment 182298
@actran Rather new to this forum, so please pardon me if the question is rather obvious. I have a few Reolink ONVIF cameras now configured in BI (Windows 11) and Home Assistant. Mike's license-plate 3.0.2 model is verified to work with YOLOv5 6.2 (1.9.1) in the CPAI explorer. The cameras' AI settings are configured like so:
Code:
to confirm: *
to cancel: t
custom-models: ipcam-general,license-plate
mark as vehicle: *
Object detection, including motion triggered by e.g. a person, is registering fine. The MQTT trigger to topic bi/alert/&CAM/motion with a certain payload works fine. What should the MQTT payload (towards Home Assistant, HA) for license plate detection look like? The simplest thing would be to dump the entire data towards HA like so:
Code:
{ "ai_data": &JSON}
But then it consumes unnecessary network bandwidth. Were one to do so regardless, how would one grab the plate information? Below is a snapshot of the raw JSON data potentially generated in the BI --> HA direction:

Code:
[
    {
        "api": "alpr",
        "found": {
            "success": true,
            "processMs": 668,
            "inferenceMs": 598,
            "predictions": [
                {
                    "confidence": 0.97005695104599,
                    "label": "Plate: 9EVG363",
                    "plate": "9EVG363",
                    "x_min": 58,
                    "y_min": 75,
                    "x_max": 186,
                    "y_max": 159
                }
            ],
            "message": "Found Plate: 9EVG363",
            "moduleId": "ALPR",
            "moduleName": "License Plate Reader",
            "code": 200,
            "command": "alpr",
            "requestId": "e980fd40-3b35-49a5-b813-f4c497043a99",
            "inferenceDevice": "GPU",
            "analysisRoundTripMs": 3304,
            "processedBy": "localhost",
            "timestampUTC": "Sun, 03 Mar 2024 19:35:06 GMT"
        }
    }
]
Or is there a more elegant way, e.g. using other BI macros like &MEMO, or something else?
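For what it's worth, if I did dump the whole array as-is via the { "ai_data": &JSON} wrapper above, I imagine a rough MQTT sensor along these lines could pull the plate back out in HA (untested sketch; the topic name is just a placeholder):
Code:
mqtt:
  sensor:
    - name: "LPR Last Plate"
      # placeholder topic - whatever the BI alert action publishes to
      state_topic: "bi/alert/LPR/plate"
      # assumes the payload is the { "ai_data": &JSON } wrapper and &JSON expands to the array above
      value_template: "{{ value_json.ai_data[0].found.predictions[0].plate }}"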

Your and others' guidance is much appreciated. Of course, a big shout-out to @MikeLud1 for making this feasible!
 

actran

Getting comfortable
Joined
May 8, 2016
Messages
807
Reaction score
734
@morikplay If you don't get plate #s consistently, you should try MikeLud1's camera AI configuration: Blue Iris and CodeProject.AI ALPR

On Alert, my MQTT payload is:
Code:
{ "state":"ON", "cam":"&CAM", "plate":"&PLATE", "memo":"&MEMO", "camera_name":"&NAME", "type":"&TYPE", "last_tripped_time":"&ALERT_TIME", "alert_db":"&ALERT_DB"}
Install MQTT Explorer to inspect the messages BI5 sends to your MQTT server:
mqtt explorer.png
 

morikplay

n3wb
Joined
Mar 8, 2024
Messages
4
Reaction score
0
Location
california
@actran thank you! I didn't realize that macros designed for BI's ALPR (not Mike's CPAI license-plate) could be re-used here. Neat. It wasn't obvious in the BI v5 manual. Do you consume the MQTT message as a custom sensor in configuration.yaml or straight up in an automation trigger?

Also, per the following message, is the recommendation to clone the camera (the one ALPR is to be run on) and apply the object:0 setting so as to skip the YOLO model?
@morikplay If you don't get plate #s consistently, you should try MikeLud1's camera AI configuration: Blue Iris and CodeProject.AI ALPR
 

actran

Getting comfortable
Joined
May 8, 2016
Messages
807
Reaction score
734
@morikplay Make sure it is objects:0 not object:0. This turns off default object detection.
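In other words, the custom models line on the ALPR clone ends up reading something like this (the exact model list here is just an example):
Code:
custom-models: objects:0,license-plate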

I have used both approaches in Home Assistant. The easiest is to put a binary sensor like this in configuration.yaml:
Code:
mqtt:
  binary_sensor:
    - name: "Driveway ALPR"
      object_id: "driveway_alpr"
      state_topic: "BlueIris/activity/DrivewayALPR"
      value_template: "{{ value_json.state }}"
      json_attributes_topic: "BlueIris/activity/DrivewayALPR"
      off_delay: 15
      device_class: vibration
***Replace BlueIris/activity/DrivewayALPR with your MQTT topic
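Since json_attributes_topic is set, the plate (and the other payload fields) also land as attributes on that binary sensor, so a template sensor roughly like this can read the plate back (sketch; the entity id follows from the object_id above):
Code:
template:
  - sensor:
      - name: "Driveway Last Plate"
        # reads the 'plate' attribute published alongside the state
        state: "{{ state_attr('binary_sensor.driveway_alpr', 'plate') }}"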

But I also have other situations where an automation is triggered by an MQTT topic and then runs a sequence of actions.
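A rough sketch of that second approach (the topic and notify service here are placeholders, not my actual config):
Code:
automation:
  - alias: "ALPR plate alert"
    trigger:
      - platform: mqtt
        topic: "BlueIris/activity/DrivewayALPR"   # same topic as the binary sensor above
    condition:
      # only act on the ON payload from BI
      - condition: template
        value_template: "{{ trigger.payload_json.state == 'ON' }}"
    action:
      - service: notify.mobile_app_my_phone       # placeholder notify service
        data:
          message: "Plate {{ trigger.payload_json.plate }} on {{ trigger.payload_json.camera_name }}"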
 

morikplay

n3wb
Joined
Mar 8, 2024
Messages
4
Reaction score
0
Location
california
@actran Thank you. Cloning the camera and applying @MikeLud1's settings interestingly yields the same result: no detections.
1710042539097.png
1710042655847.png
1710042672497.png
1710042717534.png
1710042769588.png
1710042799630.png

Static Analysis is checked, but I do not have any configuration in it.
 

rhwd2003

Young grasshopper
Joined
Jul 23, 2023
Messages
44
Reaction score
7
Location
Kentucky
Hello, I am trying to solve two problems with my Blue Iris LPR project:

#1) I am getting an "unable to process this image" error for plates.
#2) I am trying to show the plate number in my alerts log so I can search by the plates it recognizes. Here's my error and setup:

1710080588078.png
1710080618942.png
1710080640986.png
1710080677326.png
Server version: 2.5.6
System: Windows
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: Intel(R) Core(TM) i5-9600K CPU @ 3.70GHz (Intel)
1 CPU x 6 cores. 6 logical processors (x64)
GPU (Primary): NVIDIA GeForce RTX 2070 (8 GiB) (NVIDIA)
Driver: 551.23, CUDA: 12.4 (up to: 12.4), Compute: 7.5, cuDNN:
System RAM: 32 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.5
.NET SDK: Not found
Default Python: Not found
Go: Not found
NodeJS: Not found
Video adapter info:
NVIDIA GeForce RTX 2070:
Driver Version 31.0.15.5123
Video Processor NVIDIA GeForce RTX 2070
System GPU info:
GPU 3D Usage 1%
GPU RAM Usage 1.3 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,223
Reaction score
4,276
Location
Brooklyn, NY
Hello, I am trying to solve two problems with my Blue Iris LPR project:

#1) I am getting an "unable to process this image" error for plates.
#2) I am trying to show the plate number in my alerts log so I can search by the plates it recognizes. Here's my error and setup:

View attachment 189019
View attachment 189020
View attachment 189021
View attachment 189022
Server version: 2.5.6
System: Windows
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: Intel(R) Core(TM) i5-9600K CPU @ 3.70GHz (Intel)
1 CPU x 6 cores. 6 logical processors (x64)
GPU (Primary): NVIDIA GeForce RTX 2070 (8 GiB) (NVIDIA)
Driver: 551.23, CUDA: 12.4 (up to: 12.4), Compute: 7.5, cuDNN:
System RAM: 32 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.5
.NET SDK: Not found
Default Python: Not found
Go: Not found
NodeJS: Not found
Video adapter info:
NVIDIA GeForce RTX 2070:
Driver Version 31.0.15.5123
Video Processor NVIDIA GeForce RTX 2070
System GPU info:
GPU 3D Usage 1%
GPU RAM Usage 1.3 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
It looks like you do not have cuDNN installed. Run install_cuDNN.bat to install cuDNN. After running it, reboot the PC and see if this fixes the issues you are having.

1710081866801.png

1710081914519.png
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
25,270
Reaction score
49,211
Location
USA
Thanks Mike, that did the trick: none of the errors from before are showing anymore. Now I am getting this "Error - 1" after restart...
View attachment 189025
You need to look into how to get those times down. 15 seconds will certainly creep longer over time and will time out.
 

rhwd2003

Young grasshopper
Joined
Jul 23, 2023
Messages
44
Reaction score
7
Location
Kentucky
You need to look into how to get those times down. 15 seconds will certainly creep longer over time and will time out.
Thoughts on how to fix that? Unless there's something I am missing, I don't know how to fix it. Maybe my settings?
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
25,270
Reaction score
49,211
Location
USA
Thoughts on how to fix that? Unless there's something I am missing, I don't know how to fix it. Maybe my settings?
What are the specs of your system?

Processor
amount of RAM
GPU?
 

rhwd2003

Young grasshopper
Joined
Jul 23, 2023
Messages
44
Reaction score
7
Location
Kentucky
What are the specs of your system?

Processor
amount of RAM
GPU?
Here ya go:

Server version: 2.5.6
System: Windows
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: Intel(R) Core(TM) i5-9600K CPU @ 3.70GHz (Intel)
1 CPU x 6 cores. 6 logical processors (x64)
GPU (Primary): NVIDIA GeForce RTX 2070 (8 GiB) (NVIDIA)
Driver: 551.23, CUDA: 12.4 (up to: 12.4), Compute: 7.5, cuDNN: 8.9
System RAM: 32 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.5
.NET SDK: Not found
Default Python: Not found
Go: Not found
NodeJS: Not found
Video adapter info:
NVIDIA GeForce RTX 2070:
Driver Version 31.0.15.5123
Video Processor NVIDIA GeForce RTX 2070
System GPU info:
GPU 3D Usage 1%
GPU RAM Usage 1.3 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168

I am wondering if it's these settings that are causing the delay?
1710093912529.png
Pre-trigger, post-trigger, and analyze one image each 50 ms?
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
25,270
Reaction score
49,211
Location
USA
Oh yeah, you should be well under 1 second. My 8th gen without a GPU is less than 1 second.

How many cameras are you using Code Project on?

Do you have other models loaded?
 

rhwd2003

Young grasshopper
Joined
Jul 23, 2023
Messages
44
Reaction score
7
Location
Kentucky
Oh yeah, you should be well under 1 second. My 8th gen without a GPU is less than 1 second.

How many cameras are you using Code Project on?

Do you have other models loaded?
I have two cameras, both Dahua cameras.

Models or modules? Sorry, just confirming... I have four modules installed:
Face Processing Running/Started
Object Detection (YOLOv5.net) Stopped
Object Detection (YOLOv5 6.2) Running/Started
License Plate Reader Running/Started
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
25,270
Reaction score
49,211
Location
USA
Try stopping the face processing for a bit and let a car pass and see if the numbers drop.

I am running CodeProject on 5 cameras and they are sub 1 second on a machine not as powerful as yours.

Even my 4th gen test computer with 3 cameras running AI is less than 1 second.
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,223
Reaction score
4,276
Location
Brooklyn, NY
I have two cameras, both Dahua cameras.

Models or modules? Sorry, just confirming... I have four modules installed:
Face Processing Running/Started
Object Detection (YOLOv5.net) Stopped
Object Detection (YOLOv5 6.2) Running/Started
License Plate Reader Running/Started
Open a command prompt and run nvcc --version, then post the results like the screenshot below.

1710096975008.png
 