Blue Iris and CodeProject.AI ALPR

It should be like the below
View attachment 153469

Thanks again Mike, that seems to have done the trick now it's ignoring static plates as expected.

For whatever reason, lately my plate make times seem to be a lot longer than before: ~800 msec on average, up to 1500 msec when it is reading a second static plate. When I test images in the CPAI explorer, it sometimes takes 400-500 msec at first, but drops to ~150 msec after "warming up". The ALPR calls from BI don't seem to get warmed up the same way. It could also be due to me splitting off the combined model for my LPR cam to a clone cam (so plate and combined are triggered simultaneously). The rest of the AI modules are returning fast as expected (~75 msec with the P400 GPU, including the new LPR clone running combined). Whenever Ken gets around to streamlining the ALPR calls (picking the highest non-static OCR return), it should improve some. However, I'm wondering if my apparently "never warming up" plate module setup needs work.

Edit: Also, I've noticed consistent problems with the OCR recognizing a "4" as an "L", even on a really clean capture. Dark plate frames seem to make the problem worse. The California 4 is similar to an L, but I'm pretty sure the model can be trained to fix this. Let me know if sending images will help.
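As a standalone idea (nothing to do with BI/CPAI internals), if you keep a watch list of expected plates, confusion pairs like 4/L can be tolerated when matching OCR output against it. The confusion table below is an assumption seeded from the 4-vs-L case in this post:

```python
# Hypothetical post-processing sketch: compare an OCR'd plate string against
# expected plates, treating known OCR confusion pairs as equivalent.
CONFUSABLE = {"L": "4", "4": "L", "O": "0", "0": "O", "I": "1", "1": "I"}

def chars_match(a: str, b: str) -> bool:
    """True if the characters are equal or a known OCR confusion pair."""
    return a == b or CONFUSABLE.get(a) == b

def plates_match(ocr: str, expected: str) -> bool:
    """Compare an OCR result to an expected plate, tolerating confusions."""
    if len(ocr) != len(expected):
        return False
    return all(chars_match(a, b) for a, b in zip(ocr, expected))

# "7ABCL23" read by the OCR vs. the real plate "7ABC423"
print(plates_match("7ABCL23", "7ABC423"))  # True
print(plates_match("7ABCL23", "7ABC999"))  # False
```

This only helps when matching against a known list; it won't fix the underlying model, which is what retraining on more 4/L examples would address.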
 
One trick you can do to keep your P400 from dropping into a low P-state is to make the below change in your Nvidia Control Panel. This should stop the warming-up behavior.

1675795957146.png
 
Thanks for the tip... done. To follow up: shortly after my post, my BI PC crashed. After the reboot, I had a recurrence of an earlier problem where AI started but BI was not properly communicating with CPAI. I went through and did what I had done to fix this before:

Is anyone else still having issues with modules not working after reboot? There have been at least 3 BI updates since I posted about having the problem. Just checking in to see if maybe I need to reinstall some stuff.

[edit: A follow-up to help others with the same issue:

I got no response to this post, which I think confirms it was a problem on my end. I was finally able to get AI working after reboot by:
1) Set the CPAI service to start automatically
2) Disable AI in BI settings
3) Disable the BI service in BI settings
4) Close BI
5) Run a "Repair" using the CPAI 2.0.7 installer
6) Reboot Windows
7) Enable AI in BI settings
8) Enable the BI service in BI settings
9) Restart BI

After this, AI is now working after reboot, without me having to stop/start AI using BI settings. This was with BI 5.6.9.6.]

...but this time it didn't work. After rebooting, the problem keeps returning, and I have to stop/start AI using BI settings for it to work after every reboot. However, make times are now stellar all around: consistent 120 msec plates and 70 msec on other models. Not sure what's up... maybe my slow make times had something to do with the reboot issue?
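For anyone wanting to script the service-related parts of the repair sequence above from an elevated Command Prompt, a rough sketch follows. The service names "CodeProject.AI Server" and "BlueIris" are assumptions; confirm yours with `sc query` or services.msc first, and note that BI normally manages its own service from its settings page, so treat this purely as an illustration:

```shell
:: Step 1: set the CPAI service to start automatically.
:: Service names are assumptions - verify with `sc query` first.
sc config "CodeProject.AI Server" start= auto

:: Disable, then later re-enable, the Blue Iris service
:: (normally done from the BI settings page instead).
sc config "BlueIris" start= disabled
sc config "BlueIris" start= auto
```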
 
Also seeing the issue where using GPU returns no results but using CPU works fine.

Code:
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs:             1 CPU x 4 cores. 8 logical processors (x64)
GPU:              Quadro K620 (2 GiB) (NVidia)
                  Driver: 528.02 CUDA: 12.0 Compute: 5.0
System RAM:       16 GiB
Target:           Windows
BuildConfig:      Release
Execution Env:    Native
Runtime Env:      Production
.NET framework:   .NET 7.0.2
System GPU info:
  GPU 3D Usage       0%
  GPU RAM Usage      1.4 GiB
Video adapter info:
  NVIDIA Quadro K620:
    Adapter RAM        2 GiB
    Driver Version     31.0.15.2802
    Video Processor    Quadro K620
  Microsoft Remote Display Adapter:
    Adapter RAM        0
    Driver Version     10.0.19041.2075
    Video Processor   
  Intel(R) HD Graphics 530:
    Adapter RAM        1,024 MiB
    Driver Version     31.0.101.2115
    Video Processor    Intel(R) HD Graphics Family
Global Environment variables:
  CPAI_APPROOTPATH = C:\Program Files\CodeProject\AI
  CPAI_PORT        = 32168
 
Not 100% sure if it's the cause of your problem, but it appears you are running a newer Nvidia driver than recommended. Refer to this post:

One thing I see is that you are using a newer Nvidia driver than @truglo. When CUDA 11.7 was released it was using Nvidia GPU driver 516.94; maybe try downgrading to this version, link is below

 
@MikeLud1, I just observed some alerts that should have been cancelled but went through:

Untitled.jpg

I'm not sure if this is a side-effect of using ,** in 'to confirm'. It's working well for ignoring static plates though.
 
I tried downgrading driver versions but that did not change anything for me. The test image license plate can only be detected with the CPU and not the GPU. Could this be an issue where the GPU's compute capability isn't high enough, i.e. the model relies on features only available in compute 6, 7, or 8?

Code:
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs:             1 CPU x 4 cores. 8 logical processors (x64)
GPU:              Quadro K620 (2 GiB) (NVidia)
                  Driver: 516.94 CUDA: 11.7 Compute: 5.0
System RAM:       16 GiB
Target:           Windows
BuildConfig:      Release
Execution Env:    Native
Runtime Env:      Production
.NET framework:   .NET 7.0.2
System GPU info:
  GPU 3D Usage       0%
  GPU RAM Usage      1.8 GiB
Video adapter info:
  NVIDIA Quadro K620:
    Adapter RAM        2 GiB
    Driver Version     31.0.15.1694
    Video Processor    Quadro K620
  Intel(R) HD Graphics 530:
    Adapter RAM        1,024 MiB
    Driver Version     31.0.101.2115
    Video Processor    Intel(R) HD Graphics Family
Global Environment variables:
  CPAI_APPROOTPATH = C:\Program Files\CodeProject\AI
  CPAI_PORT        = 32168
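On the compute-capability question: the log above shows the K620 at compute 5.0. Capabilities compare as (major, minor) tuples, so a quick sanity check is easy to sketch. The 6.0 threshold below is purely a hypothetical minimum for illustration, not a documented CPAI requirement:

```python
# Minimal sketch: compare a GPU's compute capability against a minimum.
def meets_min_compute(cap: tuple, minimum: tuple) -> bool:
    """Tuples compare lexicographically, so (5, 0) < (6, 0)."""
    return cap >= minimum

k620 = (5, 0)  # Quadro K620, from the system log above
print(meets_min_compute(k620, (3, 5)))  # True
print(meets_min_compute(k620, (6, 0)))  # False
```

On a box with PyTorch installed, `torch.cuda.get_device_capability(0)` returns the same (major, minor) tuple for the first GPU.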
 
I have been putting off updating. I am currently on BI 5.6.8.4 and CPAI 1.6.8-Beta. I have had LPR up and running pretty well for a while now. If I choose to update either or both, do I have to make any changes if I want to keep using my current setup and plate recognizer?
 
To the best of my knowledge you don't have to change anything in your existing setup, as there is a new option you click on the BI AI settings page to enable the CodeProject.AI LPR integration. You can always downgrade if something does change.
 
I tried today to spin up a Linux VM with the same GPU and tried out the Docker NVIDIA stuff.
So far: YOLO and faces work on the GPU, but ALPR only starts as CPU and doesn't respond, even when disabling the GPU. BI reports Error 200 for plates there.
Tried with the Nvidia driver in Bookworm (Debian) and the newest from Nvidia. Both worked, except for the ALPR stuff.
For now I started the Windows VM again, where ALPR works in CPU mode. Maybe it helps. If I can do something useful for testing in that environment, let me know.

Edit: Just installed an Nvidia 515.x driver to get explicitly CUDA 11.7; the behaviour is the same.
The Bookworm driver was 520.56.06 and the newest 525.89.02.

Edit2: ALPR is now working on CPU. I remembered that the default KVM processor model was missing AVX (the CPU extension it needs to work). Changed that, and it began to work.
The GPU is still the same: it doesn't start in GPU mode.
HTH
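On the AVX point: the stock builds of the OCR backend used for plate reading are generally compiled to require AVX, so a KVM CPU model that masks it (e.g. a generic qemu64 model) can break the module while everything else runs. A quick way to check inside the guest is the flags line of /proc/cpuinfo; a small sketch with a hard-coded sample string:

```python
# Sketch: look for a CPU flag (e.g. "avx") in /proc/cpuinfo-style text.
def has_cpu_flag(cpuinfo_text: str, flag: str) -> bool:
    """Scan for the flag in the first 'flags' line of the text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return flag in line.split()
    return False

# On a real guest you would use: text = open("/proc/cpuinfo").read()
sample = "flags\t\t: fpu vme de sse sse2 ssse3 avx avx2"
print(has_cpu_flag(sample, "avx"))   # True
print(has_cpu_flag(sample, "sse4"))  # False
```

In libvirt/KVM, setting the CPU mode to host-passthrough (or a named model that includes AVX) is the usual fix.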
 
Quick question for you: do you have two GPU cards in your system, one for Blue Iris and one for CodeProject? Originally, thanks to this group, Docker on Linux was suggested, but my Blue Iris is on a Windows platform where Docker will not work.

By the way, you may get that error because you don't have the custom model in the directory; I would check on it. I'm learning as well.

Sent from my SM-S906U using Tapatalk
 
I updated just BI this morning before I left for work (dumb move). I noticed just now at lunch that, oddly, the only thing working trigger-wise is the LPR.
 
I too updated this morning. I still have to start/stop AI using BI settings to get AI working after a reboot, and ALPR, combined, packages, and delivery all work fine after that. The changes Ken just released for AI results selection are working much better for me. It is approaching the accuracy of the online stuff now.
 
To eliminate this step, you might try changing the Blue Iris service 'Startup Type' to 'Automatic (Delayed Start)'. This delays starting the Blue Iris service by a default 120 seconds, thus ensuring that the CP.AI service is already running.
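The same change can be made from an elevated Command Prompt; "BlueIris" is an assumed service name here, so verify yours with `sc query` or services.msc first:

```shell
:: Switch the Blue Iris service to delayed automatic start,
:: then display its configuration to confirm the change.
sc config "BlueIris" start= delayed-auto
sc qc "BlueIris"
```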
 
Docker exists for Windows as well, so why wouldn't that work? I'm actually running it like that on my setup, running the GPU version… or am I misinterpreting you in some way?
 
Thanks for that! Man, I tried all kinds of settings for the CPAI server startup, but setting the BI service to delayed start does seem to be working reliably.
 
BTW, Googling this setting will suggest that there is a way to tune the 120-second delay for individual services (via a registry setting); however, other sites claim it is a global setting only. My testing supports the latter.
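For reference, the registry value those sites usually point at is AutoStartDelay (a DWORD, in seconds) under the Control key; consistent with the testing above, it appears to act globally on all delayed-start services. Treat the exact path as something to verify on your own system, and back up the key before editing:

```shell
:: Global delay (seconds) for ALL Automatic (Delayed Start) services.
:: Verify the path on your system and export a backup first.
reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v AutoStartDelay /t REG_DWORD /d 180 /f
```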