CodeProject.AI Version 2.5

Wow.
I went from the previous version to the newest, 2.5.1 maybe. I've been changing my GPU clocks with nvidia-smi commands and noticed a little drop in performance with the latest version. I prefer the nvidia-smi method: better electricity usage versus the full-boogie method of "Prefer maximum performance" in the Nvidia settings.
 
If you want to get some more performance from your Nvidia GPU, change the setting below to "Prefer maximum performance".

[Screenshot: Nvidia settings with "Prefer maximum performance" selected]
 
That maxes out the clock and memory speeds and adds heat and energy consumption. P-states are what you want to change: Nvidia has P0-P15 states, with P0 being the fastest. I run at P5 (810/810), which draws basically idle watts, with detection times within 5 to 10 ms of the card running full throttle. Using nvidia-smi to change the clock and memory speeds is more efficient in my testing. If anyone wants the commands I'll post them.
 

Are you guys just talking about OCing?

 
Hey,
No overclocking, just changing the idle speeds of the GPU. CodeProject.AI doesn't constantly load the GPU; most of the time the GPU is in an idle state, and that slows detection times down. What @MikeLud1 is showing allows the GPU to run at full power without loading it, so the GPU is ready to go when CodeProject.AI wants to use it. Using nvidia-smi allows finer control of the power states. It's all about getting the GPU ready for when CodeProject.AI calls on it for work. If you have an Nvidia GPU, nvidia-smi gets installed with the drivers. Also, DirectML is faster than using CUDA in my testing.
For anyone wanting to try it:
If you have activated Max performance in the Nvidia settings, revert back to the Normal setting.
Open PowerShell as admin.
Type in nvidia-smi. It'll give you some info on the GPU; look for the P-state.
Then try this:
nvidia-smi --lock-gpu-clocks=810
nvidia-smi --lock-memory-clocks=810
The memory and core clocks don't have to be 810. You're more than welcome to try different speeds, all while using the benchmarks in CodeProject to see the gains or losses.
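If it helps, here's a minimal sketch of wrapping those steps in a script. The 810/810 defaults are the poster's numbers; the dry-run behavior and the `--reset-gpu-clocks`/`--reset-memory-clocks` undo commands are my additions (locking clocks usually needs admin rights, so it stays a dry run unless you opt in).

```python
import shutil
import subprocess

def lock_clock_commands(core_mhz=810, mem_mhz=810):
    """Build the nvidia-smi commands from the post above (810/810 is the
    poster's near-idle setting; tune and re-benchmark for your card)."""
    return [
        ["nvidia-smi", f"--lock-gpu-clocks={core_mhz}"],
        ["nvidia-smi", f"--lock-memory-clocks={mem_mhz}"],
    ]

def reset_clock_commands():
    # Undo the locks and return clock management to the driver.
    return [
        ["nvidia-smi", "--reset-gpu-clocks"],
        ["nvidia-smi", "--reset-memory-clocks"],
    ]

def apply(commands, dry_run=True):
    """Print or run each command; only executes when nvidia-smi exists
    and dry_run is explicitly disabled."""
    for cmd in commands:
        if dry_run or shutil.which("nvidia-smi") is None:
            print("would run:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)

apply(lock_clock_commands())
```

Run `apply(lock_clock_commands(), dry_run=False)` from an elevated prompt to actually lock the clocks, and `apply(reset_clock_commands(), dry_run=False)` to revert.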
 

Attachments

  • Screenshot 2024-04-13 at 5.07.21 PM.png
    Screenshot 2024-04-13 at 5.07.21 PM.png
    1.1 MB · Views: 19
I can't get 2.6.2 to install successfully; it eventually times out after about an hour+ of displaying "installing object detection Yolo v5 2.6.2" (I've tried 3 times). I had odd issues with 2.5.6 missing many AI detections. I'm going to try rolling back to 2.5.1, which was my last stable version. I am following the upgrade guide on page 1. Any tips or suggestions appreciated.
 
Check your firewall to see if it is blocking the module from downloading and installing.
 
Getting this weird image in the last two-ish frames of a cancelled alert, in the AI view of the .dat file.
An AI or BI issue?
What can I do/check?
Everything had been working fine for a long while before today.
BI 5.8.9.5, CP 2.6.2.0
[Screenshot: the weird frame from the AI view]
 
I think I found a couple of other bugs as well.

It also doesn't seem to be using the model I configured. I configured "EfficientDet-Lite" and the Medium model size (as you can see in the previous post's picture), but when I look at the times in the logs they are too fast (at first I was happy to see faster times, LOL). I did some tests in CPAI Explorer and it seems as though it's still using MobileNet SSD. I tried stopping and starting a few times, as well as setting the config again from the gear icon. The logs always showed what I picked, but the times and inference labels are definitely not "EfficientDet-Lite". If I use CPAI Explorer and switch the models and test there, the times and inference labels match what I expect: in other words, "EfficientDet-Lite" is longer and more accurate for me. Forcing the model in Explorer shows the longer times and accurate labels; using MobileNet SSD, I see the faster times and incorrect labels.

Here is my BI AI settings page:
BI AI Settings page.JPG

Another issue: if I enable the Dual TPU, it crashes shortly after enabling it. I'm going to do more troubleshooting to see if it is hardware or software, since I just replaced a single TPU with the Dual TPU (I have 2 more sets I'm going to swap around).

One of the problems is that I'll get an error saying there has been no work for 60.0 seconds, when the log is clearly showing activity before and after the message at the same second.

CPAI no work error message.JPG

The other problem is an HIB Error followed by a crash.

Dual TPU causing a problem.JPG

No idea what that is.

FYI, this is a Dual TPU in a PCIe adapter. If you need any other info let me know and I can provide it. I won't have time to swap the hardware till later but I'll provide an update then.
 
Thanks for the bug reports.
I know there has been some reworking of which models get run and when, and of file names. Hopefully that is fixed in the next release. Do you see any log lines like "Loading pci:0: <filename>"? That should tell you exactly which model file is being read. If it says it's an efficientdet file but you still think you're seeing mobilenet performance, the .tflite file may be mis-named.
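A quick way to check is to scan the server log for those lines. This is a hypothetical helper: the "Loading pci:0: <filename>" format is taken from the log line quoted above, and the sample filename is made up for illustration.

```python
import re

# Matches lines like "Loading pci:0: some_model.tflite"
# (format taken from the log line quoted above; the device id may vary).
LOAD_RE = re.compile(r"Loading\s+(\S+):\s+(\S+\.tflite)")

def loaded_models(log_text):
    """Return (device, model_file) pairs found in the log text."""
    return LOAD_RE.findall(log_text)

# Hypothetical sample log excerpt for illustration.
sample = """\
12:00:01: ObjectDetection (Coral) started
12:00:02: Loading pci:0: efficientdet_lite2.tflite
"""
print(loaded_models(sample))
```

If the filename printed here doesn't match the model you configured, that points at the mis-named-file scenario described above.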

I think that the watchdog problem(s) may be fixed. Someone reported a problem earlier, so I fixed a few things in that code; hopefully that takes care of it.

The HIB Error may be a Windows problem? Do you have a USB TPU plugged in? I haven't seen it on my linux system, but I'll keep an eye out.

I need to get a flakier setup to be able to identify & debug these things locally. I'm thinking of buying a USB dongle just for that reason, since I've heard they're relatively bad...
 
Hi everyone, anyone have any ideas how to get ALPR working with a GPU? I've been trying to troubleshoot the ALPR module for days now. I also updated the SDK to 8.0.104, and it shows up in System Info now, along with the Quadro drivers. I tried uninstalling and reinstalling everything, even Win11, but it's still only using the CPU.
 
Thanks for the updates.

So I searched the log for "Loading pci:0" and didn't find anything.

Just in case I tried searching for the following as well:
"pci" - only saw a few lines and it was driver related.
"loading" - only saw a bunch of "downloading" but nothing else

Also tried searching "EfficientDet-Lite" and "MobileNet SSD" but mostly the downloading and expanding and didn't see anything that would relate.

At the time of the errors I had the Dual TPU installed with the PCIe adapter. I originally had the single M.2 M-key TPU. I don't have the USB version, based on recommendations I saw before I purchased them.

The HIB error didn't make sense either, and like I said that may be a local issue, since I just swapped out the single TPU for the Dual TPU. I did have some more time to troubleshoot now and something is whacky. FYI - I actually ordered two Dual TPUs and adapters so I can place the 2nd one in a different PC later. The 1st one showed in Control Panel as 2 separate TPUs and gave the HIB error. I put the 2nd card in the same slot (I don't have any extra PCIe slots) and it didn't get detected at all, not even in Control Panel. Put the first card back and it was no longer detected either. I put the original single TPU card back in (same PCIe slot, again using an M.2 to PCIe adapter) and it got detected and is working. Put the 1st Dual TPU back in; still not detected. :banghead: So something local on my end is messed up. :banghead:

I put the single TPU card back in and left it for now (as originally configured before the Dual TPU).

I'm researching moving to a virtualized environment so I can quickly revert if any issues. Part of that would be using Docker for CPAI in Linux and only BI in Windows. I need to just bite the bullet and do it already. LOL
 
Your GPU is Compute 5.2; the ALPR module currently only supports Compute 6 and greater.

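If you want to check your card's compute capability from the command line, recent drivers let nvidia-smi report it directly. A small sketch, assuming `nvidia-smi --query-gpu=compute_cap --format=csv,noheader` is available on your driver version (older drivers may lack this query, in which case Nvidia's published CUDA GPUs list has the same information):

```python
import shutil
import subprocess

def parse_compute_caps(csv_text):
    """Parse nvidia-smi's compute_cap CSV output (e.g. "5.2\n")
    into one float per GPU."""
    return [float(line.strip()) for line in csv_text.splitlines() if line.strip()]

def alpr_supported(caps, minimum=6.0):
    # Per the reply above, ALPR currently needs Compute 6 or greater.
    return [cap >= minimum for cap in caps]

# Only query the hardware if nvidia-smi is actually installed.
if shutil.which("nvidia-smi"):
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=compute_cap", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(alpr_supported(parse_compute_caps(out)))
```

A Quadro reporting 5.2, as in the post above, would print `[False]` here.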
 
If you do go with Linux for CPAI, I’ve been developing the TPU code on Ubuntu 20.04 and things have been relatively rock solid. I had some problems with some of the tooling under 22.04.
 