CodeProject.AI Version 2.5

Just run the ipcam-general one and tell it to only IDENTIFY person. All that model has is person and vehicle. It is a fast model.
Thanks. So I downloaded general.pt and copied it into C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5Net\custom-models but now it won't start up... hmmm... I removed the general.pt file and it boots up. Any ideas?
 
You don't need to download anything separately; ipcam-general comes with the CodeProject.AI download.
This is what I see in BI...

Edit: I realised you had to remove the older custom models from the folder and leave only the general one - it now works.

[screenshot]
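If it helps, here is a minimal sketch of that cleanup: it moves every custom model except ipcam-general.pt into a backup subfolder rather than deleting anything. The path, the kept file name, and the backup folder name are assumptions for illustration; adjust them to your install and stop the CodeProject.AI service before touching the folder.

```python
# Sketch only: tidy the YOLOv5 .NET custom-models folder so that only
# ipcam-general.pt remains. Paths and file names are assumptions.
from pathlib import Path
import shutil

CUSTOM_DIR = Path(r"C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5Net\custom-models")
KEEP = {"ipcam-general.pt"}          # model file(s) to leave in place (assumed name)
BACKUP = CUSTOM_DIR / "disabled"     # hypothetical backup subfolder

BACKUP.mkdir(exist_ok=True)
for model in CUSTOM_DIR.glob("*.pt"):
    if model.name not in KEEP:
        shutil.move(str(model), str(BACKUP / model.name))
        print(f"moved {model.name} -> {BACKUP}")
```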
 
When you click on the three dots (...), do you see your models?

[screenshot]

Yours looked grayed out.
 
Sounds like something is not working or installed right. I would stop it now and restart. You may need to reboot BI; make sure you have the latest version too.

Mine have always been greyed out.

@MikeLud1's screenshots have always shown it greyed out as well, as seen in the first post of this thread and in the example below. I think this field is here more for DeepStack, where you have to specify the custom models, whereas CodeProject simply pulls all of the models you put in the custom model folder?

[screenshot]
 
Hmmm... mine is not; when I click on it I see all the models. But I am running an old version of CPAI, 2.0.8.

Sorry, wrong thread, since this is the 2.5 thread. I did not know it got greyed out.
 
Mine is grayed out as well...
[screenshot]
 
I'm using Coral, and on the latest version of CPAI I can see 2 new models:
1. YOLOv8
2. EfficientDet-Lite

YOLOv8 seems interesting, but I don't have custom models there, nothing with ipcam anyway.

[screenshot]

Is there a way to have ipcam there?
Meanwhile, I'm getting great results from the Medium model: about 30 ms for a pretty accurate result.
 
I will stick with my AMD GPU based system running YOLOv5 .NET (large) with custom models.

My single Coral TPU (YOLOv8 / medium) test system uses the same cameras and trigger settings. Inference is faster with Coral (about 15 ms vs 25-30 ms), but otherwise Coral is not doing so well. Accuracy is way lower: if YOLOv5 with custom models detects something at 75%, Coral YOLOv8 with no custom models might detect it at 45-55%. Setting the alert threshold that low causes a lot of false positives.

Additionally, with Coral I get a lot of "AI 500" errors. Since the last reboot this morning, I actually have more "failedInferences" than "successfulInferences"; the reason for failure seems to be "Unable to create interpreter". It might be that I am overloading the Coral with too many requests with my fairly liberal trigger settings, but then again I have yet to get any "AI 500" errors with my AMD based "production" system.

I am eagerly waiting for:
  • A PCIe adapter for the Dual Coral TPU. I tested it in an E-key 2230 slot in my Fujitsu G9012, but it wasn't recognized. Hopefully having more Coral TPUs will help with the "AI 500" errors.
  • Custom models for YOLOv8 on Coral.
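For anyone who wants to reproduce this kind of comparison outside Blue Iris, here is a rough sketch that sends the same snapshot to the custom-model endpoint and to the default detection endpoint and prints the labels, confidences, and round-trip times. The host, port (32168 is the CPAI default), image path, and model name are assumptions; adjust them to your setup.

```python
# Sketch only: compare a custom model against default detection on one snapshot.
import time
import requests

SERVER = "http://127.0.0.1:32168"   # CPAI host:port (32168 is the default) -- adjust
IMAGE = "snapshot.jpg"              # any saved alert image

def detect(endpoint: str) -> None:
    """POST the image to a CPAI vision endpoint and print what comes back."""
    with open(IMAGE, "rb") as f:
        start = time.perf_counter()
        r = requests.post(f"{SERVER}{endpoint}",
                          files={"image": f},
                          data={"min_confidence": "0.4"})
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{endpoint}: round-trip {elapsed_ms:.0f} ms")
    for p in r.json().get("predictions", []):
        print(f"  {p['label']:<10} {p['confidence']:.2f}")

detect("/v1/vision/custom/ipcam-general")  # custom model (name assumed)
detect("/v1/vision/detection")             # default object detection
```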
 
When installing a new version, can you choose not to uninstall data and modules if it's just a minor version or you are not having problems? It takes forever to install if you do that every time. There has to be a better way.
 
I am also back to YOLOv5 .NET (Medium), also with custom models.
I see the same thing with the Coral M.2: many "nothing found" results in Blue Iris. Also, in the week or so that I used the Coral, I saw about 18% failed inferences, meaning 18,000 failed out of ~100,000 inferences.
With YOLOv5 .NET, before I rebooted the machine recently, I had zero failed inferences out of more than 150,000.
The USB Coral accelerator I have on the Ubuntu 22.04 machine has been running fine with Frigate NVR for over two months, no issues at all.
IMO, the Coral devices were a waste of money to use with CPAI. It seems the software implementation is a little off.

 
When installing a new version, can you choose not to uninstall data and modules if it's just a minor version or you are not having problems? It takes forever to install if you do that every time. There has to be a better way.
It might work; it depends on the module version. Some of the older modules will not work with the newer version of CodeProject.AI.

[screenshot]
 
A recent update to the latest BI seems to have broken the CPAI integration, and going back to previous versions doesn't seem to fix it.
The custom models are greyed out in the BI AI settings, but they can be enabled or disabled.
I'm running CPAI in Docker using the Coral TPU. Everything was working great until the BI update this morning. Now I get Error -1 when "use custom models" is checked.
Default object detection works, but I wasn't using it until now.
Here is a typical error showing how the system used to work with the TPU EfficientDet and now doesn't (top is the AI DAT from before the update, bottom is the live AI DAT analyzing the same image):

[screenshot]

Note that default objects is using the TPU successfully.
 
Try restarting the Blue Iris service to see if it starts working.
 
Try restarting the Blue Iris service to see if it starts working.

Operation is very erratic. I restarted the BI computer.
At the time of restart, custom models, default objects, and faces were enabled.
All calls to CPAI failed with error -1 (Objects, EfficientDet-Lite, MobileNet SSD, Faces).
Removed default objects detection: still the error.
Specified EfficientDet-Lite under the camera AI settings as a custom model: now EfficientDet returns results (~200 ms).
Re-enabled default objects: now both EfficientDet and Objects provide responses, however the EfficientDet response time changes from ~200 ms to over 1 second.

[screenshot]

Then removed the explicit call for EfficientDet-Lite in the camera AI settings: no notable change from the above in response times (this is technically the same configuration it booted into, when nothing initially worked).
Removing default object detection in the main settings (and not explicitly calling out EfficientDet-Lite in the camera settings) brings the response time of EfficientDet back to ~200 ms.

[screenshot]
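To put numbers on that slowdown, a small sketch like the one below can fire a batch of requests at one endpoint and report the average round-trip time; run it once with default objects enabled in BI and once without, then compare. The host, port, endpoint, and image path are assumptions for illustration, and the route for a given custom model may differ by CPAI version (check what the CPAI explorer page lists).

```python
# Sketch only: average round-trip latency over N requests to one CPAI endpoint.
import statistics
import time
import requests

SERVER = "http://192.168.1.50:32168"    # hypothetical CPAI Docker host -- adjust
ENDPOINT = "/v1/vision/detection"       # or a /v1/vision/custom/<model> route, depending on the module
IMAGE = "test.jpg"                      # any still image to test with
N = 20                                  # number of requests to average over

times_ms = []
for _ in range(N):
    with open(IMAGE, "rb") as f:
        start = time.perf_counter()
        requests.post(f"{SERVER}{ENDPOINT}", files={"image": f})
    times_ms.append((time.perf_counter() - start) * 1000)

print(f"{N} requests: mean {statistics.mean(times_ms):.0f} ms, "
      f"median {statistics.median(times_ms):.0f} ms, max {max(times_ms):.0f} ms")
```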
 
Did you uninstall and re-install per the instructions on Post #2 or did you simply install it over the existing?
 
Did you uninstall and re-install per the instructions on Post #2 or did you simply install it over the existing?
CPAI is running on a separate server.

Restarting BI results in all CPAI calls failing, regardless of which options are enabled. It seems I need to go in and start changing settings for it to work. I can change the settings back to the starting configuration, which didn't work at first, but it will now work after cycling through the different settings.
 
The Coral has a limited amount of memory, and running two models causes the CPU to be used; that is what is causing the slow speed.
 