Some success with a coral tpu (m.2) with CPAI and BI

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
627
Reaction score
285
Location
Uruguay
Always tweaking the settings, tbh - at the moment most exterior cams have this set in CPAI. Had it set to 20 post-trigger images before, but this makes no difference on the GPU.
Objects and % confidence vary a little on each cam.
On a busy day it can go for a long time and rarely exit the P8 state.

1714151103411.png
 

mailseth

Getting the hang of it
Joined
Dec 22, 2023
Messages
143
Reaction score
99
Location
California
Yeah, I wouldn't expect to see much difference between 0.03 FPS and 1.0 FPS as far as analysis load goes. It's going to really only make a difference when everything is moving because it's windy or raining.
 

m_listed

Getting the hang of it
Joined
Jun 11, 2016
Messages
180
Reaction score
57
Btw, noticed that any inference below the minimum confidence level you set in Blue Iris's global AI settings counts as a "failed inference" with an error code of 500 from CPAI. Just a heads up. If you set it to 1% or some other low number, that might get rid of all failed inferences. This is for the Coral but might apply to other modules too.
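To see this behavior for yourself, you can bypass Blue Iris and hit the CPAI detection endpoint directly with different thresholds. A minimal sketch (the URL/port are CPAI's DeepStack-compatible defaults and may differ on your install; the helper names are mine, not anything from CPAI itself):

```python
import requests

CPAI_URL = "http://localhost:32168/v1/vision/detection"  # default CPAI port; adjust for your setup

def detect(image_path: str, min_confidence: float) -> dict:
    """Send one snapshot to CPAI's object-detection endpoint."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            CPAI_URL,
            files={"image": f},
            data={"min_confidence": str(min_confidence)},
            timeout=30,
        )
    return resp.json()

def is_real_failure(response: dict) -> bool:
    """Treat an empty predictions list as success; only success=False counts as a failure."""
    return not response.get("success", False)

# Compare detect("snapshot.jpg", 0.01) against detect("snapshot.jpg", 0.6) and
# watch the module's failed-inference counter; if the theory above is right,
# a very low threshold should make it stop climbing.
```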
 

AlwaysSomething

Pulling my weight
Joined
Apr 24, 2023
Messages
133
Reaction score
105
Location
US
Btw, noticed that any inference below the minimum confidence level you set in Blue Iris's global AI settings counts as a "failed inference" with an error code of 500 from CPAI. Just a heads up. If you set it to 1% or some other low number, that might get rid of all failed inferences. This is for the Coral but might apply to other modules too.
Interesting.

Just to be clear we're talking about CPAI counting it as a failure?

1727450698067.png

I always thought the failed-inference count seemed high, since I rarely actually saw any failures (once I got it to be stable). I felt like it was counting 'nothing found' as a failure, which to me is not necessarily a failure (there was just nothing there). I assumed it was checking the post-trigger images (after the object left the FOV) and there was nothing there.

I recently saw in another post, either on IPCamTalk or the CodeProject site, that 'pre-trigger' images cause high failure rates as well, but I have all my cameras set to zero pre-trigger already. That gives my assumption some validation, but doesn't mean it's true.

As you can see in the picture, I'm running the 2.1.0 version of the Coral TPU module (CPAI v2.5.1). That was the most stable version where selecting the models still worked (I have some other posts explaining that bug). The startup seems wonky, and every once in a while I see that CPAI is 'Waiting' but with no errors. However, I feel the startup has always been wonky, and that it's the main app and not necessarily the modules (I notice it on the newer CPAI 2.6.5, where I am using just the ALPR module).
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
627
Reaction score
285
Location
Uruguay
Interesting.

Just to be clear we're talking about CPAI counting it as a failure?

View attachment 203860

I always thought the failed-inference count seemed high, since I rarely actually saw any failures (once I got it to be stable). I felt like it was counting 'nothing found' as a failure, which to me is not necessarily a failure (there was just nothing there). I assumed it was checking the post-trigger images (after the object left the FOV) and there was nothing there.

I recently saw in another post, either on IPCamTalk or the CodeProject site, that 'pre-trigger' images cause high failure rates as well, but I have all my cameras set to zero pre-trigger already. That gives my assumption some validation, but doesn't mean it's true.

As you can see in the picture, I'm running the 2.1.0 version of the Coral TPU module (CPAI v2.5.1). That was the most stable version where selecting the models still worked (I have some other posts explaining that bug). The startup seems wonky, and every once in a while I see that CPAI is 'Waiting' but with no errors. However, I feel the startup has always been wonky, and that it's the main app and not necessarily the modules (I notice it on the newer CPAI 2.6.5, where I am using just the ALPR module).
I have 2 instances running, and the one in Docker (Unraid app) seems to be much more stable than the CPAI service on Windows. Maybe due to it using the M.2 Coral as opposed to the USB Coral on the Windows box.

Plan to get a couple more PCIe Corals and put them in the M.2 Wi-Fi slots or a PCIe adapter, whichever PC I'm running it on

1727451716886.png
 

AlwaysSomething

Pulling my weight
Joined
Apr 24, 2023
Messages
133
Reaction score
105
Location
US
I have 2 instances running, and the one in Docker (Unraid app) seems to be much more stable than the CPAI service on Windows. Maybe due to it using the M.2 Coral as opposed to the USB Coral on the Windows box.

Plan to get a couple more PCIe Corals and put them in the M.2 Wi-Fi slots or a PCIe adapter, whichever PC I'm running it on
I'm also using only M.2 cards, no USB. I think I had one for the Wi-Fi slot (M.2 A+E) but it would not work in any of the Wi-Fi slots on my HP or Dell, so I ended up getting the ones that go in the M.2 B+M slots (similar to an SSD). I also have a couple of dual TPUs using the PCIe adapter.

I have never had a use for Docker so haven't used it but maybe I will in the future.

I still feel like the fails are due to the objects not being there anymore (or no objects found to begin with) but your stats with just 1 failed could prove me wrong. LOL
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
627
Reaction score
285
Location
Uruguay
I'm also using only M.2 cards, no USB. I think I had one for the Wi-Fi slot (M.2 A+E) but it would not work in any of the Wi-Fi slots on my HP or Dell, so I ended up getting the ones that go in the M.2 B+M slots (similar to an SSD). I also have a couple of dual TPUs using the PCIe adapter.

I have never had a use for Docker so haven't used it but maybe I will in the future.

I still feel like the fails are due to the objects not being there anymore (or no objects found to begin with) but your stats with just 1 failed could prove me wrong. LOL
So many settings in BI, too - maybe we have different setups. I still don't fully understand what they all do after 3-4 years of using BI.
Yes, I've always found CPAI more stable in a Docker container, either in a Proxmox LXC or on Unraid, which I have in addition.
The AI Tool on my Blue Iris VM is a great backup for when the CPAI service on Windows fails.

Every snapshot gets analysed by CPAI on the Windows instance and also by CPAI on the Unraid server (AI Tool, with the server specified in its settings).
If the CPAI on Windows stops working (TPU fails, for example), AI Tool is still sending the alerts to the second server.

Happened last night, and I didn't notice until today as the Unraid CPAI was still online
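The failover idea described here is simple enough to sketch in a few lines. Assuming two DeepStack-compatible CPAI endpoints (the host names are placeholders, and AI Tool handles this for you; this just shows the logic):

```python
import requests

# Placeholder endpoints: primary = Windows CPAI, secondary = Unraid/Docker CPAI
SERVERS = [
    "http://windows-host:32168/v1/vision/detection",
    "http://unraid-host:32168/v1/vision/detection",
]

def detect_with_failover(image_bytes: bytes, servers=SERVERS, post=requests.post) -> dict:
    """Try each CPAI server in order; return the first usable response."""
    last_error = None
    for url in servers:
        try:
            resp = post(url, files={"image": image_bytes}, timeout=10)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as err:  # service stopped, TPU died, timeout...
            last_error = err
    raise RuntimeError(f"all CPAI servers failed: {last_error}")
```

The `post` parameter is only there so the function can be exercised without a live server; in normal use you would call it with just the image bytes.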
 

m_listed

Getting the hang of it
Joined
Jun 11, 2016
Messages
180
Reaction score
57
Interesting.

Just to be clear we're talking about CPAI counting it as a failure?

View attachment 203860

I always thought the failed-inference count seemed high, since I rarely actually saw any failures (once I got it to be stable). I felt like it was counting 'nothing found' as a failure, which to me is not necessarily a failure (there was just nothing there). I assumed it was checking the post-trigger images (after the object left the FOV) and there was nothing there.

I recently saw in another post, either on IPCamTalk or the CodeProject site, that 'pre-trigger' images cause high failure rates as well, but I have all my cameras set to zero pre-trigger already. That gives my assumption some validation, but doesn't mean it's true.

As you can see in the picture, I'm running the 2.1.0 version of the Coral TPU module (CPAI v2.5.1). That was the most stable version where selecting the models still worked (I have some other posts explaining that bug). The startup seems wonky, and every once in a while I see that CPAI is 'Waiting' but with no errors. However, I feel the startup has always been wonky, and that it's the main app and not necessarily the modules (I notice it on the newer CPAI 2.6.5, where I am using just the ALPR module).
Does 2.5.1 have multi-TPU support? Since 2.8.0 and 2.6.5 are both broken, I’m on 2.6.2, but even that is broken where I can’t use any model except MobileNet Small and Medium.
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
627
Reaction score
285
Location
Uruguay
Does 2.5.1 have multi-TPU support? Since 2.8.0 and 2.6.5 are both broken, I’m on 2.6.2, but even that is broken where I can’t use any model except MobileNet Small and Medium.
I had to go into modulesettings.json and update several parameters myself and save it -
autostart/model/size, etc.
Thought they had fixed that in 2.8, though.

C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\
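The fields in question sit under the module's LaunchSettings and EnvironmentVariables blocks. A minimal sketch of the kind of edit, using only field names that appear in the Coral module's settings (values illustrative):

```json
{
  "Modules": {
    "ObjectDetectionCoral": {
      "LaunchSettings": {
        "AutoStart": true
      },
      "EnvironmentVariables": {
        "MODEL_SIZE": "Medium"
      }
    }
  }
}
```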
 

m_listed

Getting the hang of it
Joined
Jun 11, 2016
Messages
180
Reaction score
57
I had to go into modulesettings.json and update several parameters myself and save it -
autostart/model/size, etc.
Thought they had fixed that in 2.8, though.

C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\
What version are you running?
 

AlwaysSomething

Pulling my weight
Joined
Apr 24, 2023
Messages
133
Reaction score
105
Location
US
Yeah, it lets you choose the model and size and shows it in the log file, but it isn't actually using them. Here is a post that I created on the CPAI site that describes it:


There was another thread where someone else said one of the larger models was indeed working but the mediums were not (confirming my finding). I actually took screenshots of the test from the Explorer and posted the pics with the times, but can't find that post at the moment. I'll try once more and see if I can find it.
 

AlwaysSomething

Pulling my weight
Joined
Apr 24, 2023
Messages
133
Reaction score
105
Location
US
Here it is:


That was running the EfficientDet Lite model using the medium size. If you try changing the model in the current versions, it doesn't change the times or inferences (objects and confidence). I think someone said using the Large size did work, but I can't find that post and don't remember if I tested it. I know I didn't use Large because the times were longer with no increased accuracy for my cases.
 

AlwaysSomething

Pulling my weight
Joined
Apr 24, 2023
Messages
133
Reaction score
105
Location
US
Does 2.5.1 have multi-TPU support? Since 2.8.0 and 2.6.5 are both broken, I’m on 2.6.2, but even that is broken where I can’t use any model except MobileNet Small and Medium.
2.5.1 is the CPAI version, and the Coral Object Detection (Coral) module is v2.1.0. Just want to clarify, because you can theoretically have different module versions running under different CPAI application versions. It was confusing to me at first, but I understood it once I was asked to try different modules under different app versions. Usually most people just refer to the app version.

Back to answering your question: as far as the Coral Object Detection (Coral) v2.1.0 goes, it does have the option, but I don't think it worked. IIRC it would give a timeout message if there was no activity for a while, or something like that, but I'm not 100% sure. I tried so many versions it's hard to remember which had what problem.

The PC I have this running on only has a single TPU, so I can't test it for you at the moment. I have multiple TPUs in another PC but don't have CPAI installed there. I'll see if I have some time later or tomorrow to test it out. I learned not to touch my working PC until I test things out on another PC first. LOL
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
627
Reaction score
285
Location
Uruguay
Here it is:


That was running the EfficientDet Lite model using the medium size. If you try changing the model in the current versions it doesn't change the times or inferences (objects and confidence). I think someone said using the Large size did work but I can't find that post and don't remember if I tested it. I know I didn't use Large because the times were longer and no increased accuracy for my cases.
Good catch - I see what you mean.

I had mine set to efficientdet-lite medium.
Tested in the dashboard and it took 434ms!

Then I ran a trigger on my cameras and got this in the logs (it went from 434ms to 79ms, with it stating it forced a model reload).

Then I tested the same pic in the dashboard and it was reduced to 40ms!!

What size is the testing dashboard using, then? And yes, the logs say 'Model change detected. Forcing model reload'


objectdetection_coral_adapter.py: Object Detection (Coral) started.
objectdetection_coral_adapter.py: Model change detected. Forcing model reload.
objectdetection_coral_adapter.py: Refreshing the Tensorflow Interpreter


14:52:15:Object Detection (Coral): Retrieved objectdetection_queue command 'detect'
14:52:15:Response rec'd from Object Detection (Coral) command 'detect' (...03a2e3) [''] took 22ms
14:52:15:Response rec'd from Object Detection (Coral) command 'detect' (...caa200) ['Found book, car, book...'] took 79ms


I hadn't noticed this before.
Indeed very buggy.


Still my variables say

1727460192383.png
My 2nd instance has yolov8 on it and seems more stable. Accurate also

1727460342795.png
 

AlwaysSomething

Pulling my weight
Joined
Apr 24, 2023
Messages
133
Reaction score
105
Location
US
Good catch - I see what you mean.

I had mine set to efficientdet-lite medium.
Tested in the dashboard and it took 434ms!

Then I ran a trigger on my cameras and got this in the logs (it went from 434ms to 79ms, with it stating it forced a model reload).

Then I tested the same pic in the dashboard and it was reduced to 40ms!!

What size is the testing dashboard using, then? And yes, the logs say 'Model change detected. Forcing model reload'


14:52:15:objectdetection_coral_adapter.py: Object Detection (Coral) started.
14:52:15:objectdetection_coral_adapter.py: Model change detected. Forcing model reload.
14:52:15:objectdetection_coral_adapter.py: Refreshing the Tensorflow Interpreter


14:52:15:Object Detection (Coral): Retrieved objectdetection_queue command 'detect'
14:52:15:Response rec'd from Object Detection (Coral) command 'detect' (...03a2e3) [''] took 22ms
14:52:15:Response rec'd from Object Detection (Coral) command 'detect' (...caa200) ['Found book, car, book...'] took 79ms


I hadn't noticed this before.
Indeed very buggy.
Yeah, it was hard to document in a way people could understand. I also think that's why people gave up on the TPU: no matter what model or size they tried, it didn't improve their results. Unless they were happy with the default.

Interesting. I didn't see the forcing model reload in the logs but maybe because I only had it set for info. I'll have to remember that.

I think the default size is Small but I can't remember if that was what it was actually using too.
 

AlwaysSomething

Pulling my weight
Joined
Apr 24, 2023
Messages
133
Reaction score
105
Location
US
One more thing to note in case you didn't know: the first run of a model will always be longer if the model needs to be loaded. So if you really want accurate times, run more than one test in a row or use the Benchmark tool. I use the Explorer to test for accuracy and then the Benchmark tool for times. I also try not to use my BI PC, since it will be passing images as well and can skew the results.
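That warm-up effect is easy to factor out in a quick timing harness: throw away the first call(s), then average the rest. A minimal sketch (the `detect` argument stands in for whatever sends one image to CPAI; nothing here is CPAI-specific):

```python
import time

def time_inference(detect, n_runs=10, warmup=1):
    """Average per-call time, discarding warm-up runs where the model loads."""
    for _ in range(warmup):
        detect()  # first call may include model load/compile time
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        detect()
        times.append(time.perf_counter() - t0)
    return sum(times) / len(times)
```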
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
627
Reaction score
285
Location
Uruguay
Yeah, it was hard to document in a way people could understand. I also think that's why people gave up on the TPU: no matter what model or size they tried, it didn't improve their results. Unless they were happy with the default.

Interesting. I didn't see the forcing model reload in the logs but maybe because I only had it set for info. I'll have to remember that.

I think the default size is Small but I can't remember if that was what it was actually using too.
I'm sure future versions will improve it. I've removed my GPU now, as I'm happy so far with efficientdet-lite, whether it's small or medium.
But also, my 2nd instance with yolov8 large, going through AI Tool to my Unraid CPAI docker, works well as a failover and double check.

So the blue line indicates the jpegs going to AI Tool only.
The other triggers are being consumed by the CPAI on the actual Windows machine.

1727460762065.png
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
627
Reaction score
285
Location
Uruguay
One more thing to note in case you didn't know. The first run of a model will always be longer if the model needs to be loaded. So if you really want to get accurate times run more than one test in a row or use the Benchmark tool. I used the Explorer to test for accuracy and then use the Benchmark tool for times. I also try not to use my BI PC since it will be passing images as well and can skew the results.
Also, the Vision detection on the dashboard does not say what size model it is using. This is after I press custom detect several times so it is 'warmed up'.

I'm assuming this is medium or even large, as the same pics only take 40+ms on my cameras

# | Label | Confidence
0 | person | 79%

Processed by: ObjectDetectionCoral
Processed on: localhost
Analysis round trip: 516 ms
Processing: 509 ms
Inference: 501 ms
Timestamp (UTC): Fri, 27 Sep 2024 18:17:23 GMT
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
627
Reaction score
285
Location
Uruguay
Yeah, it was hard to document in a way people could understand. I also think that's why people gave up on the TPU: no matter what model or size they tried, it didn't improve their results. Unless they were happy with the default.

Interesting. I didn't see the forcing model reload in the logs but maybe because I only had it set for info. I'll have to remember that.

I think the default size is Small but I can't remember if that was what it was actually using too.
This may be of interest: in the json, "PreInstall" is false for all EfficientDet models, while MobileNet is set to true.
I am changing this to true to see the effect.



"DownloadableModels":[

{ "Name": "MobileNet Large", "Filename": "objectdetection-mobilenet-large-edgetpu.zip", "Folder": "assets", "Description": "MobileNet object detection, Large", "FileSizeKb": 275800, "PreInstall": true },
{ "Name": "MobileNet Medium", "Filename": "objectdetection-mobilenet-medium-edgetpu.zip", "Folder": "assets", "Description": "MobileNet object detection, Medium", "FileSizeKb": 275800, "PreInstall": true },
{ "Name": "MobileNet Small", "Filename": "objectdetection-mobilenet-small-edgetpu.zip", "Folder": "assets", "Description": "MobileNet object detection, Small", "FileSizeKb": 275800, "PreInstall": true },
{ "Name": "MobileNet Tiny", "Filename": "objectdetection-mobilenet-tiny-edgetpu.zip", "Folder": "assets", "Description": "MobileNet object detection, Tiny", "FileSizeKb": 275800, "PreInstall": true },

{ "Name": "EfficientDet Large", "Filename": "objectdetection-efficientdet-large-edgetpu.zip", "Folder": "assets", "Description": "EfficientDet object detection, Large", "FileSizeKb": 275800, "PreInstall": false },
{ "Name": "EfficientDet Medium", "Filename": "objectdetection-efficientdet-medium-edgetpu.zip", "Folder": "assets", "Description": "EfficientDet object detection, Medium", "FileSizeKb": 275800, "PreInstall": false },
{ "Name": "EfficientDet Small", "Filename": "objectdetection-efficientdet-small-edgetpu.zip", "Folder": "assets", "Description": "EfficientDet object detection, Small", "FileSizeKb": 275800, "PreInstall": false },
{ "Name": "EfficientDet Tiny", "Filename": "objectdetection-efficientdet-tiny-edgetpu.zip", "Folder": "assets", "Description": "EfficientDet object detection, Tiny", "FileSizeKb": 275800, "PreInstall": false },



ALSO: modulesettings.jetson.json

was set to false and Tiny. I changed this to true and Medium. No idea if this has any effect.


{
  "Modules": {
    "ObjectDetectionCoral": {
      "LaunchSettings": {
        "AutoStart": true,
        "Runtime": "python3.8"
      },
      "EnvironmentVariables": {
        "MODEL_SIZE": "Medium"
      }
    }
  }
}
 
Last edited:

AlwaysSomething

Pulling my weight
Joined
Apr 24, 2023
Messages
133
Reaction score
105
Location
US
Yeah, it's improved a lot since it first came out. It was the main reason I bought a TPU.... and then a few more. :p

When I found the bug about the models not being used, I did look at the downloaded files to compare the sizes, hoping it was something simple like that, but they seemed legit (same file sizes from version to version). It also always downloaded them for me at install.

If I had more time I would like to actually get into the code and learn it, and maybe even help with debugging. But I've been wanting to do that for a year now and still haven't, soo... :idk: I still have a few cameras I bought from the summer sale that I still haven't put up. :banghead:

What is AI Tool? I've seen that in these threads before (maybe from you, LOL) but when Googling it I just get a list of AI tools that are available. I thought I found something once, but it was a service, and I don't want to use any services or rely on the internet. Prefer everything local.

I've only been using BI a little over a year and still learning new things every day. Same thing with CPAI and even the cameras themselves. This has become an addiction :eek:
 