Btw, noticed that any inference below the minimum confidence level you set in Blue Iris's global AI settings counts as a "failed inference" with an error code of 500 from CPAI. Just a heads up. If you set it to 1% or some other low number, that might get rid of all failed inferences. This is for the Coral but might apply to other modules too.
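If you want to see what that threshold does outside of BI, here's a rough sketch of hitting the detection endpoint directly. It assumes the default port 32168, a local test.jpg, and the DeepStack-style min_confidence form field, so adjust for your own install:

```python
# Minimal sketch: post one image to CodeProject.AI and apply a confidence floor.
import requests

CPAI_URL = "http://localhost:32168/v1/vision/detection"

with open("test.jpg", "rb") as f:
    resp = requests.post(
        CPAI_URL,
        files={"image": f},
        data={"min_confidence": "0.40"},  # same idea as BI's global minimum confidence
    )

result = resp.json()
# A request that errors out, or whose detections all fall below the floor,
# is what the stats page rolls up as a "failed inference".
print(resp.status_code, result.get("success"), result.get("predictions"))
```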
Interesting. Just to be clear, we're talking about CPAI counting it as a failure?
View attachment 203860
I always thought the failed inference count seemed high since I rarely actually saw any failures (once I got it to be stable). I felt like it was counting "nothing found" as a fail, which to me is not necessarily a failure (there was just nothing there). I assumed it was checking the post-trigger images (after the object left the FOV) and there was nothing there.
I recently saw in another post, either on IPCamTalk or the CodeProject site, that "pre-trigger" images cause high failure rates as well, but I already have all my cameras set to zero for pre-trigger. That gives my assumption some validation but doesn't mean it's true.
As you can see in the picture, I'm running the 2.1.0 version of the Coral TPU module (CPAI v2.5.1). That was the most stable version where selecting the models still worked (I have some other posts explaining that bug). The startup seems wonky and every once in a while I see that CPAI is "Waiting" but with no errors. However, I feel the startup has always been wonky and it's the main app and not necessarily the modules (I notice it on the newer CPAI 2.6.5, where I am using just the ALPR module).
I have 2 instances running and the one in Docker (Unraid app) seems to be much more stable than the CPAI service on Windows. Maybe due to it using the M.2 Coral as opposed to the USB Coral on the Windows machine.
Plan to get a couple more PCIe Corals and put them in the M.2 Wi-Fi slots or a PCIe adapter, whichever PC I'm running it on.
So many settings in BI also that maybe we have different setups. I still don't fully understand what they all do after 3-4 years of using BI.

I'm also using only M.2 cards. No USB. I think I had one for the Wi-Fi slot (M.2 A+E) but it would not work in any of the Wi-Fi slots on my HP or Dell, so I ended up getting the ones that go in the M.2 B+M slots (similar to an SSD). I also have a couple of dual TPUs using the PCIe adapter.
I have never had a use for Docker so haven't used it but maybe I will in the future.
I still feel like the fails are due to the objects not being there anymore (or no objects found to begin with) but your stats with just 1 failed could prove me wrong. LOL
Does 2.5.1 have multi-TPU support? Since 2.8.0 and 2.6.5 are both broken, I'm on 2.6.2, but even that is broken where I can't use any model except MobileNet Small and Medium.
I had to go into the modulesettings.json and update several parameters myself and save it.
What version are you running?
autostart/model/size etc
Thought they had fixed that in 2.8 though
C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\
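For anyone who hasn't done that edit before, this is roughly the kind of change, sketched out only: the key names and nesting ("Modules", "AutoStart", "MODEL_SIZE") are placeholders that differ between module versions, so open your own modulesettings.json first and match whatever it actually contains.

```python
# Hedged sketch of hand-editing the Coral module's settings file.
# Key names/nesting below are examples only; check your own file first.
# Run from an elevated prompt, since the file lives under Program Files.
import json
from pathlib import Path

path = Path(r"C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\modulesettings.json")
settings = json.loads(path.read_text(encoding="utf-8"))

module = settings["Modules"]["ObjectDetectionCoral"]       # verify this nesting in your file
module["AutoStart"] = True                                  # example: autostart
module.setdefault("EnvironmentVariables", {})["MODEL_SIZE"] = "Medium"  # example: model size

path.write_text(json.dumps(settings, indent=2), encoding="utf-8")
# Restart the module (or the whole CPAI service) so the change is picked up.
```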
Here it is:
CodeProject.AI Server: AI the easy way. Version 2.6.5. Our fast, free, self-hosted Artificial Intelligence Server for any platform, any language (www.codeproject.com)
That was running the EfficientDet Lite model using the medium size. If you try changing the model in the current versions, it doesn't change the times or the inferences (objects and confidence). I think someone said using the Large size did work, but I can't find that post and don't remember if I tested it. I know I didn't use Large because the times were longer with no increased accuracy for my cases.
Good catch see what you mean.
I had mine set to efficientdet-lite medium.
Tested in the dashboard and took 434ms!
Then ran trigger on my cameras and got this in the logs (went from 434ms to 79ms, with it stating it forced a model reload).
Then I tested the same pic in the dashboard and it was reduced to 40ms!!
What size is the testing dashboard using then? And yes, the log says 'Model change detected. Forcing model reload'.
14:52:15: objectdetection_coral_adapter.py: Object Detection (Coral) started.
14:52:15: objectdetection_coral_adapter.py: Model change detected. Forcing model reload.
14:52:15: objectdetection_coral_adapter.py: Refreshing the Tensorflow Interpreter
14:52:15:Object Detection (Coral): Retrieved objectdetection_queue command 'detect'
14:52:15:Response rec'd from Object Detection (Coral) command 'detect' (...03a2e3) [''] took 22ms
14:52:15:Response rec'd from Object Detection (Coral) command 'detect' (...caa200) ['Found book, car, book...'] took 79ms
I hadn't noticed this before.
Indeed very buggy.
I'm sure in future versions it will improve. I've removed my GPU now as I'm happy so far.

Yeah, it was hard to document for people to understand. I also think that is why people gave up on the TPU, because no matter what model or size they tried it didn't improve their results. Unless they were happy with the default.
Interesting. I didn't see the forcing model reload in the logs, but maybe that's because I only had logging set to Info. I'll have to remember that.
I think the default size is Small but I can't remember if that was what it was actually using too.
Also, the Vision detection on the dashboard does not say what size model it is using. This is after I press custom detect several times so it is 'warmed up'.

One more thing to note in case you didn't know: the first run of a model will always be longer if the model needs to be loaded. So if you really want to get accurate times, run more than one test in a row or use the Benchmark tool. I use the Explorer to test for accuracy and then the Benchmark tool for times. I also try not to use my BI PC since it will be passing images as well and can skew the results.
# | Label | Confidence
---|---|---
0 | person | 79%

Processed by: ObjectDetectionCoral
Processed on: localhost
Analysis round trip: 516 ms
Processing: 509 ms
Inference: 501 ms
Timestamp (UTC): Fri, 27 Sep 2024 18:17:23 GMT
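To put numbers on that warm-up effect, a throwaway loop like this works; it assumes CPAI on the default port 32168 and a local sample.jpg, and inferenceMs/processMs are the timing fields recent builds report, so fall back gracefully if yours names them differently. The same trick works for checking whether a model or size change actually took effect: run it, change the setting, run it again and compare.

```python
# Throwaway warm-up check: send the same image several times in a row and
# compare the first call (which may include the model load) with the later ones.
import requests

URL = "http://localhost:32168/v1/vision/detection"

for i in range(5):
    with open("sample.jpg", "rb") as f:
        result = requests.post(URL, files={"image": f}).json()
    print(f"run {i + 1}: inference={result.get('inferenceMs', 'n/a')} ms, "
          f"process={result.get('processMs', 'n/a')} ms")
```

If the only difference really is the model load, the first line should look like the 400-500 ms numbers above and the rest more like the 40-80 ms ones.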
This may be of interest in the json ("PreInstall": false for all EfficientDet models).
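If anyone wants to check their own install for that flag without scrolling through the file by hand, here's a small scan. It just walks whatever modulesettings*.json files sit in the module folder and prints anything carrying a PreInstall key; the folder path and the Name/Filename fallbacks are assumptions to adjust for your setup.

```python
# Hedged helper: list every entry in the module's JSON files that has a
# "PreInstall" flag, without assuming where in the structure it lives.
import json
from pathlib import Path

module_dir = Path(r"C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral")  # adjust

def find_preinstall(node, trail=""):
    """Recursively walk parsed JSON and report any dict carrying a PreInstall key."""
    if isinstance(node, dict):
        if "PreInstall" in node:
            name = node.get("Name") or node.get("Filename") or trail or "(top level)"
            print(f"  {name}: PreInstall={node['PreInstall']}")
        for key, value in node.items():
            find_preinstall(value, f"{trail}/{key}")
    elif isinstance(node, list):
        for i, item in enumerate(node):
            find_preinstall(item, f"{trail}[{i}]")

for json_file in sorted(module_dir.glob("modulesettings*.json")):
    print(f"-- {json_file.name}")
    find_preinstall(json.loads(json_file.read_text(encoding="utf-8")))
```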