Some success with a Coral TPU (M.2) with CPAI and BI

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
613
Reaction score
282
Location
Uruguay
Thought I'd see what was new in the latest Coral CPAI module.
Trying to reduce my servers' power consumption; the old GTX 970 (solid and fast) draws quite a bit running 24/7 just for this.

Last year I had zero success with tpu/cpai and bi combo.

Last night I managed to get it working with the medium and large models at fast speeds (of course not as accurate as the GPU models).
The default MobileNet SSD is fast, though, and is what I'm currently testing with.


"inferenceDevice": null,
"inferenceLibrary": "TF-Lite",
"canUseGPU": "false",
"successfulInferences": 472,
"failedInferences": 101,
"numInferences": 573,
"averageInferenceMs": 8.201271186440678

No custom models built in. I've seen a few around GitHub but haven't tried any yet.
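For anyone scripting against it, stats like the ones above come from CPAI's REST interface, and detection itself is a simple HTTP POST. A minimal sketch, assuming the default CodeProject.AI Server port 32168 and the standard `/v1/vision/detection` endpoint (adjust for your install):

```python
import requests

CPAI_URL = "http://localhost:32168/v1/vision/detection"  # default port; adjust for your install

def detect(image_path, min_confidence=0.4, url=CPAI_URL):
    """POST a JPEG to the CPAI object-detection endpoint and return its JSON response."""
    with open(image_path, "rb") as f:
        resp = requests.post(url, files={"image": f},
                             data={"min_confidence": str(min_confidence)})
    resp.raise_for_status()
    return resp.json()

def summarize(result):
    """Reduce a detection response to (label, confidence) pairs."""
    return [(p["label"], round(p["confidence"], 2))
            for p in result.get("predictions", [])]
```

`summarize(detect("alert.jpg"))` would give something like `[("car", 0.87)]` when the module finds a car.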
 

jrbeddow

Getting comfortable
Joined
Oct 26, 2021
Messages
374
Reaction score
489
Location
USA
Thought I'd see what was new in the latest Coral CPAI module.
Trying to reduce my servers' power consumption; the old GTX 970 (solid and fast) draws quite a bit running 24/7 just for this.

Last year I had zero success with tpu/cpai and bi combo.

Last night I managed to get it working with the medium and large models at fast speeds (of course not as accurate as the GPU models).
The default MobileNet SSD is fast, though, and is what I'm currently testing with.


"inferenceDevice": null,
"inferenceLibrary": "TF-Lite",
"canUseGPU": "false",
"successfulInferences": 472,
"failedInferences": 101,
"numInferences": 573,
"averageInferenceMs": 8.201271186440678

No custom models built in. I've seen a few around GitHub but haven't tried any yet.
Good news, but the number of failed inferences seems rather high: more than one out of every six. Isn't that a problem?
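For reference, the "one out of six" figure checks out against the quoted stats:

```python
# Failure rate from the stats quoted above
failed, total = 101, 573
rate = failed / total
print(f"{rate:.1%}")   # 17.6%, noticeably worse than 1 in 6 (about 16.7%)
```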
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
613
Reaction score
282
Location
Uruguay
Tried the large MobileNet SSD and it was much slower, so back to medium; I'll test over the next few days to observe stability and accuracy. Not sure how it does on small animals yet.
Not pulling my GPU out just yet.
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
613
Reaction score
282
Location
Uruguay
Good news, but the number of failed inferences seems rather high: more than one out of every six. Isn't that a problem?
I believe those failures were due to some cams not being set up properly. Still got some days of testing, observation and tweaking to do.
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
613
Reaction score
282
Location
Uruguay
Tweaked a few things: medium model, and unticked 'use main stream if available' in BI.

Better: 129 successful, 5 failed.
I'll have a look at the .dat files in BI

 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
613
Reaction score
282
Location
Uruguay
It's working very well in combination with AITool as well, which I just re-added.
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
613
Reaction score
282
Location
Uruguay
So far so good.
Removed my GPU.
Switched off the Blue Iris AI tick option.
So all triggered JPEGs go into a local folder and AITool processes them through CPAI. I can fine-tune each object better.
Faster too:
15-19 ms round trips and 8.6 ms inference times.
AITool flashes through them in the blink of an eye.
Picked up the dog just fine
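The folder-based pipeline described above is easy to reproduce. A rough sketch of the polling side, with `process` standing in for whatever sends each JPEG to CPAI (function names here are my own, for illustration only):

```python
import time
from pathlib import Path

def new_jpegs(folder, seen):
    """Return alert JPEGs not yet processed, oldest first by mtime."""
    paths = sorted(Path(folder).glob("*.jpg"), key=lambda p: p.stat().st_mtime)
    return [p for p in paths if p.name not in seen]

def watch(folder, process, poll_secs=0.5):
    """Poll a Blue Iris alert folder and hand each new JPEG to `process`."""
    seen = set()
    while True:
        for path in new_jpegs(folder, seen):
            process(path)        # e.g. POST to CPAI, then filter/alert
            seen.add(path.name)
        time.sleep(poll_secs)
```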
 


freman

n3wb
Joined
Jul 3, 2020
Messages
16
Reaction score
10
Location
Australia
I finally made the effort to install my dual TPU last night. It went reasonably well; I may have nuked the CPAI install three or four times before I got it working with YOLOv8-small. I'm clearly going to need to spend some time tuning, as my fence is neither a bench nor a train and my entryway isn't a plane :D

Spent most of the night trying to convince one of my cameras that rain doesn't qualify as "movement" and to stop trying to autofocus.

I don't suppose you have this strangeness in your logs or is it just me?

Code:
10:52:12:objectdetection_coral_adapter.py: WARNING:root:No work in 60.0 seconds, watchdog shutting down TPUs.
10:52:15:Response rec'd from Object Detection (Coral) command 'detect' (...b307f7) ['']  took 5ms
10:52:15:Response rec'd from Object Detection (Coral) command 'detect' (...866eea) ['Found car']  took 14ms
10:52:16:Response rec'd from Object Detection (Coral) command 'detect' (...382749) ['Found car, car']  took 14ms
10:52:16:Response rec'd from Object Detection (Coral) command 'detect' (...66df48) ['Found car']  took 13ms
10:52:17:Response rec'd from Object Detection (Coral) command 'detect' (...5542dc) ['No objects found']  took 13ms
10:52:17:Response rec'd from Object Detection (Coral) command 'detect' (...a49dd4) ['No objects found']  took 13ms
10:52:17:objectdetection_coral_adapter.py: WARNING:root:No work in 60.0 seconds, watchdog shutting down TPUs.
10:52:19:Response rec'd from Object Detection (Coral) command 'detect' (...a8ab63) ['']  took 3ms
10:52:19:Response rec'd from Object Detection (Coral) command 'detect' (...3bbb01) ['Found car']  took 15ms
10:52:19:Response rec'd from Object Detection (Coral) command 'detect' (...9ac8be) ['Found car, car, car']  took 13ms
10:52:20:Response rec'd from Object Detection (Coral) command 'detect' (...2c1a1c) ['Found car, car, train']  took 12ms
10:52:22:objectdetection_coral_adapter.py: WARNING:root:No work in 60.0 seconds, watchdog shutting down TPUs.
It's clearly got work to do...
 

mailseth

Getting the hang of it
Joined
Dec 22, 2023
Messages
126
Reaction score
88
Location
California
Yeah, it’s a little overzealous in shutting down the TPUs when not used, but they should come back online pretty quickly. The next version should both wait longer before shutting them down and bring them back up faster.
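The pattern behind those log lines is essentially an idle timer around the device handles. This is not CPAI's actual code, only a sketch of the general idea (names invented):

```python
import threading
import time

class IdleWatchdog:
    """Close a resource after `idle_secs` without work; reopen lazily on the next request."""

    def __init__(self, open_fn, close_fn, idle_secs=60.0):
        self.open_fn, self.close_fn = open_fn, close_fn
        self.idle_secs = idle_secs
        self.last_work = time.monotonic()
        self.is_open = False
        self.lock = threading.Lock()

    def acquire(self):
        """Call before each inference: reopens the device if needed, stamps the clock."""
        with self.lock:
            if not self.is_open:
                self.open_fn()
                self.is_open = True
            self.last_work = time.monotonic()

    def tick(self):
        """Call periodically from a background thread: closes the device when idle."""
        with self.lock:
            if self.is_open and time.monotonic() - self.last_work > self.idle_secs:
                self.close_fn()
                self.is_open = False
```

With a 60 s timeout and bursty camera triggers, exactly the log pattern above falls out: shutdown warnings interleaved with fast detections as the TPUs are reopened on demand.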
 

freman

n3wb
Joined
Jul 3, 2020
Messages
16
Reaction score
10
Location
Australia
Yeah, it’s a little overzealous in shutting down the TPUs when not used, but they should come back online pretty quickly. The next version should both wait longer before shutting them down and bring them back up faster.
I mean, it's working, I'm not complaining; I was just a little concerned that the timestamps aren't even a second apart :D
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
613
Reaction score
282
Location
Uruguay
Certainly not as accurate as YOLOv5 on the GPU. It did not pick up a small dog during the night.
So I'm still in two minds whether to pay the extra to keep my GPU running or to compromise detection for the savings.
 

freman

n3wb
Joined
Jul 3, 2020
Messages
16
Reaction score
10
Location
Australia
Certainly not as accurate as YOLOv5 on the GPU.
Lol, I was joking with a coworker today: I upgraded to a Coral Edge TPU and now I can identify trees and garbage bins as people faster than ever before while significantly saving on power consumption. I also get a bonus airplane (front porch), a bench/train (fence by the driveway), the occasional cup (letter box), and a book that shows up at night (solar garden light).

I'm hanging for the ipcam-dark set lol
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
613
Reaction score
282
Location
Uruguay
Lol, I was joking with a coworker today: I upgraded to a Coral Edge TPU and now I can identify trees and garbage bins as people faster than ever before while significantly saving on power consumption. I also get a bonus airplane (front porch), a bench/train (fence by the driveway), the occasional cup (letter box), and a book that shows up at night (solar garden light).

I'm hanging for the ipcam-dark set lol
LOL

This is where AITool comes in very handy with this method. It filters out all the crap, and you can fine-tune the confidence and the object-to-image size ratio for each object.
With the AI options in BI you can only specify the confidence percentage, not min/max size. Hope that comes in the future.
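That min/max-size filtering is straightforward to do yourself on CPAI's response, since each prediction carries `x_min`/`y_min`/`x_max`/`y_max` pixel coordinates. A sketch (the thresholds are illustrative, not AITool's defaults):

```python
def passes(pred, img_w, img_h, min_conf=0.5, min_area_pct=0.5, max_area_pct=60.0):
    """Keep a detection only if its confidence and box-to-image area ratio are in range."""
    if pred["confidence"] < min_conf:
        return False
    box_area = (pred["x_max"] - pred["x_min"]) * (pred["y_max"] - pred["y_min"])
    area_pct = 100.0 * box_area / (img_w * img_h)
    return min_area_pct <= area_pct <= max_area_pct
```

Tuning the thresholds per object class (tiny "person" boxes out, frame-filling boxes out) is a cheap way to cut down false positives like the trees and bins mentioned earlier.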

Is YOLOv8-small more accurate than the medium MobileNet SSD model?

Still buggy; I had to reinstall several times when trying to tweak models etc.

I had all these errors in multi-TPU mode as well, so I switched it to single:

'objectdetection_coral_adapter.py: WARNING:root:No work in 60.0 seconds, watchdog shutting down TPUs.'

I also increased the image size for one camera which is higher up and did not capture the small animal where it did before with the GPU.
 
Last edited:

freman

n3wb
Joined
Jul 3, 2020
Messages
16
Reaction score
10
Location
Australia
I haven't had a chance to get back to it since I crashed at 5am after spending "just a couple of hours" on it. It's better at detecting people in the yard than my old setup; I'm going to put that down to speed. I run all my alerting over MQTT to Node-RED, which figures out if the people are in an area I care about before it alerts me. The old setup would never ping people from the driveway or the front door no matter how I fiddled with it, and rarely even flagged them in BI. Now it's faster than the doorbell.
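For the MQTT-to-Node-RED leg, the payload only needs the camera and the detections; the zone logic stays in Node-RED. A sketch of the message building (the topic layout and field names are my own, not a standard):

```python
import json

def alert_payload(camera, predictions, zone=None):
    """Serialize CPAI detections into a JSON payload for Node-RED to zone-filter."""
    return json.dumps({
        "camera": camera,
        "zone": zone,
        "objects": [{"label": p["label"], "confidence": p["confidence"]}
                    for p in predictions],
    })

# Publishing is then one call with paho-mqtt, e.g.:
#   client.publish(f"bi/{camera}/alert", alert_payload(camera, preds))
```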
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
613
Reaction score
282
Location
Uruguay
I haven't had a chance to get back to it since I crashed at 5am after spending "just a couple of hours" on it. It's better at detecting people in the yard than my old setup; I'm going to put that down to speed. I run all my alerting over MQTT to Node-RED, which figures out if the people are in an area I care about before it alerts me. The old setup would never ping people from the driveway or the front door no matter how I fiddled with it, and rarely even flagged them in BI. Now it's faster than the doorbell.
Certainly is fast, is all I can say. My old GPU was no slouch (40 ms or so) but at the cost of generating lots of heat in my PC $$$ (and room).
I use MQTT to Node-RED also, but mainly for some automations like lights and sprinklers (for stray cats).

We should do some comparisons between the small and medium models. There was a big inference-speed difference between medium and large, though.
I always used large with the GPU as it could handle it without sweating.
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
613
Reaction score
282
Location
Uruguay
Medium YOLOv8: 15-20 ms round trip or so.
Large: 235 ms!!
Big difference. Not sure if it gives a proportional gain in accuracy.
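Numbers like these are easy to compare fairly with a small timing loop; `detect_fn` here stands in for whatever issues the CPAI request (a hypothetical wrapper, not part of CPAI):

```python
import statistics
import time

def benchmark(detect_fn, image, runs=20):
    """Time detect_fn(image) over several runs; return (mean_ms, p95_ms)."""
    times_ms = []
    for _ in range(runs):
        t0 = time.perf_counter()
        detect_fn(image)
        times_ms.append((time.perf_counter() - t0) * 1000.0)
    times_ms.sort()
    return statistics.mean(times_ms), times_ms[int(0.95 * (runs - 1))]
```

Run it once per model size against the same test image; the mean shows the medium-vs-large gap, and the p95 catches stalls such as watchdog restarts.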
 

freman

n3wb
Joined
Jul 3, 2020
Messages
16
Reaction score
10
Location
Australia
Might up mine to medium next time I feel like reinstalling cpai twice :) (I don't know if it's cpai or my system, lol)
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
613
Reaction score
282
Location
Uruguay
Might up mine to medium next time I feel like reinstalling cpai twice :) (I don't know if it's cpai or my system, lol)
Hardly any difference between the small and medium inference speeds tbh, but a big leap from medium to large.

Maybe if you want accuracy over speed then large is the way to go. I will play around with them over the coming days.
Large might be better at night. You could use AITool with one server set to large for night time and medium for daytime. Just a thought if you have two instances and two TPUs.

I'm still struggling to get two instances of CPAI set up on Unraid; even with the port changes, the second web GUI does not spin up.
 
Last edited:

mailseth

Getting the hang of it
Joined
Dec 22, 2023
Messages
126
Reaction score
88
Location
California
Honestly, I don’t really understand what would cause you to have to reinstall to change the model size, but I haven’t been running that part of the code so I can’t speak to it. It’s an area we’ve actively been working on, however, to try to reduce the CPAI bandwidth cost. So if you’re feeling handy with code and want to dig into what’s going wrong, you may find something serious that needs fixing.
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
613
Reaction score
282
Location
Uruguay
Honestly, I don’t really understand what would cause you to have to reinstall to change the model size, but I haven’t been running that part of the code so I can’t speak to it. It’s an area we’ve actively been working on, however, to try to reduce the CPAI bandwidth cost. So if you’re feeling handy with code and want to dig into what’s going wrong, you may find something serious that needs fixing.
I've only had to reinstall the Coral module when adding a second TPU device on the container side, or occasionally when changing from multi- to single-TPU mode. Model size I can change without issues, fortunately.
 