Some success with a Coral TPU (M.2) with CPAI and BI

Some interesting results testing the tiny, small, medium and large MobileNet SSD models with the same picture.
The small model found far more objects than all the other models, even though some were wrong!
 

Attachments: MobileNet Large.png, MobileNet Medium.png, MobileNet Small.png, MobileNet Tiny.png
The funny thing about accuracy measurements is that there are also so many false positives to get rid of, and everyone has a different opinion about the cost of a false positive.

It also just occurred to me that if anyone is setting up a system for development, it should be based off of a GitHub feature branch to make sure everyone is in sync. I’ll set one up later to work off of.
 
Very true about the false positives - sometimes it's a compromise: a small fluffy dog or cat may be recognised as a sheep or rabbit by the small model, but not detected at all by the medium or large model.
I'm trying to understand how to use some customised models from GitHub, but I don't yet understand how to implement and use them - like this one:

I only want to identify people, some cars, small dogs and cats.

A few years ago I created my own model with Python but can't remember how I did it. I'm going to start collecting images from my yard anyway so I can start to look into it.
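
Something like this filtering is roughly what I have in mind - just a sketch, assuming the usual CodeProject.AI-style response (a "predictions" list with "label" and "confidence" fields):

Code:
# Rough sketch (untested): keep only the classes I care about from a
# CodeProject.AI-style detection response and drop low-confidence hits.
WANTED = {"person", "car", "dog", "cat"}
MIN_CONFIDENCE = 0.45

def filter_detections(response: dict) -> list:
    return [
        p for p in response.get("predictions", [])
        if p.get("label") in WANTED and p.get("confidence", 0) >= MIN_CONFIDENCE
    ]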
 
Here's a feature branch to base any USB work off of; it will make it much easier to keep things in sync:

I wonder if the training set from that link could be adapted to train a YOLOv8 model? You should look into the docs here; they make it relatively easy:

I haven't done it myself - I don't have the right hardware, and I don't need any more distractions in my life. ;)
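
From a quick read of the Ultralytics docs, the loop looks roughly like the sketch below - untested on my part, and "my_yard.yaml" is just a placeholder for whatever dataset config you put together from your own images:

Code:
# Untested sketch based on the Ultralytics docs: train a YOLOv8 model on a
# custom dataset, then export it for the Coral Edge TPU.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                       # start from a pretrained nano model
model.train(data="my_yard.yaml", epochs=100, imgsz=320)
model.export(format="edgetpu")                   # produces a *_edgetpu.tflite file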
 
Well... I'm not sure if that went well or not...

Stopped the Coral module (thought I'd be nice to it)
Told it to download the YOLOv8 medium model
Got disturbed by "ObjectDetectionCoral went quietly" - seems ominous
Switched to the medium model from the menu
Clicked start

I'm getting the sense there's definitely some debugging still needed in the download/install code.

I'm not sure if it's working or not, given that it couldn't find the models and then failed to start multi-TPU, but it apparently succeeded with a single TPU.

Code:
13:37:17:Module ObjectDetectionCoral has shutdown
13:37:17:objectdetection_coral_adapter.py: has exited
13:37:18:Preparing to download model 'objectdetection-yolov8-medium-edgetpu.zip' for module ObjectDetectionCoral
13:37:18:Downloading module 'objectdetection-yolov8-medium-edgetpu.zip' to 'C:\Program Files\CodeProject\AI\downloads\modules\ObjectDetectionCoral\objectdetection-yolov8-medium-edgetpu.zip'
13:37:18: (using cached download for 'objectdetection-yolov8-medium-edgetpu.zip')
13:37:21: objectdetection-yolov8-medium-edgetpu.zip has been downloaded and installed.
13:37:35:ObjectDetectionCoral went quietly
13:41:26:Update ObjectDetectionCoral. Setting MODEL_SIZE=medium
13:41:26:Restarting Object Detection (Coral) to apply settings change
13:41:36:Update ObjectDetectionCoral. Setting AutoStart=true
13:41:36:Restarting Object Detection (Coral) to apply settings change
13:41:36:Running module using: C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\bin\windows\python39\venv\Scripts\python
13:41:36:
13:41:36:Attempting to start ObjectDetectionCoral with C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\bin\windows\python39\venv\Scripts\python "C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\objectdetection_coral_adapter.py"
13:41:36:
13:41:36:Module 'Object Detection (Coral)' 2.2.2 (ID: ObjectDetectionCoral)
13:41:36:Valid:         True
13:41:36:Module Path:   <root>\modules\ObjectDetectionCoral
13:41:36:Starting C:\Program Files...ws\python39\venv\Scripts\python "C:\Program Files...ectdetection_coral_adapter.py"
13:41:36:AutoStart:     True
13:41:36:Queue:         objectdetection_queue
13:41:36:Runtime:       python3.9
13:41:36:Runtime Loc:   Local
13:41:36:FilePath:      objectdetection_coral_adapter.py
13:41:36:Start pause:   1 sec
13:41:36:Parallelism:   16
13:41:36:LogVerbosity:
13:41:36:Platforms:     all
13:41:36:GPU Libraries: installed if available
13:41:36:GPU Enabled:   enabled
13:41:36:Accelerator:
13:41:36:Half Precis.:  enable
13:41:36:Environment Variables
13:41:36:CPAI_CORAL_MODEL_NAME = YOLOv8
13:41:36:CPAI_CORAL_MULTI_TPU  = True
13:41:36:MODELS_DIR            = <root>\modules\ObjectDetectionCoral\assets
13:41:36:MODEL_SIZE            = medium
13:41:36:
13:41:36:Started Object Detection (Coral) module
13:41:39:objectdetection_coral_adapter.py: ERROR:root:TFLite file C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\assets\yolov8m__segment_0_of_2_edgetpu.tflite doesn't exist
13:41:39:objectdetection_coral_adapter.py: WARNING:root:Model file not found: [Errno 2] No such file or directory: 'C:\\Program Files\\CodeProject\\AI\\modules\\ObjectDetectionCoral\\assets\\yolov8m__segment_0_of_2_edgetpu.tflite'
13:41:39:objectdetection_coral_adapter.py: WARNING:root:No Coral TPUs found or able to be initialized. Using CPU.
13:41:39:objectdetection_coral_adapter.py: ERROR:root:TFLite file C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\assets\yolov8m-416_640px.tflite doesn't exist
13:41:39:objectdetection_coral_adapter.py: WARNING:root:Unable to create interpreter for CPU using edgeTPU library: [Errno 2] No such file or directory: 'C:\\Program Files\\CodeProject\\AI\\modules\\ObjectDetectionCoral\\assets\\yolov8m-416_640px.tflite'
13:41:39:objectdetection_coral_adapter.py: MODULE_PATH:           C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral
13:41:39:objectdetection_coral_adapter.py: MODELS_DIR:            C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\assets
13:41:39:objectdetection_coral_adapter.py: CPAI_CORAL_MODEL_NAME: yolov8
13:41:39:objectdetection_coral_adapter.py: MODEL_SIZE:            medium
13:41:39:objectdetection_coral_adapter.py: TPU detected
13:41:39:objectdetection_coral_adapter.py: Running init for Object Detection (Coral)
13:41:39:objectdetection_coral_adapter.py: CPU_MODEL_NAME:        yolov8m-416_640px.tflite
13:41:39:objectdetection_coral_adapter.py: TPU_MODEL_NAME:        yolov8m-416_640px_edgetpu.tflite
13:41:39:objectdetection_coral_adapter.py: Attempting multi-TPU initialisation
13:41:39:objectdetection_coral_adapter.py: Failed to init multi-TPU. Falling back to single TPU.
13:41:39:objectdetection_coral_adapter.py: Input details: {'name': 'normalized_input_image_tensor', 'index': 0, 'shape': array([  1, 320, 320,   3]), 'shape_signature': array([  1, 320, 320,   3]), 'dtype': , 'quantization': (0.0078125, 128), 'quantization_parameters': {'scales': array([0.0078125], dtype=float32), 'zero_points': array([128]), 'quantized_dimension': 0}, 'sparsity_parameters': {}}
13:41:39:objectdetection_coral_adapter.py: Output details: {'name': 'TFLite_Detection_PostProcess', 'index': 6, 'shape': array([  1, 100,   4]), 'shape_signature': array([  1, 100,   4]), 'dtype': , 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}
13:41:39:objectdetection_coral_adapter.py: Using Edge TPU

Edit: Despite all that it seems to be magically working lol

Code:
Status Data:  {
  "inferenceDevice": null,
  "inferenceLibrary": "TF-Lite",
  "canUseGPU": "false",
  "successfulInferences": 135,
  "failedInferences": 25,
  "numInferences": 160,
  "averageInferenceMs": 10.548148148148147
}
 
Nice!

They've made decent improvements, as it now appears to work.
Still buggy though - I'm not sure what model it is really using, as the dashboard always says MobileNet SSD.

I switched to YOLOv5, but the inference speeds all seem to be about the same for MobileNet, YOLOv8 and YOLOv5.
I'd like to know how to swap in and add custom .tflite models to the modules. I tried playing around with the configs/JSONs in VS Code last night but got a bit out of my depth.
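
As a side note, one way to sanity-check a custom .tflite outside of CPAI would be to load it directly with pycoral - a rough, untested sketch, with the model path just a placeholder:

Code:
# Rough, untested sketch: load a compiled *_edgetpu.tflite directly with pycoral
# to confirm the TPU accepts it, independently of the CPAI module config.
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter("custom_model_edgetpu.tflite")   # placeholder path
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["shape"])              # expected input size, e.g. [1, 320, 320, 3]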

Code:
Environment Variables
CPAI_CORAL_MODEL_NAME = YOLOv5
CPAI_CORAL_MULTI_TPU  = True
MODELS_DIR            = <root>/modules/ObjectDetectionCoral/assets
MODEL_SIZE            = medium

{
  "inferenceDevice": "TPU",
  "inferenceLibrary": "TF-Lite",
  "canUseGPU": "false",
  "successfulInferences": 797,
  "failedInferences": 3,
  "numInferences": 800,
  "averageInferenceMs": 8.60978670012547
}
 
I've got a wishlist item...

Code:
11:46:38:Response rec'd from Object Detection (Coral) command 'detect' (...b6725b) ['']  took 5ms
11:46:38:Response rec'd from Object Detection (Coral) command 'detect' (...d03a99) ['Found umbrella, train']  took 18ms
11:46:39:Response rec'd from Object Detection (Coral) command 'detect' (...8cf178) ['Found umbrella, person']  took 17ms
11:46:40:Response rec'd from Object Detection (Coral) command 'detect' (...1c839c) ['Found umbrella, person']  took 14ms
11:46:40:Response rec'd from Object Detection (Coral) command 'detect' (...0ef0b0) ['Found train']  took 15ms
11:46:41:Response rec'd from Object Detection (Coral) command 'detect' (...9fb238) ['No objects found']  took 15ms
11:46:43:Response rec'd from Object Detection (Coral) command 'detect' (...2605a7) ['']  took 4ms
11:46:43:Response rec'd from Object Detection (Coral) command 'detect' (...499817) ['Found umbrella, umbrella']  took 16ms
11:46:43:Response rec'd from Object Detection (Coral) command 'detect' (...231f5d) ['Found umbrella, person']  took 15ms

Sure would be nice if I could send a little metadata over the API - say, the camera name - so I can see which camera is seeing umbrellas. I know I could probably pull it off with AI Tool, but just getting that note in the logs would make it so much easier to figure out which of the 11 cameras is a good candidate for some tweaking.
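
In the meantime, a workaround would be to call CPAI directly and keep the camera name in my own log - a sketch, assuming CPAI's standard /v1/vision/detection endpoint on the default port 32168:

Code:
# Workaround sketch (untested): call CPAI directly and log the camera name on
# my side, since the BI -> CPAI request doesn't carry it.
import requests

def detect(image_path: str, camera: str):
    with open(image_path, "rb") as f:
        r = requests.post(
            "http://localhost:32168/v1/vision/detection",
            files={"image": f},
            data={"min_confidence": 0.4},
        )
    labels = [p["label"] for p in r.json().get("predictions", [])]
    print(f"[{camera}] found: {', '.join(labels) or 'nothing'}")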
 
That would be an API change between BI and CPAI, I’d guess. BI would need to send the info to CPAI in order for it to appear in anything on the CPAI side, and I don’t think it does. I’m not familiar with the details of the API, but I haven’t seen anything like that in there.
 
That would be an API change between BI and CPAI
Oh yeah, no doubt - just pondering reaching out to BI to see what they think. I could maybe also make it work over MQTT (not 100% sure whether detections are sent over MQTT if they're not in my "wanted" list).

But another reason to consider AI Tool might be that it keeps a copy of the detection image, which would be better for my Pushover notifications - they seem happy to notify me of things now, but send the pictures way too long after the fact.
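
If I go the MQTT route, the first step would be to just subscribe and see what BI actually publishes - a sketch with paho-mqtt (1.x-style client; the broker address and "BlueIris/#" topic are placeholders for whatever is configured in BI):

Code:
# Quick sketch to eyeball what Blue Iris publishes over MQTT. Broker address and
# topic are placeholders; written against the paho-mqtt 1.x callback style.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode(errors="replace"))

client = mqtt.Client()
client.on_message = on_message
client.connect("192.168.1.10")      # broker address - placeholder
client.subscribe("BlueIris/#")      # depends on your BI MQTT settings
client.loop_forever()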
 
After several days of tweaking and some frustration, I may give in and put my GPU back in.
Can't fault the TPU's speed and low power consumption, but the models are just not working the way I want them to. Fine if you just want to identify people in good light.
My NVIDIA GPU was spot on 90% of the time.
 
Blue Iris license blocked when trying to reapply an old config!! Sent an email to support.

Anyway, with my GTX 970 I have found YOLOv5 3.1 to be the most effective. I tried v8.0 and it did not pick up small animals, possibly due to the lack of custom models? It only had the general model. 3.1 has ipcam-combined, which works great in my scenario.
Just a few more bucks on the utility bill each month.
 
Most of the time my GTX 970 idles in the P8 state at about 24 W, according to the Unraid plugin - maybe less than that, from what I've read.
If it runs at 24 W over 24 hours, the calculated extra electricity cost here is about 3 pesos a day (the day is split into three different rates depending on the hour!):
11 pesos per kWh for 4 hours
5 pesos per kWh for 13 hours
2.298 pesos per kWh for 7 hours

So about 2.3 USD over 31 days plus VAT (rough working below).
Less than a cheap bottle of wine.
Think I can afford that. The accuracy is far superior. Very impressed with it really.
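
Rough working, for anyone checking my maths (the ~40 pesos/USD rate is just back-figured from my own numbers above):

Code:
# Rough check of the GPU idle cost, assuming a constant 24 W draw and the
# three time-of-use tariffs quoted above (pesos per kWh, hours per day).
idle_watts = 24
tariffs = [(11.0, 4), (5.0, 13), (2.298, 7)]

daily_pesos = sum(rate * hours * idle_watts / 1000 for rate, hours in tariffs)
monthly_pesos = daily_pesos * 31
print(f"~{daily_pesos:.1f} pesos/day, ~{monthly_pesos:.0f} pesos over 31 days (ex. VAT)")
# ~3.0 pesos/day, ~93 pesos/month -> roughly USD 2.3 at ~40 pesos/USD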
 
Good to know. What sort of video stream load are you putting on it? (Average FPS?)
All of the time, 6 of the exterior cameras; sometimes the interior ones too when I'm out. FPS are all pretty low, but high FPS isn't really needed for the analysis.
But with 6 of them using the GPU, it flies through the images at less than 50 ms:

Code:
13:28:11:Response rec'd from Object Detection (YOLOv5 3.1) command 'detect' (...0db89b) ['No objects found'] took 42ms
13:28:11:Object Detection (YOLOv5 3.1): Detecting using ipcam-combined
13:28:11:Response rec'd from Object Detection (YOLOv5 3.1) command 'custom' (...018312) ['No objects found'] took 18ms
13:28:11:Response rec'd from Object Detection (YOLOv5 3.1) command 'detect' (...280c84) ['No objects found'] took 41ms
13:28:12:Object Detection (YOLOv5 3.1): Detecting using ipcam-combined
13:28:12:Response rec'd from Object Detection (YOLOv5 3.1) command 'custom' (...d51652) ['No objects found'] took 16ms



 
If I'm reading that correctly, that's ~700 frames analyzed (motion triggers) over 6 hours, for around 1 frame every 30 seconds? How many frames does a motion trigger cause to be analyzed?