Sharing setups with CodeProject AI on Coral hardware (Spring 2024)

You should always be able to use the multi-TPU implementation.

Is there any way to tell the difference between the two?

I do see the multi-TPU environment variable:

21:52:10: CPAI_CORAL_MULTI_TPU = True

My startup logs show the attempt and then the fall-back messages.

21:52:12 objectdetection_coral_adapter.py: TPU detected
21:52:12 objectdetection_coral_adapter.py: Attempting multi-TPU initialisation
21:52:12 objectdetection_coral_adapter.py: Failed to init multi-TPU. Falling back to single TPU.
21:52:12 objectdetection_coral_adapter.py: Using Edge TPU
 
It looks like you are attempting to use multi-TPU by default, but something is wrong so it falls back to the old method. If you’re feeling adventurous you could dig into it and figure out what’s wrong and let me know what to change. (But it may be just the installed coral library version, which is harder to fix.)
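For reference, the pattern those log lines describe looks roughly like this. This is only a sketch using the public pycoral calls, not the actual objectdetection_coral_adapter.py code (which uses segmented models), and the model path is a placeholder:

# Rough illustration only -- not the actual objectdetection_coral_adapter.py code.
# Shows the general "try multi-TPU, fall back to single TPU" pattern the log suggests.
import logging
from pycoral.utils.edgetpu import list_edge_tpus, make_interpreter

MODEL_PATH = "ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite"  # placeholder

def init_interpreters(model_path: str):
    tpus = list_edge_tpus()
    logging.info("TPU detected" if tpus else "No TPU found")

    if len(tpus) > 1:
        try:
            logging.info("Attempting multi-TPU initialisation")
            # One interpreter per device; the real adapter splits a segmented
            # model across devices, so this is only the fallback structure.
            interpreters = [make_interpreter(model_path, device=f":{i}")
                            for i in range(len(tpus))]
            for interp in interpreters:
                interp.allocate_tensors()
            return interpreters
        except Exception as ex:
            logging.warning("Failed to init multi-TPU (%s). Falling back to single TPU.", ex)

    logging.info("Using Edge TPU")
    interpreter = make_interpreter(model_path)
    interpreter.allocate_tensors()
    return [interpreter]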
 
I'm months behind on current projects, so I don't want to slide down the rabbit hole on a new adventure. :lol:

But should I trip into one as I'm walking around, what is the best setup? Mind you, I haven't coded in over a decade (besides a couple of small VBScript or JavaScript files to help someone automate a task). I used to use MS Visual Studio on Windows back in the day, but I know that's old and has changed dramatically. I'm not opposed to Linux even though it would be more of a learning curve for me (I mean, I would have already tripped into the rabbit hole, right? :lol:).

Are most developers using Windows or Linux? Also, what IDE are they using?
 
The Coral TPU code is all in Python, although some of the failures may be related to the setup configuration/environment. There really isn’t any hardcore coding that you’d need to do, just figuring out how to make the existing code run more reliably via trial and error: things like refining the logic to ensure the watchdog is working correctly and recovering from errors.
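Something along these lines, for example. It's only a rough sketch of the watchdog/retry idea; the function names are made up for illustration and it's not the actual module code:

# Minimal sketch of a watchdog/retry wrapper (hypothetical names, not CPAI code).
import logging
import time

def run_with_watchdog(do_inference, reinit_tpu, max_failures=3, cooldown_secs=5.0):
    """Call do_inference(); if it keeps failing, re-initialise the TPU and retry."""
    failures = 0
    while True:
        try:
            result = do_inference()
            failures = 0          # a success resets the failure counter
            return result
        except RuntimeError as ex:
            failures += 1
            logging.warning("Inference failed (%d/%d): %s", failures, max_failures, ex)
            if failures >= max_failures:
                logging.error("Too many failures; re-initialising the TPU")
                reinit_tpu()
                failures = 0
            time.sleep(cooldown_secs)  # give the device a moment before retrying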

Or logic like this:

Or make sure that the model segment download or module install code is working well, but I didn’t write that so I’m less familiar.

Edit: and I’d just focus on making sure it runs well on your setup first.
 
As far as an IDE goes, I’ve used Eclipse, BBEdit, Sublime, and most recently VS Code. On the Linux machine I’ve done my Coral development on, I’m using Sublime and the default Linux text editor (the simple text editor?). And Emacs sometimes. Or vi if I have to. And sometimes I’ll work on prototype code using the GitHub editing interface. Anyway, it depends.
 
Yeah, I think most of the issues are with the installer and not the actual execution/"running".

I wasn't sure how the debugging works. I used to be able to put breakpoints in and step through code.

I've been hesitating, since the only problem I really have is that it says it's using the model I chose but I don't think it really is (based on testing). However, it is a pain point, since I only really care about people on certain cameras and it is currently marking a lot of cars as people for some reason. In other words, a car will drive by and it gets marked as a person.

If I fall in the rabbit hole you may hear me screaming for help. :lol:
 
I’ve been doing this for many years, but still prefer print statements over breakpoints for whatever reason. I’ve been doing all of my testing using a script to run and test the tpu_runner.py code, so I’m not really even that familiar with how things interact within CPAI, and I may not be of much use.
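If you want to poke at it the same way, a standalone harness is only a few lines. This isn't my actual test script, just a rough example of timing a Coral model outside of CPAI; the model and image paths are placeholders:

# Rough standalone timing harness for a Coral model (placeholder paths).
import time
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter("efficientdet_lite3_512_ptq_edgetpu.tflite")  # placeholder
interpreter.allocate_tensors()

image = Image.open("test.jpg").convert("RGB").resize(common.input_size(interpreter))
common.set_input(interpreter, image)

start = time.perf_counter()
for _ in range(50):                      # run a burst to get a stable average
    interpreter.invoke()
elapsed = (time.perf_counter() - start) / 50

for obj in detect.get_objects(interpreter, score_threshold=0.4):
    print(obj.id, obj.score, obj.bbox)
print(f"avg inference: {elapsed * 1000:.1f} ms")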
 
Software configuration
  • CP.AI version: 2.6.2
  • Modules: Object Detection (Coral) v2.2.2
  • Model: medium YOLOv5. Note: my BI configuration says "Default Object Detection: small". I manually changed the model from small to medium within the CodeProject control panel; I probably should have changed this from within BI itself.
Other
I didn't investigate this enough to realize that there could be issues with single-TPU implementations, specifically the error "Unable to run inference: There is at least 1 reference to internal data". From Googling this, I think it means the TPU is already busy and is unable to process the current request.
I checked the CodeProject.AI log window and noticed what appeared to be near-constant analysis.
I reduced this by checking my BI settings: I had "Static object analysis" turned on, and turning it off reduced the issue significantly. I'd guess that the dual-TPU implementations don't see this nearly as much?
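For what it's worth, one way a single-TPU path can avoid those overlapping requests is to serialise calls to the interpreter. This is just a sketch of the idea, not what CPAI actually does:

# Sketch: only one thread may touch the interpreter at a time, so a second
# request waits instead of hitting "reference to internal data" errors.
import threading

_tpu_lock = threading.Lock()

def run_inference(interpreter, set_input, get_results, image):
    with _tpu_lock:
        set_input(interpreter, image)
        interpreter.invoke()
        return get_results(interpreter)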

Great share and awesome feedback. I'll add in your findings about BI config to the top post. Good note that static object analysis should be disabled to prevent flooding requests. In my case, I'm not having BI control CPAI, so I wasn't able to adjust the model size from there. Good reminder about that too.
 
Good note that static object analysis should be disabled to prevent flooding requests. In my case, I'm not having BI control CPAI, so I wasn't able to adjust the model size from there. Good reminder about that too.

This is documented in the CodeProject.AI Blue Iris FAQ here: Blue Iris Webcam Software - CodeProject.AI Server v2.6.2, under the section marked "CodeProject.AI Server log shows requests every minute or less when there is no motion detection".
 
I think static object analysis is dependent on the user/camera needs.

For example, I have a couple of cameras that cover my driveway, which can have a couple of cars parked in it. On those cameras I am tracking person, car, truck, cat, and dog. Without the static object analysis, if there was motion, like a bug flying across the camera, it would detect my parked cars and trigger an alert on the cars. So in that case I have static object detection turned on to prevent false positives. This was a real PITA with some previous versions of BI where it wasn't working correctly. You may also want this if you have a camera in a garage with a parked car that may have motion such as a bug (especially if using IR, which can attract the bugs).

Conversely, on my backyard and indoor cameras I do not have any static objects that would trigger, so I do not have it turned on for those cameras. Oddly enough, there was a stuffed dog in my living room once that would trigger when the shadows caused motion (large window with lots of sun). So again, you have to evaluate based on your camera's situation.

One last thought is just how many cameras are configured for static object analysis. If it's only a couple (like in my case), you shouldn't be flooding CPAI. If you have a lot of cameras configured (like a parking garage), then you will have a lot of traffic for CPAI. I would say turn it off and see how many false positives you get to determine if you need it.
 
This is documented in the CodeProject.AI Blue Iris FAQ here: Blue Iris Webcam Software - CodeProject.AI Server v2.6.2, under the section marked "CodeProject.AI Server log shows requests every minute or less when there is no motion detection"

LOL, oops. I didn't even know a FAQ existed! Added this reminder to the top post in case anyone is lost in the future.


I think static object analysis is dependent on the user/camera needs.

100%. Everyone's environment is different, so it's all about tuning things to fit your needs.
 
The Coral TPU code is all in Python, although some of the failures may be related to the setup configuration/environment. There really isn’t any hardcore coding that you’d need to do, just figuring out how to make the existing code run more reliably via trial and error: things like refining the logic to ensure the watchdog is working correctly and recovering from errors.

Or logic like this:

Or make sure that the model segment download or module install code is working well, but I didn’t write that so I’m less familiar.

Edit: and I’d just focus on making sure it runs well on your setup first.

@mailseth

A couple of quick questions.

1. Is the installer part of the source code that can be downloaded from the CPAI website? It seems like most of the problems are usually with the installer/installation. I upgraded to 2.6.5 and am having issues with it not saving the autostart setting, reverting to CPU, and some other errors I can't remember off the top of my head. The Coral module works fine once I can get it running.

2. I know the TPU is limited on memory (and therefore the size of model it can hold), but are there any plans to make it work for the LPR module, even if the prerequisite is that you need multiple dedicated TPUs? For example, you'd need n TPUs just for the LPR module, and if you wanted to use other TPUs for other modules you'd install them on a separate machine. For example, using VMs: one VM for regular object detection with TPUs passed through, and a second VM for LPR with separate TPUs passed through.

Just curious, as I want to use LPR in the very near future and I would need a GPU for that based on the current requirements. If I have to go the GPU route then I may need to build a new rig, since I can't really fit any GPUs in my Dell OptiPlex SFF or EliteDesk. I need to determine whether I bite the bullet now, or wait and then maybe have to migrate everything later to a new machine.

Thanks
 
I'm afraid that the module file management & installation stuff is all outside of what I've worked on. I've looked into it before, but haven't made sense of it, unfortunately. I just send Chris a zip file of the compiled models and he has been setting things up to fit in the larger CPAI system.

I don't think that LPR will ever fully run on a TPU because the logic is more complicated and TPUs work best running a single model (not swapping between many models). So it would work to port the LPR over, but it may still not be the best use of TPU resources.

I've been working on a custom model that I think will make a good general use case for security cameras. It includes everything from ipcam-combined, but also includes classes like Fire that I'm going to find useful for my future projects. I've also included classes for Vehicle registration plate and Human face. My intention is that this single model can be run on the TPU, and then someone can allocate work as required to other models running on another device like the CPU or GPU.

I expect that most of the LPR compute time is spent running plate localization to get a bounding box around the plate, and then OCR is run later only if it finds a car and a plate. So my hope is that the TPU is able to do the heavy lifting of running the common-case model, but this is something that currently has no support anywhere besides the model I'm still working on training. I don't know what the BI/CPAI logic would look like.
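Very roughly, the split I'm picturing would look something like this. It's purely hypothetical: the plate-detector model name and the ocr_plate() step don't exist in CPAI today, they're stand-ins for whatever would actually get wired up:

# Hypothetical two-stage sketch: plate localisation on the TPU, OCR on crops only.
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

detector = make_interpreter("plate_detector_edgetpu.tflite")  # hypothetical model
detector.allocate_tensors()

def ocr_plate(crop: Image.Image) -> str:
    """Placeholder for a CPU/GPU OCR step (whatever the LPR module would use)."""
    raise NotImplementedError

def read_plates(image: Image.Image, threshold: float = 0.5):
    scaled = image.resize(common.input_size(detector))
    common.set_input(detector, scaled)
    detector.invoke()
    plates = []
    for obj in detect.get_objects(detector, score_threshold=threshold):
        box = obj.bbox  # bbox is in the resized image's coordinates
        crop = scaled.crop((box.xmin, box.ymin, box.xmax, box.ymax))
        plates.append(ocr_plate(crop))  # OCR only runs when a plate was found
    return plates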
 
I've found that the Coral small model isn't as accurate for my use case (people detection) as what I had before when I was using ipcam-general.

I had significantly more false-positive triggers than with ipcam-general, so I've had to increase the confidence right up (70%) and the number of images (9), and I still miss triggers.
So better CPU utilization overall, but not quite a 1:1 replacement for what I had... yet.
 
@mailseth, thanks for your work on this project. I'm struggling to understand whether MikeLud's ipcam-general.tflite model can be used with CPAI.

If yes,
  • what is the API call to invoke it?
  • in what coral directory should it be stored?

Thanks again
 
The short answer is no, that model needs to be reworked to run on Coral. The long answer is yes, I’ve done so and you can download it here:

Unfortunately, I don’t have a good guide for getting it running in the current version of CPAI. You’re welcome to download and play with the model, however. Or any model in that repo, like YOLOv9. If you’re feeling ambitious with some code, I’d love to see someone submit some GitHub pull requests against my current code, which has been in a state of purgatory since about June.
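For what it's worth, once a custom model is installed and registered with the server, invoking it is normally just a POST to the server's custom-model endpoint. A minimal example, assuming the default port (32168) and a model registered under the name ipcam-general; adjust both to match your install:

# Minimal example of calling a CPAI custom model (default port and name assumed).
import requests

with open("test.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:32168/v1/vision/custom/ipcam-general",
        files={"image": f},
    )

for prediction in response.json().get("predictions", []):
    print(prediction["label"], prediction["confidence"])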