Search results

  1. CodeProject.AI Version 2.5

    So, for a good benchmark you'd need to add interpolation=cv.INTER_LANCZOS4 to the cv2 call, and add a pillow comparison that uses thumbnail instead of resize, test that, and then install pillow-simd (w/ AVX) and test it again. Edit: Looking a bit further, you might want to try the...
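    A fair comparison along those lines needs a consistent timing harness. Here's a stdlib-only sketch; the stub below just simulates work so the snippet is self-contained, and in a real run you'd pass in `cv2.resize(..., interpolation=cv2.INTER_LANCZOS4)`, a Pillow `thumbnail()` call, or the pillow-simd equivalents instead:

    ```python
    import time

    def benchmark(resize_fn, image, n_iters=10):
        """Time n_iters calls of resize_fn on image; return best-of-n in ms.

        Best-of-n is less noisy than the mean for short operations.
        """
        best = float("inf")
        for _ in range(n_iters):
            start = time.perf_counter()
            resize_fn(image)
            best = min(best, time.perf_counter() - start)
        return best * 1000.0

    # Stub standing in for a real resize call such as
    # cv2.resize(img, (640, 640), interpolation=cv2.INTER_LANCZOS4)
    # or a Pillow img.thumbnail((640, 640)) call.
    def stub_resize(image):
        return [row[::2] for row in image[::2]]  # naive 2x decimation

    fake_image = [[0] * 256 for _ in range(256)]
    ms = benchmark(stub_resize, fake_image)
    ```

    The same harness run against each candidate keeps the comparison apples-to-apples.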
  2. CodeProject.AI Version 2.5

    Also, the latest Coral pillow code uses the thumbnail() call: https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.thumbnail Which cheats a bit (see "reducing gap") to downsample a large image to a much smaller one. I don't know if there is an equivalent OpenCV operation.
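    The "cheat" is roughly: do a cheap integer box-filter reduction first, then apply a high-quality resample only for the small final step. A pure-Python sketch of the box-reduction stage (Pillow's real implementation is optimized C, and its `reducing_gap` parameter controls how far this stage goes):

    ```python
    def box_reduce(img, factor):
        """Downsample a 2D grayscale image by an integer factor,
        averaging each factor x factor block (a box filter).
        A thumbnail-style pipeline does a cheap reduction like this
        first, then a proper filtered resample for the last step.
        """
        h = len(img) // factor
        w = len(img[0]) // factor
        out = []
        for y in range(h):
            row = []
            for x in range(w):
                block = [
                    img[y * factor + dy][x * factor + dx]
                    for dy in range(factor)
                    for dx in range(factor)
                ]
                row.append(sum(block) / len(block))
            out.append(row)
        return out

    # A constant 4x4 image reduces to a constant 2x2 image.
    small = box_reduce([[8] * 4 for _ in range(4)], 2)
    ```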
  3. CodeProject.AI Version 2.5

    Looks like pillow resize ANTIALIAS is the LANCZOS filter, while cv2 resize uses a linear filter by default; those are basically on opposite ends of the performance spectrum. Also, the version of pillow used isn't the SIMD one. So it's a bit of an apples-to-oranges comparison. Still, even on the...
  4. CodeProject.AI Version 2.5

    Do you know what versions you’re looking at there? On their performance metrics page they have OpenCV listed as being a bit slower for image resizing. It might be version dependent. https://python-pillow.org/pillow-perf/
  5. CodeProject.AI Version 2.5

    If folks want to run large images and have CPAI downsample them, I’d suggest figuring out how to install the ‘pillow-simd’ package on Windows and make step-by-step instructions. It does the image resizing significantly faster using the AVX instruction set which is a large win for larger images...
  6. CodeProject.AI Version 2.5

    I’d guess that the main stream is the highest quality, but it’s probably not worth it because it’s going to take some time to work with a large image. For example, a 4k image on my computer was taking 15 ms just to resize to the size of the input tensor. It is pretty resource intensive to...
  7. CodeProject.AI Version 2.5

    What size model? You generally don’t see serious multi-segment gains until you get into medium and large YOLO-based models. It’s still a work in progress, but I just got the latest TPU pipeline code merged, which should give it a lot more options for balancing segment usage between TPUs. I...
  8. CodeProject.AI Version 2.5

    I should start a poll for Coral interface type vs number of problems vs operating system. I’m beginning to suspect that a lot of it has to do with the OS also.
  9. CodeProject.AI Version 2.5

    If you do go with Linux for CPAI, I’ve been developing the TPU code on Ubuntu 20.04 and things have been relatively rock solid. I had some problems with some of the tooling under 22.04.
  10. CodeProject.AI Version 2.5

    Thanks for the bug reports. I know there has been some reworking of which models get run, when, and their file names. Hopefully that is fixed in the next release. Do you see any log lines like "Loading pci:0: <filename>"? That should tell you exactly which model file is being read. If that says...
  11. CodeProject.AI Version 2.0

    The default model for Coral is very small and fast. It’s designed to fit entirely on the TPU with little additional computing. It’s great if you want to analyze at 130 FPS. But it’s not all that accurate.
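    For context, 130 FPS implies a per-frame time budget of under 8 ms end to end, which is why only a small model that fits entirely in the TPU's cache can keep up:

    ```python
    fps = 130
    budget_ms = 1000.0 / fps  # time available per frame, in milliseconds
    # roughly 7.7 ms; a model that infers in a few ms fits comfortably,
    # while anything spilling off the TPU cache will blow the budget
    ```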
  12. CodeProject.AI Version 2.0

    What model are you using? Have you tried a different/newer/larger model? Ideally we will have the YOLOv8 IPcam models soon, which will probably be the best option.
  13. CodeProject.AI Version 2.0

    Yeah, the default YOLO models are looking for irrelevant things ranging from a bench to broccoli. My ideal model would be something based on YOLOv8 medium that does everything (all IPcam labels + plate labels + fire detection, etc). Running a medium model is roughly the compute cost of running a...
  14. CodeProject.AI Version 2.0

    An answer to that question is going to entirely depend on your setup. How many models, cameras, and FPS, for starters. What sort of accuracy are you looking for, what hardware are you plugging it into, and how cost sensitive are you? Personally, I’d say go for it and see if it works for you.
  15. YOLO v8 issue with Coral TPU

    I'm actually running these tests with the objectdetection_coral_mulitpu.py file on the CLI, independent of most of the rest of CPAI. It's just calling the tpu_runner.py file for my local testing purposes. (I don't even have the CPAI server running on my machine right now.) So I'm running the...
  16. YOLO v8 issue with Coral TPU

    I'm using a pre-pre-release version. Normal pre-release versions can be found here: https://github.com/MikeLud/CodeProject.AI-Custom-IPcam-Models/tree/main/TensorFlow%20Edge%20TPU%20Models/YOLOv8/custom-models
  17. Error with Coral M.2 TPU and Yolov8 Medium model

    With that said, I might have made a mistake with the YOLOv8 image scaling. I fixed a bug this afternoon and am not sure when it will get rolled out. Try running YOLOv5 for now if you’re getting poor v8 results.
  18. Error with Coral M.2 TPU and Yolov8 Medium model

    Looks like the file is named wrong. There is no file named: yolov8m-int8_edgetpu.tflite I think this is the correct fix: https://www.codeproject.com/Messages/5995426/Re-Yolo-v8-will-only-use-tiny-or-small-on-TPU-Reso
  19. CodeProject.AI Version 2.0

    Yeah, pulling heat off the bottom isn't nearly as important as a heatsink on the top. I figured with the big heatsink, it wouldn't hurt to add some structure on the bottom, too. If anything, I wish I'd used a slightly thicker thermal pad for better support. Also, of note, you may want to invest...
  20. CodeProject.AI Version 2.0

    I saw that M.2 adapter, but decided that I’d fill my PCIe slots first if I wanted to max my machine out. It’s just an HP EliteDesk G4 800 I got cheap on eBay. Close. I put the pads under the Dual card to serve as extra heat dissipation. Then on top of the Dual card I put copper heat spreaders...
  21. CodeProject.AI Version 2.0

    If you’re upgrading to a dual, you may want to consider a heat sink on your Dual + Adapter. I overheated it pretty fast and eventually ended up with one of these: https://a.co/d/cdIqqBf And you’d need one of these too, cut to fit the chips: https://a.co/d/68TysVG And drilled out some holes in...
  22. CodeProject.AI Version 2.0

    I actually have a total of eight TPUs in my machine. Two M.2 cards and three dual TPU setups (w/ three adapters). The intent was to split them between a recorder at my off-grid cabin and one at home, but I seem to have gotten side tracked playing with TPU development. So I’m not running BI/CPAI...
  23. CodeProject.AI Version 2.0

    I’d personally recommend at least two or three TPUs with a YOLO medium model size. That way you can fit more of the 24 MB model into multiple onboard 8 MB TPU caches. See my post here measuring performance: https://ipcamtalk.com/threads/yolo-v8-issue-with-coral-tpu.74987/#post-838571
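    The cache arithmetic behind that recommendation can be sketched directly, assuming the figures quoted above (a 24 MB medium model, 8 MB of usable on-chip cache per TPU) and an idealized even split; real segmentation is coarser than this:

    ```python
    def cached_fraction(model_mb, num_tpus, cache_mb_per_tpu=8):
        """Fraction of a model's weights that can live in on-chip TPU
        caches when the model is segmented across num_tpus devices.
        Idealized: assumes segments split evenly and caches fill fully.
        """
        return min(1.0, (num_tpus * cache_mb_per_tpu) / model_mb)

    # A 24 MB YOLO-medium model: one TPU caches only a third of it,
    # while three TPUs can hold all of it on-chip.
    one = cached_fraction(24, 1)
    three = cached_fraction(24, 3)
    ```

    Whatever doesn't fit on-chip has to stream over the bus each inference, which is where the multi-TPU speedup comes from.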
  24. CodeProject.AI Version 2.0

    Yeah. I’m both surprised by what’s there and what isn’t. There’s a large target audience that isn’t us.
  25. CodeProject.AI Version 2.0

    You can see the MobileNet and EfficientDet models listed out here. https://www.coral.ai/models/object-detection/
  26. CodeProject.AI Version 2.0

    All of the non-custom models use the COCO labels. https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/ There are many irrelevant labels in there, thus the custom models have pruned them.
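    At its simplest, pruning is just a label filter over the detections. A sketch with a hypothetical `RELEVANT` set (only a handful of the 80 COCO classes shown; a real custom model is retrained with the pruned label set rather than filtered after inference):

    ```python
    # A few of the 80 COCO classes (full list at the link above).
    COCO_LABELS = [
        "person", "bicycle", "car", "motorcycle", "bus", "truck",
        "bench", "bird", "cat", "dog", "broccoli", "pizza", "couch",
    ]

    # Hypothetical IP-cam-relevant subset for illustration.
    RELEVANT = {"person", "bicycle", "car", "motorcycle", "bus", "truck",
                "cat", "dog"}

    def prune(detections):
        """Drop detections whose label isn't in the relevant set."""
        return [d for d in detections if d["label"] in RELEVANT]

    kept = prune([
        {"label": "person", "confidence": 0.9},
        {"label": "broccoli", "confidence": 0.8},
    ])
    ```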
  27. CodeProject.AI Version 2.0

    The MobileNet SSD models take roughly 10 ms. The total time may be larger for other reasons. Large models like YOLOv5 large may take over 1000 ms, and can also only fit a fraction of the model on the TPU. So timing will vary.
  28. CodeProject.AI Version 2.0

    I don’t believe that the custom IPcam models are easy to get going on Coral yet. Hopefully soon, however.
  29. CodeProject.AI Version 2.0

    Well, the inference time sounds right. Not sure what it’s doing the rest of the time. Resizing a 4K image to the input tensor size takes roughly 10 ms. (Should be even faster in the latest version.)
  30. CodeProject.AI Version 2.0

    What model are you running?