Hey ipcamtalk. Sorry if this has been brought up, but things move quickly, so I just wanted to get some opinions on the best hardware accelerator options for under $100 USD. I was previously looking at the Coral M.2 since I have one lying around, but it won't work in my current Intel-based system. I can spare one PCIe slot in my system, so I'd like to ask what the best options would be right now. A used Nvidia Quadro? USB stick? Something else I'm not aware of?
I don't really have a target inference time, just something better than the CPU or 630 iGPU, which is currently around 150-200ms. I guess 50ms or faster would be nice?
Thanks!
Edit: This conversation shifted to focus mainly on Google Coral TPU setups, so I'm editing the title accordingly.
Edit (5/11/2024): Here's the Coral/CP.AI setup I've settled with for now. If you're new to BlueIris and CP.AI, remember to read this before starting: FAQ: Blue Iris and CodeProject.AI Server
Hardware
- Coral M.2 dual TPU. There's also an option for a single TPU, plus other form factors. The USB version has been documented to be unstable; the M.2 version seems to be the most stable and performant.
- PCIe adapter: You can get a generic adapter, but only one of the two TPUs will work. This specific adapter supports both: Dual Edge TPU Adapter - m.2 2280 B+M key
- CP.AI version: 2.6.2
- Modules: Object Detection (Coral) v2.2.2
- Model: Medium yolov5 (according to the UI, though it may actually be using a different model?)
- Per @mailseth, it's suggested to enable multi-TPU support for better stability, even if you're running a single-TPU card
- BI version: 5.9.1.0
- Main config:
- Auto-start with Blue Iris: unchecked (this is just my preference; I run CP.AI as a standalone Windows service on the same box, which works around the delayed-startup bug with BI)
- Use custom models: unchecked (no idea if this makes any difference)
- Default object detection: checked
- Per camera config:
- To confirm: Same as the CPU version: whatever you want to scan for as usual. In my case, I'm using "person,car,truck,bicycle"
- Custom models: Blank. I had to remove all my custom models to get the TPU version working. I wanted to scan for just vehicles and people and disable default object detection with "objects:0", but that caused all confirmations to fail.
- Static object analysis: Because you have to keep custom models blank, there's no way to filter out stuff you don't care about, and the yolov5 Coral model may turn up tons of results such as "potted plant, banana, banana." This means that if you keep static object analysis on, you may flood the TPU with too many requests. Unless you need it, it's probably best to keep it off. Thanks to @koops for this reminder.
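Since custom models must stay blank, one workaround is to filter labels yourself when hitting CP.AI directly (e.g., for testing outside BI). A minimal Python sketch, assuming a default CP.AI install on localhost:32168 with the DeepStack-style /v1/vision/detection endpoint; the URL, port, response fields, and image filename are assumptions, so check your own install:

```python
# Hedged sketch: query CP.AI's object detection endpoint directly and keep
# only the labels you care about, dropping the banana/elephant noise.
# Assumptions: default CP.AI host/port, DeepStack-compatible API shape.
CPAI_URL = "http://localhost:32168/v1/vision/detection"
WANTED = {"person", "car", "truck", "bicycle"}

def filter_predictions(predictions, wanted=WANTED, min_conf=0.4):
    """Drop unwanted labels and low-confidence false positives."""
    return [p for p in predictions
            if p["label"] in wanted and p["confidence"] >= min_conf]

def detect(image_path, min_conf=0.4):
    """POST an image to CP.AI and return only the filtered predictions."""
    import requests  # lazy import; pip install requests
    with open(image_path, "rb") as f:
        resp = requests.post(CPAI_URL, files={"image": f},
                             data={"min_confidence": str(min_conf)})
    resp.raise_for_status()
    body = resp.json()  # expected shape: {"success": ..., "predictions": [...]}
    return filter_predictions(body.get("predictions", []), min_conf=min_conf)

if __name__ == "__main__":
    # "driveway.jpg" is a hypothetical snapshot from one of your cameras.
    print(detect("driveway.jpg"))
```

This mirrors the "person,car,truck,bicycle" confirm list from the per-camera config, just applied client-side since the server-side model list can't be trimmed.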
Results
- TUNING REMINDER: You can test any AI setup by watching a clip in BlueIris: Right-click on the video, go to "Testing & Tuning," and then choose "Analyze with AI." You should see real-time detections from there. While the video is playing, you can also watch the CP.AI console logs to see how it's responding.
- Speed: Object Detection inference times are averaging 20-30ms with the medium model.
- Accuracy: Using yolov5 at the medium model size, object detection accuracy is reasonably good for my purposes (detecting people and vehicles). As others have reported, accuracy isn't as good as CPU, but with filtering it's fine. I've not seen a person get missed, although it also returns lots of low-confidence false positives (elephant, airplane, banana, banana, banana...).
- Stable? Yes, at least for a week so far.
- Single TPU working? Yes (but enable multi-TPU support anyway, since it's more stable).
- Dual TPU working? Yes.
- Overall impressions: Despite not being able to use custom models, the Coral setup is faster overall and still accurate enough for my use case. Having it lets me offload CPU cycles to other stuff, which was my goal at the outset. Pretty nice!
- I am NOT using the Coral for LPR or facial recognition. Still using CPU for those.
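If you want to sanity-check those 20-30ms averages outside of BI, here's a rough Python sketch that fires a handful of requests at CP.AI and averages the per-request inference time it reports. The endpoint URL, port, the "inferenceMs" response field, and the snapshot filename are all assumptions based on the DeepStack-compatible API, so verify against your own install:

```python
# Hedged sketch: benchmark CP.AI inference times over repeated requests.
import statistics

def summarize_ms(samples):
    """Mean / min / max of per-request inference times in milliseconds."""
    return {
        "mean_ms": round(statistics.fmean(samples), 1),
        "min_ms": min(samples),
        "max_ms": max(samples),
    }

if __name__ == "__main__":
    import requests  # pip install requests
    url = "http://localhost:32168/v1/vision/detection"  # assumed default port
    with open("snapshot.jpg", "rb") as f:  # hypothetical camera snapshot
        img = f.read()
    times = []
    for _ in range(20):
        r = requests.post(url, files={"image": ("snapshot.jpg", img)})
        r.raise_for_status()
        # "inferenceMs" is the per-request model time CP.AI reports
        times.append(r.json().get("inferenceMs", 0))
    print(summarize_ms(times))
```

Handy for comparing the Coral against the CPU/iGPU numbers from the same box with the same test image.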