Recent content by snuuba

  1. snuuba

    CodeProject.AI Version 2.5

    Finally my low-powered / low-profile / single-slot AMD Radeon RX6400 GPU arrived after a month of waiting, and I was able to move Blue Iris & CPAI from my desktop computer to a dedicated SFF server. While the RX6400 seems to be universally hated by all the gaming YouTubers, it is doing surprisingly...
  2. snuuba

    CodeProject.AI Version 2.5

    This is probably the main factor behind the longer response times. I've seen the typical response time in the CPAI log jump from the normal ~10-40 ms to ~80-120 ms while analysing main stream images. I only use the main stream once in a while, when collecting new images to retrain my custom dog model. It seems to help... (a rough timing sketch for comparing sub-stream and main-stream snapshots is included after this list)
  3. snuuba

    CodeProject.AI Version 2.5

    From the BI phone app I can only see response times for situations where the AI did not find anything, ranging from 30-300 ms, typically 80-100 ms. I'm not sure exactly how BI calculates these numbers, but the model size, the number of images sent to the AI per trigger and the image size (main vs. substream) will...
  4. snuuba

    CodeProject.AI Version 2.5

    Here is my "System Info". I haven't had any issues with .NET using 2.5.6 or 2.6.2. You seem to have more recent version of .NET runtime, maybe using older version could help? Server version: 2.6.2 System: Windows Operating System: Windows (Microsoft Windows 11 version 10.0.22631)...
  5. snuuba

    CodeProject.AI Version 2.5

    Sweet. The only difference I've spotted is that the ONNX model file size seems to be roughly double the equivalent PT model file size. I assume this is just a minor "cosmetic" difference; both models have the same number of parameters and should consume about the same amount of GPU memory? The... (a sketch for checking this on the ONNX file follows this list)
  6. snuuba

    CodeProject.AI Version 2.5

    This partly answers a question I've been thinking about. If .NET is as fast as or faster than CUDA with some GPUs, what is the point of running the CUDA version of the model? I recently upgraded to a 4070 Ti Super on my desktop / BI test machine. While experimenting with it, I noticed that YOLOv5 .NET is...
  7. snuuba

    CodeProject.AI Version 2.5

    Yes, the difference in accuracy is due to using the default YOLOv8 model with Coral vs. using the ipcam custom models on the non-Coral system. Getting ipcam custom model support with Coral would probably fix this.
  8. snuuba

    CodeProject.AI Version 2.5

    When I was running CP.AI on a remote server, I used the "Override server" option on the per-camera "AI" settings page. I think this results in Blue Iris using the default model on the remote CP.AI server, and images are processed (sometimes, mostly with AI Error 500). When running CP.AI locally with Coral...
  9. snuuba

    CodeProject.AI Version 2.5

    Here is another test with a single Coral TPU. I switched to the "tiny" YOLOv8 model to alleviate possible memory problems. Blue Iris 5.8.8.1 / CP.AI 2.5.6 (fresh install as per the first-page instructions). This alert was cancelled with AI error 500, due to "Unable to create interpreter". Few...
  10. snuuba

    CodeProject.AI Version 2.5

    I will stick with my AMD GPU based system running YOLOv5 .NET (large) with custom models. My single Coral TPU (YOLOv8 / medium) test system uses the same cameras and trigger settings; while inference is faster with Coral (about 15 ms vs. 25-30 ms), otherwise Coral is not doing so well. Accuracy...
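
A note on the PT vs. ONNX file-size question raised above: a likely explanation is that YOLOv5 .pt checkpoints are saved in FP16 while ONNX exports default to FP32, so the file roughly doubles even though the parameter count stays the same. Below is a minimal Python sketch (the file name is hypothetical) that counts the parameters and weight bytes inside an ONNX file so the two can be compared; it only assumes the onnx package is installed.

    import onnx
    from onnx import numpy_helper

    # Hypothetical path to an exported YOLOv5 model; adjust to your file.
    model = onnx.load("yolov5l.onnx")

    total_params = 0
    total_bytes = 0
    dtypes = set()
    for init in model.graph.initializer:
        arr = numpy_helper.to_array(init)   # weight tensors stored in the graph
        total_params += arr.size
        total_bytes += arr.nbytes
        dtypes.add(str(arr.dtype))

    print(f"parameters : {total_params:,}")
    print(f"weight data: {total_bytes / 1e6:.1f} MB")
    # float32 here vs. float16 in the .pt checkpoint would explain a ~2x file size.
    print(f"dtypes     : {sorted(dtypes)}")

If the dtypes come back as float32, the size difference is just storage precision; whether GPU memory use ends up the same then depends on the precision the runtime actually executes at.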
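
On the main-stream vs. sub-stream response times mentioned earlier: the numbers Blue Iris shows include its own overhead, so it can help to time the CodeProject.AI endpoint directly. A rough sketch, assuming a default local server on port 32168, an "ipcam-general" custom model and two hypothetical snapshot files; only the requests package is needed.

    import time
    import requests

    SERVER = "http://localhost:32168"   # default CodeProject.AI port; adjust if changed
    MODEL = "ipcam-general"             # custom model name; adjust to yours

    def average_detect_ms(image_path, runs=5):
        """POST a snapshot to the custom-model endpoint, return mean round-trip time in ms."""
        url = f"{SERVER}/v1/vision/custom/{MODEL}"
        samples = []
        for _ in range(runs):
            with open(image_path, "rb") as f:
                start = time.perf_counter()
                resp = requests.post(url, files={"image": f},
                                     data={"min_confidence": 0.4}, timeout=30)
            resp.raise_for_status()
            samples.append((time.perf_counter() - start) * 1000)
        return sum(samples) / len(samples)

    # Hypothetical snapshots grabbed from the sub stream and the main stream.
    print("sub-stream  avg:", round(average_detect_ms("substream_640x360.jpg")), "ms")
    print("main-stream avg:", round(average_detect_ms("mainstream_2560x1440.jpg")), "ms")

Note that this measures the full round trip (upload, decode, inference), so the figures will be higher than the inference times shown in the CPAI log, but the relative difference between the two image sizes should still be visible.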