Which Nvidia GPUs are you using?

FYI, you can get a true (long-term, since the last restart of the CPAI service) running average inference time by clicking the "Info" button on the right side of the green status bar on the CPAI webpage, then scrolling down; lots of useful info there.
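That kind of since-restart running average doesn't need the full history of timings; it can be kept incrementally. A minimal sketch of the standard incremental-mean technique (not CPAI's actual implementation, just an illustration of the idea):

```python
class RunningAverage:
    """Tracks a mean inference time since start, without storing every sample."""

    def __init__(self):
        self.count = 0
        self.mean_ms = 0.0

    def add(self, inference_ms: float) -> None:
        self.count += 1
        # Incremental mean: new_mean = old_mean + (x - old_mean) / n
        self.mean_ms += (inference_ms - self.mean_ms) / self.count

avg = RunningAverage()
for t in (42.0, 38.0, 55.0, 41.0):  # hypothetical per-request times in ms
    avg.add(t)
print(f"{avg.mean_ms:.1f} ms over {avg.count} inferences")  # 44.0 ms over 4 inferences
```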
 
Thanks. Forgot about that being in there.
 
Yep, I'm aware the .NET module's description says it doesn't use CUDA, but if you follow MikeLud1, he has always said the .NET module runs faster on his RTX 3090 (I believe that's what he has). I never got it to be faster: with the Quadro P1000 card I previously had in here, .NET was always slower and YOLOv5 6.2 or YOLOv8 was always better. But with this 3050, .NET using the GPU is faster.

Well, thank you very much for giving me something to consume my lazy Sunday. Now I'm going to have to play around with .NET (DirectML) and YOLOv8 to compare them against my current YOLOv5 6.2 and see which my RTX 4060 handles best (FYI, I use the standard LARGE model).
:)
 
FYI .NET(DirectML) will be the winner.
 
Trying to benchmark my system running Blue Iris and CodeProject.AI (with my refurbished Nvidia Quadro M2000 using YOLOv5 3.1 for older GPUs). My Kill A Watt says the computer is continuously drawing about 160 watts. I guess that's OK since I'm using an older i7 and an older video card, but I'm not sure I want to keep it running at this level forever. It's not a crazy amount of electricity, but I'm not sure it's worth the results I'm seeing: detection is definitely happening quicker during daylight, but it doesn't seem to be as accurate at dusk or in the dark under the street lights.

I didn't benchmark BI with AI on my previous i5 system using YOLOv5 .NET, so I don't know how much more power this i7 setup is drawing ... maybe not enough to worry about.

It's been an interesting experiment so far either way. ;)
 
I've got at least another week of collecting sample data, but so far on my system YOLOv5 6.2 is marginally beating .NET (DirectML). I still need much more sample data for ipcam-general across all three modules and YOLOv8.

FYI, my system is an older Intel i5-6600K with an RTX 4060; I wonder if the older CPU is limiting .NET.

Also note that all sample data were collected using the LARGE standard model.
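For a module-versus-module comparison like this, a simple way to reduce the raw logs is to average the recorded times per module and sort. A quick sketch with made-up numbers (the times below are hypothetical, not the poster's actual data):

```python
from statistics import mean

# Hypothetical per-request inference times in ms, keyed by detection module
samples = {
    "YOLOv5 6.2":      [95, 102, 98, 100],
    ".NET (DirectML)": [101, 99, 104, 103],
    "YOLOv8":          [110, 108, 112, 109],
}

# Rank modules fastest-first by their average inference time
for module, times in sorted(samples.items(), key=lambda kv: mean(kv[1])):
    print(f"{module}: {mean(times):.2f} ms avg over {len(times)} samples")
```

With more samples per module, the averages stabilize and small margins like the one described above become easier to trust.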


[Attached image: 1737533726599.png]
 
160 W seems high. I was at about 100 W with my i5-6600K and a GTX 970 (large standard YOLOv5 6.2), and I'm now at 60 W on the same system/CPU but with an RTX 4060. That's with 7 cameras, all using the sub stream for motion/AI but also all recording the main stream direct to disk 24/7.
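For perspective on the wattage figures in this thread, a constant load converts to energy and cost like this (the $0.15/kWh rate is an assumption for illustration; substitute your own):

```python
RATE_USD_PER_KWH = 0.15  # assumed electricity rate, adjust for your utility

def monthly_cost(watts: float, hours: float = 24 * 30) -> float:
    """Cost of a constant load over the given hours (default: one 30-day month)."""
    kwh = watts * hours / 1000.0
    return kwh * RATE_USD_PER_KWH

for w in (160, 100, 60):
    kwh = w * 24 * 30 / 1000
    print(f"{w} W -> {kwh:.1f} kWh/month, ${monthly_cost(w):.2f}/month")
```

So the drop from 160 W to 60 W works out to roughly 72 kWh a month saved, which is why the GPU swap can pay for part of itself over time.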