Which nVidia GPUs are you using?

I am running the 3050 6GB models. They only need power from the PCIe slot, with no external connector. Max power is 70 W; normal power usage is about 20 W.

I was running EVGA GeForce GTX 1650 XC cards, and they worked just fine with CPAI and BI with 16 HD cameras.

I would let one of the EVGA GeForce GTX 1650 XC cards go for a decent price.
 
Okay.

What's more important: CUDA cores or compute capability?
My favorite is the GTX 980 Ti, with many more CUDA cores (2816); the GTX 1650 has only 896.
 
It's not quite that simple, because DL inference actually uses Tensor Cores with FP16 (or INT8). I thought this was a great article on the subject:

Unfortunately, it doesn't directly answer your question of which GPU is a better choice, but basically: anything with Tensor Cores will be better than anything without (and neither of those cards has Tensor Cores). This discussion:

It sounds like the cheapest cards with Tensor Cores are the RTX 2060 or RTX 3050, which look like they are selling for $100-$200 on eBay. Note that GPUs seem to be particularly expensive right now due to geopolitics and the lull between the 4xxx series being discontinued and the 5xxx series not yet being released, from what I can tell.
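To make the comparison concrete, here is a quick Python sketch of the cards discussed in this thread. The core counts and compute capabilities are from NVIDIA's published specs, but treat them as illustrative and double-check the exact variant before buying. The GTX 1650 also shows why compute capability alone isn't a reliable indicator: it is 7.5, the same generation as the RTX 2060, yet its TU117 chip has no Tensor Cores.

```python
# Quick-reference table for the cards mentioned in this thread.
# Values are from NVIDIA's published specs; variants can differ.
CARDS = {
    "GTX 980 Ti":   {"cuda_cores": 2816, "compute_capability": (5, 2), "tensor_cores": False},
    "GTX 1650":     {"cuda_cores": 896,  "compute_capability": (7, 5), "tensor_cores": False},
    "RTX 2060":     {"cuda_cores": 1920, "compute_capability": (7, 5), "tensor_cores": True},
    "RTX 3050 6GB": {"cuda_cores": 2304, "compute_capability": (8, 6), "tensor_cores": True},
}

def inference_candidates(cards):
    """Cards with Tensor Cores -- the main thing that matters for FP16/INT8 inference."""
    return [name for name, spec in cards.items() if spec["tensor_cores"]]

print(inference_candidates(CARDS))  # ['RTX 2060', 'RTX 3050 6GB']
```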
 
The optimal decision depends on what is most important for your use case.

My primary use case is CPAI connected to a BI machine running 16 HD cameras.
The optimum choice is the cheapest card that can handle that workload, uses the least electricity, and doesn't add a bunch of heat to the room.

The 1650 meets those requirements and only runs at about 25% utilization or less on average.

The 3050 6GB model also meets those requirements. (Note that the 3050 with more memory is a different card and uses more power.)
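For anyone weighing the electricity angle, a rough back-of-envelope calculation helps; the $/kWh rate here is an assumption, so plug in your own:

```python
# Rough yearly electricity cost of a GPU running 24/7.
# The 0.15 $/kWh rate is an illustrative assumption.
def yearly_cost_usd(avg_watts: float, rate_per_kwh: float = 0.15) -> float:
    kwh_per_year = avg_watts * 24 * 365 / 1000
    return kwh_per_year * rate_per_kwh

# ~20 W typical draw for the 3050 6GB vs. its 70 W cap:
print(round(yearly_cost_usd(20), 2))  # ~26.28 USD/year
print(round(yearly_cost_usd(70), 2))  # ~91.98 USD/year
```

Even at its 70 W maximum, the card costs well under $100/year to run around the clock, and at typical draw it is closer to $25.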

A different use case is experimenting with AI, models, or whatever. I have another computer with a 3070 Ti, and I use that one to generate new models and play around with different machine learning algorithms.

I also have a couple of Jetson Nanos that I play around with.

If your use case requires more compute, then choose a higher-end card and be prepared to pay for it, pay for the electricity it consumes, and deal with the heat it generates.

The best card for your use case may not always be the fastest one. Consider all of the information, make a decision, and go for it.
 
Thx,

My primary use is CPAI with BI with 4 HD cams and 2 SD cams. At the moment I have a Coral Dual Edge TPU in my M.2 slot, but due to its low SRAM I can only use the tiny/small models, and those are too inaccurate for my requirements. The larger models are too slow.

My BI NVR is in idle mode 95% of the day and draws 10-20 W. I had 8,901 requests on CPAI in 4 days :D that's nothing, I think.


Code:
Module 'Object Detection (Coral)' 2.4.0 (ID: ObjectDetectionCoral)
CPAI_CORAL_MODEL_NAME = MobileNet SSD
MODEL_SIZE            = Small
inferenceDevice:      TPU
successfulInferences: 4796
failedInferences:     4104
numInferences:        8900
averageInferenceMs:   15.782735613010843
Started:      30 Dec. 2024 10:38:22  Central European Time
LastSeen:     02 Jan. 2025 07:57:23  Central European Time
Status:       Started
Requests:     8901 (includes status calls)
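One thing those stats show, separate from the misclassifications: nearly half of the inferences are reported as failed. A quick sanity check on the numbers above:

```python
# Derived directly from the CPAI module stats quoted above.
successful = 4796
failed = 4104
total = successful + failed  # 8900, matching numInferences

failure_rate = failed / total
print(f"{failure_rate:.1%}")  # 46.1%
```

A failure rate that high may be worth investigating on its own, independent of which detection model or GPU you settle on.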

My biggest problem is the inaccuracy.


My trash bin is a snowboard and sometimes a suitcase:
11:57:38:Response rec'd from Object Detection (Coral) command 'detect' (...86cfe3) ['Found snowboard'] took 340ms
11:57:47:Response rec'd from Object Detection (Coral) command 'detect' (...5d61cd) ['Found suitcase'] took 319ms

My mailbox is a fire hydrant:
14:41:08:Response rec'd from Object Detection (Coral) command 'detect' (...e747cc) ['Found fire hydrant'] took 98ms

I'm sometimes a person, sometimes an airplane, and sometimes a dog:
11:55:25:Response rec'd from Object Detection (Coral) command 'detect' (...f6d825) ['Found airplane'] took 37ms
11:55:40:Response rec'd from Object Detection (Coral) command 'detect' (...710574) ['Found person'] took 173ms
16:56:59:Response rec'd from Object Detection (Coral) command 'detect' (...9d5d0d) ['Found dog'] took 30ms


When I change the AI server to my local machine running Object Detection (YOLOv5 .NET) on an Intel Core Ultra CPU with Intel Iris Plus Graphics (onboard GPU) with large models, it is more accurate, and ALPR works too.
 
Looking at just Tensor Cores, it looks like the 3050 has 73 AI TOPS and the 4060 has about 242 AI TOPS, so that's a decent bump. CUDA-core-wise there is also a decent bump: 3072 CUDA cores for the 4060 vs. 2304 for the 3050 (6GB) or 2560 for the 3050 (8GB). CUDA compute capability is 8.9 vs. 8.6 (4060 vs. 3050). I'm leaning toward throwing a 4060 in this unless I'm thinking wrong here.
 
I think I'm answering my own question: the 4060 is probably out, given that I don't think there's a version that can be powered by the PCIe slot alone. I believe they all need a minimum of 115 W, so they would need a 6- or 8-pin connector, which basically throws that card out of contention for low power usage on my Dell SFF setup.
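For what it's worth, the figures quoted above work out to roughly a 3.3x TOPS bump for only a 1.33x CUDA core bump, which is the Tensor Core generation gap doing most of the work. A quick check:

```python
# Back-of-envelope comparison using the figures quoted above.
specs = {
    "RTX 3050 6GB": {"ai_tops": 73,  "cuda_cores": 2304},
    "RTX 4060":     {"ai_tops": 242, "cuda_cores": 3072},
}

tops_ratio = specs["RTX 4060"]["ai_tops"] / specs["RTX 3050 6GB"]["ai_tops"]
core_ratio = specs["RTX 4060"]["cuda_cores"] / specs["RTX 3050 6GB"]["cuda_cores"]
print(f"TOPS: {tops_ratio:.1f}x, CUDA cores: {core_ratio:.2f}x")  # TOPS: 3.3x, CUDA cores: 1.33x
```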
 

I was confusing you with the OP for a minute.

You state your biggest problem is accuracy. I am not confident that I can provide a good solution for that, so I won't address that part.

You have 6 cameras; that is not a heavy workload, and you could most likely handle it without a dedicated GPU. If you want to buy one anyway, it mostly depends on your budget.

The 3050 6GB is a great choice for low-power AI. I see them on Amazon or Newegg for about $170. If you can afford that price, I would buy one of those.

My experience with 16 HD cameras on BI:
The 3050 6GB using YOLOv5 with CUDA running the huge model returns on average 20 to 30 ms, sits around 5% utilization according to Task Manager, and draws about 28 W according to GPU-Z.

The GTX 1650 using YOLOv5 with CUDA running the huge model returns on average 180 to 200 ms, sits around 12% utilization according to Task Manager, and draws about 30 W according to GPU-Z.
(I am not 100% sure I ran the huge model on the 1650; it might have been large. I did not write it down, so I want to caveat that.)

If you want more power, by all means buy a faster GPU.
My goal is to use as little power as possible for the camera computer.
Using less electricity also helps extend the UPS runtime when the power goes out.
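If you want to put a number on the UPS point, the usual rough estimate is runtime ≈ usable battery energy / load. The battery capacity and efficiency below are assumed example values, not measurements of any particular UPS:

```python
# Rough UPS runtime estimate: usable battery Wh divided by load W.
# Battery capacity, load figures, and inverter efficiency are assumptions;
# battery aging shortens real-world runtime further.
def runtime_hours(battery_wh: float, load_watts: float, efficiency: float = 0.85) -> float:
    return battery_wh * efficiency / load_watts

# Hypothetical 300 Wh UPS: ~30 W low-power build vs. ~80 W with a hungrier GPU
print(round(runtime_hours(300, 30), 1))  # 8.5 hours
print(round(runtime_hours(300, 80), 1))  # 3.2 hours
```

Same UPS, but the low-power build rides out an outage several times longer.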