Hell Yeah! Direct Deepstack Integration - 5.4.0 - March 31, 2021

When I tested this way back using AITOOLS, running deepstack in docker resulted in much less CPU usage than running the Windows version of deepstack.

I never tested to see whether there were any processing-time differences, though, so that might be worth testing.

I did exactly this last night. My Vmmem process was using a bit of CPU, so I figured the Windows DeepStack would reduce this by letting me shut down Docker Desktop, but CPU spiked to 100% on every alert. Much more than DeepStack in Docker.
Processing time for each image was just as fast.

And in Portainer on Docker you can tweak the settings (CPU cores/memory) and easily have different modes for each model.
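For anyone who'd rather do the same from the command line than through Portainer, here's a rough sketch of capping DeepStack's resources with plain `docker run`. The image name, port, and `VISION-DETECTION` variable follow the standard DeepStack Docker instructions; the specific limits and volume name are just illustrative:

```shell
# Run DeepStack object detection, capped at 2 CPU cores and 2 GB RAM.
# deepquestai/deepstack is the official image; 5000 is DeepStack's internal port.
docker run -d \
  --name deepstack \
  --cpus="2" \
  --memory="2g" \
  -e VISION-DETECTION=True \
  -p 80:5000 \
  -v localstorage:/datastore \
  deepquestai/deepstack
```

Adjust `--cpus` and `--memory` per container if you run separate containers for different models.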

I think for now I'll be sticking with docker deepstack.

I was quite disappointed unless I've gone wrong somewhere setting it up.
 
Oooops, wrong thread.
 
Hi all, just started playing around with the deep stack integration. This stuff is mind blowing!

Quick question: I am using the web server to cycle through my cameras, and it automatically pulls up the one that is triggered. Is it possible to do this only when DeepStack has identified an object of interest?

Hope I am making sense!
 
Some questions about Deepstack / Graphics Cards:

1. If a Quadro graphics card states power consumption as 40 W, is that 40 W max or always 40 W, i.e. if the load is less than 100% will it still use 40 W?

2. If you use a Quadro card for analysis instead of the CPU, will the power used by the Quadro be offset by a lower power draw from the CPU due to lower CPU usage?

3. I read elsewhere Fenderman stated that most use DS for notifications rather than triggers. Is this because it's not yet reliable enough?

BTW, if you're wondering about the power questions the UK Government is about to double energy prices with green taxes plus the shortages from the EU. So a yearly bill of £1,500 could be about to become £3,000!!! :oops:
 
I believe the power rating is at maximum load. I can tell you that the GTX 970 I run can use as much as 180 watts under full load, but when idle or doing "lightweight" tasks it sits down around 10 watts.

There is still some power used by the CPU for processing even when using a GPU, but it is short duration, a spike of a second or two, and not continuous. The GPU cuts that time down significantly versus the CPU version of DS.
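As a back-of-envelope illustration of why short GPU spikes barely touch the bill compared with continuous CPU load, here's a rough calculation. Every figure in it (wattages, event count, tariff) is an assumption for illustration, not a measurement:

```python
# Rough annual energy cost: continuous CPU inference vs short GPU spikes.
# All figures are illustrative assumptions, not measurements.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.30  # GBP per kWh, assumed

# CPU DeepStack: assume inference adds ~40 W of continuous draw.
cpu_extra_watts = 40
cpu_kwh = cpu_extra_watts * HOURS_PER_YEAR / 1000

# GPU DeepStack: assume 170 W spikes lasting 2 s, 500 events/day,
# plus ~10 W idle overhead the rest of the time.
spike_watts, idle_watts = 170, 10
spike_hours = 500 * 2 / 3600 * 365  # total hours spent spiking per year
gpu_kwh = (spike_watts * spike_hours
           + idle_watts * (HOURS_PER_YEAR - spike_hours)) / 1000

print(f"CPU: {cpu_kwh:.0f} kWh/yr, about £{cpu_kwh * PRICE_PER_KWH:.0f}")
print(f"GPU: {gpu_kwh:.0f} kWh/yr, about £{gpu_kwh * PRICE_PER_KWH:.0f}")
```

Under these assumptions the duty-cycled GPU comes out well under a third of the continuous CPU draw, even before counting the CPU time the GPU saves.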

I use both DS and BI for alerts. I only have notifications occur when DS detects specific objects and it seems to work fine for me. YMMV
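For anyone wanting to gate notifications the same way outside of BI, the filtering side is simple once you have DeepStack's JSON response. The response shape (a "predictions" list of entries with "label", "confidence", and bounding-box fields) is per the DeepStack `/v1/vision/detection` docs; the helper name and thresholds below are my own invention:

```python
def objects_of_interest(predictions, wanted=("person", "car"), min_conf=0.6):
    """Return the subset of DeepStack predictions worth alerting on.

    `predictions` is the "predictions" list from DeepStack's
    /v1/vision/detection JSON response; each entry carries a "label",
    a "confidence" between 0 and 1, and bounding-box coordinates.
    """
    return [p for p in predictions
            if p["label"] in wanted and p["confidence"] >= min_conf]

# Example fragment shaped like DeepStack's output:
sample = [
    {"label": "person", "confidence": 0.91,
     "x_min": 10, "y_min": 20, "x_max": 80, "y_max": 200},
    {"label": "dog", "confidence": 0.88,
     "x_min": 5, "y_min": 5, "x_max": 50, "y_max": 60},
    {"label": "car", "confidence": 0.40,
     "x_min": 0, "y_min": 0, "x_max": 30, "y_max": 30},
]
hits = objects_of_interest(sample)
# Only the high-confidence person survives the filter.
```

Fire the notification only when the filtered list is non-empty.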

Your electric bill, even after the increases is still lower than mine here in the US.
 
You might want to wait for @IReallyLikePizza2 to receive his Nvidia Tesla P4 and report back whether it works well with DS (or not). As I understand it, the Tesla cards are aimed at applications, like DeepStack and OpenALPR, which are heavy on AI processing. I read a recent article showing AI performance on CPU vs Tesla P4 vs Tesla T4, and while the T4 blows the doors off the P4, the P4's performance over the CPU is VERY significant.


[Image: NVIDIA Tesla T4 vs Tesla P4 GPU inferencing benchmark]
 
Yeah, the T4 is a good card, but they are around $2K, and that's a no-go for me!

The P4 arrives tomorrow morning
 

AND ... I strongly suspect the T4 would be massive overkill for our purposes. I am betting (hoping) that the P4 is going to be an excellent balance between price, performance and efficiency. The P4 is in the same overall performance category as the 1060, but it has more cores, more memory, and uses less power, for about the same purchase price. My 1060 delivers sub-50 ms DS detection times. While it might be nice to get a 3x performance improvement with the T4, I am not sure I'd really notice the difference between 50 ms and 15 ms in real-world use. Maybe I would. If I had an unlimited budget I suppose I might check it out. :)

BTW - did you buy your P4 off E-bay?
 


Same for me. I'm looking at the P620. Recently a store has had these in for not much more than others are selling the P4 for over here, and it has 2x the CUDA cores. On the power side it's 25 W vs 40 W, so not a lot of difference, and if neither card is using much of its overall capacity, I have to wonder what the relative power draws will be (thanks Sebastian :) ).

So if I go the DS route, it's going to be P4 or P620 depending on funds and availability.
 
Anything which draws power from the mainboard alone is going to be pretty efficient, read: economical. On the CUDA side it depends on how hard you are pushing AI/DeepStack. I can get my GTX 1660 to spike up to 40-50% during events; below is the graph at 'idle' without any major events:

[Image: GPU utilisation graph at idle]

The example graphs below are during an event on 3-4 cameras, highlighted in red:


[Images: GPU utilisation graphs during events on 3-4 cameras, spikes highlighted in red]

Most of the low continuous load will likely be the Rekor agent for ALPR, which is using GPU acceleration at the moment.
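If you want numbers rather than graphs, `nvidia-smi` can report utilisation and power draw in CSV, and a few lines of Python will parse it. The query flags are standard `nvidia-smi` options; the function names here are just a sketch:

```python
import subprocess

# One CSV line per GPU, e.g. "23, 41.52" with --format=csv,noheader,nounits
QUERY = ["nvidia-smi",
         "--query-gpu=utilization.gpu,power.draw",
         "--format=csv,noheader,nounits"]

def parse_gpu_stats(csv_line):
    """Parse one nvidia-smi CSV line into (utilisation %, power W)."""
    util, power = (field.strip() for field in csv_line.split(","))
    return int(util), float(power)

def sample_gpu():
    """Query nvidia-smi once; returns a list of (util, watts) per GPU."""
    out = subprocess.run(QUERY, capture_output=True,
                         text=True, check=True).stdout
    return [parse_gpu_stats(line) for line in out.strip().splitlines()]
```

Call `sample_gpu()` on a timer (or `nvidia-smi -l 5` directly) to log how hard DS is actually hitting the card during events versus idle.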
 
Does anyone know if the Forsa 1050 works with CUDA/DeepStack GPU? The architecture is Nvidia. I'm somewhat confused as I've seen so many makes of the 1050. Will any of them work?

I don't see why not; 1050 is the chipset, and the brand is irrelevant. Maybe double-check compatibility with the CUDA software/drivers to be sure you are good for DeepStack, but I can't see it being a problem.
 
As with anything, you get what you pay for. I've always used NVidia for cuda based cards. They are very robust in terms of longevity when under constant loading. The less expensive cards are less expensive for a reason, just like less expensive cameras. Proceed with caution and as looney2ns says, buy once, cry once.
 
No doubt, buy cheap, buy twice as they say. I live my life by that rule, just ask my wallet :lmao: but it pays in the long run. If we are talking graphics card brands I would stick with Gigabyte, EVGA or of course Nvidia Quadro. I have supplied and used plenty of Gigabyte GPUs and they are solid, warranty likewise. For Radeon cards I would go Sapphire, but they aren't relevant for CUDA :wtf::lol:
 
Does this MSI need an extra power supply, or can it run on the board's power through the pins?

Spec suggests it needs external power:

[Image: MSI card power specification]

Probably sailing a bit close to the wind for mainboard power alone.
 

From what I can see it looks as if it can run on the slot's 75 W alone. I could always upgrade my PSU at a later date if I buy it.
 
If it says it needs external power then you can be sure it needs external power :thumb: