Help with deepstack gpu for windows

jonrub11
Dec 13, 2018
Hi,

I have been using DeepStack with CPU and Blue Iris for a few days and it works great. I am curious whether I could test my GPU, a GTX 680.
I installed CUDA, cuDNN, and the DeepStack GPU version like it says here.

I have tried to start DeepStack both via Blue Iris and via the command line, but it won't recognize any objects. Can anyone help me troubleshoot this?
Is the GTX 680 even supported?

Thanks in advance!

/Jonas
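
For anyone troubleshooting the same thing: you can take Blue Iris out of the picture and test DeepStack's detection endpoint directly. A minimal sketch in Python, assuming DeepStack is listening on localhost port 80 (substitute whatever port you started it with) and you have a test.jpg in the same folder:

```python
# Standalone test of DeepStack's object detection API, bypassing Blue Iris.
import requests

with open("test.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:80/v1/vision/detection",
        files={"image": f},
    )

data = response.json()
print("success:", data.get("success"))
for obj in data.get("predictions", []):
    print(obj["label"], obj["confidence"])
```

If this errors out or returns success with an empty predictions list on an image that clearly has people or cars in it, the problem is on the DeepStack/CUDA side rather than in Blue Iris.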
 
Same issue, but using a GT 1030. According to the CUDA docs the 1030 is supported, but most think it's a waste of time. I am curious since the GPU just sits there idle otherwise. I thought I saw the GTX 6xx series in the CUDA docs.
 
Tried a GTX 760 with similar results. Installed it, and the web page comes up saying it's running, but when trying to detect things it just doesn't seem to work. If I did something wrong in the install, it wasn't obvious.
 
Well, I do see DeepStack entries in my Blue Iris logs now after adjusting my frame rates and hot zones. But I have some more tweaking to do with triggers. I may have to use the option to send a notice regardless, but I know my cams were giving me fits in the wind; I do use the IVS features in the cameras. I'm not 100% sure which one the wind was triggering, and I didn't think to save the log before I cleared it. I have since turned on logging to file.
 
I would verify that the GPU shows up in Task Manager. I must have installed the NVIDIA driver wrong the first time, because the GPU did not show up in Task Manager. The DeepStack web page still came up and looked like it was running, but with the same result you're seeing.
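
A quick way to sanity-check the driver from a script, assuming nvidia-smi is on your PATH (the NVIDIA driver installs it): if this can't see the card, DeepStack won't either.

```python
# Ask the driver directly what it sees; works on any Windows version.
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,driver_version,utilization.gpu",
     "--format=csv"],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```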
 
I'm running Windows Server 2016, so the GPU does not appear in Task Manager's performance info; that's Win10 and Server 2019 only. I am using GPU-Z to monitor it. I think I need to find a better GPU though: when DeepStack checks things it mostly averages 250ms, sometimes up to 500ms. And I cannot watch the DeepStack GPU UI to see when items are analyzed like you guys show with the CPU version.
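
If you want to check those times without Blue Iris in the loop, you can measure the round trip yourself. A rough sketch, assuming DeepStack on localhost:80 and a local test.jpg; the first call includes model warm-up, so look at the later ones:

```python
# Time a few detection round trips against DeepStack directly.
import time
import requests

with open("test.jpg", "rb") as f:
    image_bytes = f.read()

# The first request warms the model up; later ones show steady-state speed.
for i in range(3):
    start = time.perf_counter()
    requests.post("http://localhost:80/v1/vision/detection",
                  files={"image": image_bytes})
    print(f"request {i + 1}: {(time.perf_counter() - start) * 1000:.0f} ms")
```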
 
Going to try lowering my confidence levels, currently at 50%. Also wondering if I need to lower the movement threshold in BI, currently using 1 sec. I do continuous recording, so preview does not help out.
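
Before changing anything in BI, you can preview what a lower threshold would return: DeepStack's detection endpoint accepts a min_confidence value with the request. A sketch, again assuming localhost:80 and a local test.jpg:

```python
# See what a 40% threshold returns before lowering it in Blue Iris.
import requests

with open("test.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:80/v1/vision/detection",
        files={"image": f},
        data={"min_confidence": 0.40},  # 40% instead of the current 50%
    )

for obj in response.json().get("predictions", []):
    print(f'{obj["label"]}: {obj["confidence"]:.2f}')
```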
 
I was going to mention that the GPU version requires the CUDA extensions and something else as well; I just can't remember the name of it and I'm not at the BI machine. Thinking about it, mine is an NVidia card and I had to install an NVidia utility as well. That added too much overhead for my tastes, so I'm back on the CPU version.
 
@sdfsdfsdf33333 Any idea which cuDNN library version is correct? I assume it would be the one for Toolkit 10.1.
 
You don't HAVE to install the CUDA 10.1 updates. I didn't install them and I had no problem getting my GPU processing images through DeepStack. I doubt they will give any benefit or speed increase, but if it's easy for you and you want to install them, it won't hurt anything.

I'm wondering if it has to be Cuda 10.1 or if the latest version will work too. I would assume everything in 10.1 is in the latest version but I don't want to break what's already working to find out.
 
Maybe I'll try the latest version just for laughs, or cries as the case may be.
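
If you have PyTorch installed in your own Python, this shows which CUDA and cuDNN versions it sees and whether the GPU is usable. It's only a proxy, since DeepStack bundles its own runtime, but it catches a broken CUDA/cuDNN install quickly:

```python
# Report the CUDA/cuDNN versions PyTorch was built against and
# whether it can actually reach the GPU.
import torch

print("CUDA available:", torch.cuda.is_available())
print("CUDA build version:", torch.version.cuda)         # e.g. "10.1"
print("cuDNN version:", torch.backends.cudnn.version())  # e.g. 7605
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```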
 
If any of you reading this are the kind that likes to play with settings to see if you can get more speed out of your GPU... NvidiaInspector with NvidiaProfileInspector is what I'd use.

NvidiaInspector d/l
NvidiaProfileInspector github

These ^^^ two programs are what I use to squeeze every last hash per watt out of my GPUs for mining Ethereum (unlike anything any of the other overclocking programs can do). There are settings in Profile Inspector you won't find in the Nvidia Control Panel. (The goal of mining is to get the highest hash rate possible on the least amount of power.) But realistically I think it's pointless unless you're able to place 100% load on the GPU. One instance of DeepStack only places <5% load on my P400.

I installed them on my BI machine last night just to see if DeepStack puts the GPU into P-State 0, which it does so I didn't experiment with it any further.
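
For anyone who wants to watch the P-state without installing anything extra: nvidia-smi can report it directly. A small polling sketch (P0 is the highest-performance state):

```python
# Poll P-state, utilization, and power draw once a second while
# DeepStack is processing; watch for the card entering P0.
import subprocess
import time

for _ in range(10):
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=pstate,utilization.gpu,power.draw",
         "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    print(out.stdout.strip())
    time.sleep(1)
```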
 
I uninstalled the CPU version and installed the GPU version on my BI machine. It is detecting in about 1/10th the time the CPU version was taking. The CPU was typically running 500-1000ms and frequently went to 1500ms and higher depending on the load from Python with multiple triggers. In about a half hour of watching things I've seen times as low as 54ms, but they are typically around 100ms, with under 200ms for a maximum so far. That's a very significant difference.

On top of that the detection rate is probably over 95% now. Somehow, I can't help but think the two, speed and success, are related to each other.

A question for those with more knowledge and experience with DS and NVidia: if there is more than one NVidia card installed, will DS use both GPUs when the load gets higher?
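
I don't know whether DeepStack spreads work across multiple GPUs on its own, but you can pin an instance to a specific card with the standard CUDA_VISIBLE_DEVICES environment variable and run one instance per GPU on different ports. A sketch, assuming the command-line flags from DeepStack's Windows docs; the ports here are placeholders:

```python
# Launch one DeepStack instance per GPU, each pinned to its own card.
# CUDA_VISIBLE_DEVICES is a standard CUDA variable; the ports are
# arbitrary and would need matching entries in Blue Iris.
import os
import subprocess

for gpu_id, port in [(0, 5000), (1, 5001)]:
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)  # only this GPU is visible
    subprocess.Popen(
        ["deepstack", "--VISION-DETECTION", "True", "--PORT", str(port)],
        env=env,
    )
```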
 
I have a spare GTX 970 and a 1060 Ti, so I may have to give those a try, as my older i7 takes a bit of a beating running BI and DS if there's a lot of activity.
 
What would be better to use, my EVGA GTX 1080 Ti or my CPU, an i9 9720x? I know both are kind of dated, but with this economy who wants to throw money around just to have the fastest.
 
I'd use the 1080 Ti. It will "unload" work from the CPU, make the machine more responsive, and may give better results with DeepStack. If it's only processing DeepStack detection, the power consumption should be around, or even under, ten watts.