DeepStack Case Study: Performance from CPU to GPU version

I just wondered because you can get an Nvidia GT 1030 graphics card with 384 CUDA cores and 2GB of memory for around £90, which gives it 50% more CUDA cores than the P400 for around £40 less. Was wondering which would be better for this application.
 
I just had a laugh at the recommendations for the 1030 on a vendor's website (supposedly the manufacturer's specs): 30W MAXIMUM TDP under full load. Recommended minimum PSU: 650W!!!!
 
OK, I knew if anyone could screw this up, it would be me! Installed the new P400 card. Took a while to get that all worked out, but I think the card itself is OK now. BUT after installing the GPU version of DeepStack, I'm getting only DeepStack: 100 errors. Reverted back (i.e. installed the Windows CPU version again) and still get the same. When going to http://127.0.0.1:8082 I get the DeepStack page, Blue Iris "says" it is running, and clicking on "Test in Browser" goes to the same page. But it seems Blue Iris and DeepStack aren't communicating behind the scenes. FYI, I am letting Blue Iris start/stop DS.
Any thoughts on where I went wrong here? Looking for advice, not statements of the obvious that I'm not that smart; I think I have established that fact. :-(
 
The DeepStack: 100 error means that it is timing out. Does the port number match what it says in BI?

Did you uninstall DeepStack or just delete the folder?

Which computer (i-series and CPU model) did this go in?
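Since the "100" error is a timeout, the quickest way to rule out a port mismatch is to poke the DeepStack endpoint directly. A minimal sketch, assuming DeepStack is on the default 127.0.0.1:8082 used above and returns its standard detection JSON (`success` flag plus a `predictions` list); the function names here are illustrative, not part of DeepStack:

```python
import json
import urllib.request

DEEPSTACK_URL = "http://127.0.0.1:8082"  # must match the port Blue Iris is configured to use


def server_is_listening(url=DEEPSTACK_URL, timeout=5):
    """Return True if something answers on the DeepStack host/port.

    A 'DeepStack: 100' error in BI is a timeout, so the first thing to
    rule out is a wrong port or a server that never actually started.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except OSError:
        return False


def parse_detection(body):
    """Parse a DeepStack detection JSON reply into (label, confidence) pairs."""
    data = json.loads(body)
    if not data.get("success"):
        return []
    return [(p["label"], p["confidence"]) for p in data.get("predictions", [])]
```

If `server_is_listening()` is True but BI still reports 100 errors, the server is up and the problem is between BI and DS (credentials, URL path, or a hung analysis process) rather than the port.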
 
Did not delete or uninstall anything (per advice and statements in this thread). Just installed the GPU version and let it override; tried it and got "100" errors with the GPU version; stopped everything; re-installed the CPU version; rebooted; back to where we are. FYI, the IP and port numbers in BI match the DS page that comes up.

Not 100% sure I am answering your last question, but this is on an Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz.

It was working great until I put in the new card and then tried to change from the CPU version to the GPU version (installing over the top), then back again.
 
Yeah, I didn't uninstall either. I simply renamed the folder DeepStackCPU and then downloaded and ran the GPU version, so now I have both on the computer and can change just by directory LOL.

But I did have an issue last week and a simple uninstall and reinstall got it working again.

A third-generation CPU could have some lag going to the GPU?

Try the ole reboot of the computer and see if that gets it going.
 

I uninstalled DeepStack, reinstalled the GPU version, started it via BI, and it looks like it's all working. Bad timing, as I can't get back to it until late in the weekend.
But my advice is, if changing from CPU to GPU, just go ahead and uninstall. Or at least have that as step 1 if you have issues.


 
My comment regarding uninstall was that an uninstall is not "clean" and leaves some remnants of the original CPU install. However, the GPU install simply overwrites those leftovers, but an uninstall still has to happen first.
 
Some companies (usually AV companies) write a clean-install utility which removes all remnants of any previous install, including in the registry.
 
DeepStack is not a heavy workload. You would need to run multiple instances of DS computing in parallel to place a heavy load on a Quadro P400. If you were using something like an old GTX 1070, it'd be such overkill you wouldn't even realize it was working.

If you also want the card to decode video on more than 4-6 cams, memory will be the limiting factor. I would be looking at the Quadro T600 with 4GB GDDR6 @ 160 GB/s then.

Only just noticed this bit. So do IPCT members just use DeepStack on a single camera then, and just motion detection on the others?

Also, from what I can see, if you want 4GB of memory the only option now is a Quadro P1000, which becomes expensive.
 
Do IPCT members just use Deepstack on a single point camera then and use just motion detection on the others?

Not sure I understand your question, but I'll give it a shot... Personally I run DeepStack with AITool to integrate with Blue Iris. But both I as an AITool user and others as DS-integrated Blue Iris users use motion detection on all cams to send jpegs, when motion is detected, to a folder that either AITool or Blue Iris with DS integration sends to DS to analyze for objects of interest, e.g. car, person, etc.

With AITool you can run multiple instances of DeepStack in order to process multiple jpegs simultaneously, aka in parallel. One instance = one jpeg, two instances = two jpegs, etc. When multiple cams detect motion while using BI-integrated DS, or if you're just using a single instance of DS with AITool, multiple jpegs can build up in the queue and it could take up to 3-4 seconds for DS to process those images.

With AITool you could set one instance of DS to process jpegs for SecCams 1-4 and another instance to process SecCams 5-8. But I don't use it that way; I set up my multiple instances so they just take the next jpeg from the queue.
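The shared-queue approach described above (workers just taking the next jpeg, rather than pinning cameras to instances) can be sketched as follows. This is an illustration of the pattern, not AITool's actual code; `analyze` stands in for whatever call posts one jpeg to one DeepStack instance:

```python
import queue
import threading


def run_workers(jpegs, analyze, n_instances=2):
    """Process jpegs with n_instances parallel workers, shared-queue style.

    Each worker takes the next image off a common queue, so no camera is
    tied to a particular DeepStack instance and idle instances never wait
    while another instance's queue backs up.
    """
    q = queue.Queue()
    for j in jpegs:
        q.put(j)

    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                j = q.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            r = analyze(j)
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_instances)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The design choice matches the post: fixed camera-to-instance mapping wastes capacity when one group of cams is busy and the other is quiet, while a shared queue keeps every instance fed.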


Also, from what I can see, if you want 4GB of memory the only option now is a Quadro P1000, which becomes expensive.
Nvidia Quadro T600 (40W), 4GB of memory. The Nvidia Quadro P1000 is 47W, so if you're using an SFF PC that could be pushing the limits on power draw from the x16 PCIe slot.

But the only reason you would need a GPU with 4GB of memory is if you wanted to decode video directly from your cams for Blue Iris. Most people, including myself, just use Intel Quick Sync for that. Decoding video has nothing to do with AI integration or DeepStack.

I run two or three instances of DeepStack, decode/encode video for Windows Remote Desktop, and decode/encode video for UI3 in BI with my Nvidia Quadro P400 with 2GB of memory, and don't even come close to using the 2GB.
 

What I was trying to ask is: do you run DeepStack on every camera even if it bogs down the Quadro P400 with more than 6 cameras, or do you run it only on specific cameras, e.g. 2 out of 6, to keep the processing demand low?
 
The P400 and the 1030 are comparable.

In benchmark tests online, it appears the 1030 has a slight advantage as it has more CUDA cores. So go with whichever is cheaper in your area.
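As a sanity check on the core counts quoted earlier in the thread: the GT 1030 was stated to have 384 CUDA cores and to be "50% more" than the P400, which implies 256 cores for the P400. A quick bit of arithmetic, assuming those figures and ignoring clock-speed and architecture differences:

```python
gt1030_cores = 384  # stated earlier in the thread
p400_cores = 256    # implied by "50% more CUDA cores than the P400"

# Relative core-count advantage of the GT 1030 over the P400.
advantage = gt1030_cores / p400_cores - 1
print(f"GT 1030 has {advantage:.0%} more CUDA cores than the P400")
```

Core count alone doesn't decide real-world DeepStack throughput (clocks, memory bandwidth, and driver generation all matter), which is why "go with whichever is cheaper" is reasonable advice here.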
 
I'd say the 1030, based on the CUDA core count.
 