Do IPCT members just use DeepStack on a single point camera then and use just motion detection on the others?
Not sure I understand your question, but I'll give it a shot... Personally I run DeepStack with AITool to integrate with Blue Iris. But both AITool users like me and people running the DS-integrated Blue Iris use motion detection on all cams: when motion is detected, a JPEG is saved to a folder, and either AITool or Blue Iris with DS integration sends it to DS to analyze for objects of interest, i.e. car, person, etc.
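For anyone curious what that handoff actually looks like, here's a minimal Python sketch of the kind of request that gets made against DeepStack's detection endpoint. The port and the JPEG path are just assumptions for illustration; this isn't AITool's or BI's actual code:

```python
import requests

# Assumed: a DeepStack instance listening on port 5000 (you pick the port
# when you start DeepStack), and an example motion-triggered JPEG from BI.
DEEPSTACK_URL = "http://localhost:5000/v1/vision/detection"
JPEG_PATH = "aiinput/SecCam1.20240101_120000.jpg"

with open(JPEG_PATH, "rb") as f:
    response = requests.post(DEEPSTACK_URL, files={"image": f})

# DeepStack returns a list of predictions, each with a label,
# a confidence score, and a bounding box.
for pred in response.json().get("predictions", []):
    print(f"{pred['label']}: {pred['confidence']:.2f} "
          f"box=({pred['x_min']},{pred['y_min']})-({pred['x_max']},{pred['y_max']})")
```

AITool (or BI) then checks those labels and confidences against what you've told it to care about before firing alerts.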
With AITool you can run multiple instances of DeepStack in order to process multiple JPEGs simultaneously, aka in parallel. One instance = one JPEG, two instances = two JPEGs, etc. When multiple cams detect motion while using BI-integrated DS, or if you're just using a single instance of DS with AITool, multiple JPEGs can build up in the queue and it could take up to 3-4 seconds for DS to process those images.
With AITool you could set one instance of DS to process JPEGs for SecCams 1-4 and another instance to process SecCams 5-8. But I don't use it that way; I set up my multiple instances so they just take the next JPEG from the queue, roughly like the sketch below.
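As a rough illustration of that "next JPEG from the queue" behavior (not AITool's actual code, just a sketch assuming two DS instances on ports 5001 and 5002):

```python
import queue
import threading
import requests

# Hypothetical setup: two DeepStack instances started on different ports.
DS_INSTANCES = ["http://localhost:5001", "http://localhost:5002"]

jpeg_queue = queue.Queue()

def worker(ds_url):
    # Each worker is tied to one DS instance and just grabs whatever
    # JPEG is next in the shared queue, regardless of which cam it came from.
    while True:
        jpeg_path = jpeg_queue.get()
        with open(jpeg_path, "rb") as f:
            r = requests.post(f"{ds_url}/v1/vision/detection", files={"image": f})
        print(ds_url, [p["label"] for p in r.json().get("predictions", [])])
        jpeg_queue.task_done()

for url in DS_INSTANCES:
    threading.Thread(target=worker, args=(url,), daemon=True).start()

# Motion-triggered JPEGs land in the queue; whichever instance
# is free takes the next one, so nothing sits waiting on a busy instance.
for path in ["aiinput/SecCam1.jpg", "aiinput/SecCam5.jpg", "aiinput/SecCam7.jpg"]:
    jpeg_queue.put(path)

jpeg_queue.join()  # wait until every queued JPEG has been processed
```

The upside of pooling like this instead of pinning cams to instances is that a burst of motion on one cam can't back up behind an idle instance.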
Also, from what I can see, if you want 4GB of memory the only option now is a Quadro P1000, which becomes expensive.
There's also the Nvidia Quadro T600 (40W) with 4GB of memory. The Nvidia Quadro P1000 is 47W, so if you're using an SFF PC that could be pushing the limits on power draw from the x16 PCIe slot (spec'd for 75W).
But the only reason you would need a GPU with 4GB of memory is if you wanted to decode video directly from your cams for Blue Iris. Most people, myself included, just use Intel Quick Sync for that. Decoding video has nothing to do with AI integration or DeepStack.
I run two or three instances of DeepStack, decode/encode video for Windows Remote Desktop, and decode/encode video for UI3 in BI on my Nvidia Quadro P400 with 2GB of memory, and don't even come close to using it all.