> Dealbreaker for me is the possibility to run it on a different computer. I think DeepStack on the same machine with lots of cameras will eat a lot of CPU...

You can use DeepStack on another machine with the Blue Iris integration if you need to, or you can use it on the same machine. You just put the IP address of the second machine in BI. BI does seem to use more CPU time at this stage than AITools, but I am sure that will improve over time.
> So I have an Nvidia 970 card in my Blue Iris PC (4th Gen Intel), should I run the GPU version of DeepStack? Want to try native DeepStack in BI...

You can try the GPU version of DeepStack. Depends how many cameras you have and how much RAM you have.
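If you do run DeepStack on a second machine, the usual route (assuming Docker is available there) is the official `deepquestai/deepstack` image. The host port (83 here) is just an example; BI would then be pointed at `http://<second-machine-ip>:83`.

```shell
# CPU build: enable the object-detection endpoint and publish it on host port 83
# (DeepStack listens on port 5000 inside the container)
docker run -d -e VISION-DETECTION=True -p 83:5000 deepquestai/deepstack

# GPU build: same idea, but requires the NVIDIA container toolkit on the host
docker run -d --gpus all -e VISION-DETECTION=True -p 83:5000 deepquestai/deepstack:gpu
```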
I still use AITool as it gives a lot more granular control over things like the percentage size of a detected object and the confidence percentage per object (in the newer versions).
I have cameras up on the second story due to the way the place is laid out, and AITools can be set to pick up objects that the native BI integration misses.
Even if development on AITools stopped right now and didn't proceed further than where it is currently at, you would still have a lot more granular control than the native integration offers, and it would continue to be usable for a long time.
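The kind of granular control described above (minimum confidence per object, minimum/maximum object size as a percentage of the frame) boils down to a simple filter over DeepStack-style predictions. A minimal sketch: the field names (`label`, `confidence`, `x_min`, ...) follow DeepStack's detection response, while the thresholds are made-up examples, not AI Tool's actual defaults.

```python
def filter_detections(predictions, frame_w, frame_h,
                      min_confidence=0.6, min_area_pct=1.0, max_area_pct=80.0,
                      wanted_labels=("person", "car")):
    """Keep DeepStack-style predictions that pass label, confidence,
    and relative-size checks (box area as a percentage of the frame)."""
    frame_area = frame_w * frame_h
    kept = []
    for p in predictions:
        if p["label"] not in wanted_labels:
            continue
        if p["confidence"] < min_confidence:
            continue
        area = (p["x_max"] - p["x_min"]) * (p["y_max"] - p["y_min"])
        area_pct = 100.0 * area / frame_area
        if not (min_area_pct <= area_pct <= max_area_pct):
            continue
        kept.append(p)
    return kept

# A confident person detection passes; a tiny low-confidence car is dropped
preds = [
    {"label": "person", "confidence": 0.91,
     "x_min": 100, "y_min": 100, "x_max": 400, "y_max": 700},
    {"label": "car", "confidence": 0.42,
     "x_min": 0, "y_min": 0, "x_max": 30, "y_max": 20},
]
print([p["label"] for p in filter_detections(preds, 1920, 1080)])  # ['person']
```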
> I run DeepStack with lots of cameras on the same machine with no problem. I do use the GPU version. I'd still use AITool though.

I still use AI Tools for LPR, but have settled on BI Native. It starts DeepStack as a service, frees up a ton of memory, and its ignore-stationary-objects option works much better for me than AI Tool's dynamic masking. To be clear, this wasn't the fault of AI Tools, but a fault in the way Blue Iris sometimes sends a low-quality then a high-quality image (despite constant recording and an ample pre-record buffer).
> It would be great if there was a "face" function, where you could upload faces to DeepStack. Or more importantly: saving unknown faces.

If you use the Blue Iris native integration it's built in, including face detection.
It's not really built in. You still have to run DeepStack separately. (Just being picky with terminology.) That said, the BI integration does meet most needs.
All my custom use cases are now met by BI native. I still have one cam on AI Tools out of loyalty.
Thanks for the reply. Sorry, I should have mentioned it is an i7 with 32 GB of RAM. So I ended up installing the GPU version and I am up and running. Right now I have DeepStack on my Hik Doorbell cam to test it out. It is working, I am getting detections...
Which 4th Gen Intel is it? Is it a Core i5 or i7?
If you have at least 16GB RAM in your system, then DeepStack on the same machine is worth a try, though if you have more than about 3 cameras and a couple may trigger at once, then you will find a Core i5 will definitely lag out.
My primary system runs DS in a VM on Docker, but I have a test system with 3 x 5MP cameras and DeepStack installed locally using the BI integration. It has 16GB RAM and a Core i7 7700 CPU; when BI is processing AI it will regularly hit 80% CPU usage with the CPU version of DeepStack, and idle it will sit at 3 to 4% CPU usage. It is BI using the CPU time and not DeepStack, as I do monitor which programs are using the CPU time. This test system does not have an NVIDIA GPU, so the DeepStack GPU version won't run on it properly. I plan on obtaining an NVIDIA GPU of some kind for it to test the GPU version of DeepStack to see if there is any difference in object detection accuracy.
> @spammenotinoz 100% agreed. Sending high-res images to DeepStack, especially if using the face recognition API, takes a fair chunk of CPU time, even if you use the GPU version of DeepStack.

How did you get BI to create snapshots via the sub-stream? I couldn't find that setting; the only way I could was by setting up clones which then used the sub-stream as the main feed.
I don't send the full high-res image to DeepStack using AITools. My cameras all have three feeds: low res, medium res, and high res. I use the medium-res feed as the "sub stream" and this is the one Blue Iris sends to DeepStack. That said, I did not notice a huge difference in object detection between low, medium, and high res during experimentation. Where it makes a big difference is facial recognition, which I only do on 3 cameras.
I settled on sending the medium-res feed to AI Tools by using the medium res as the substream. When Blue Iris takes the snapshot, it uses the sub-stream. I don't use the high-res JPEG option, except for facial recognition.
> How did you get BI to create snapshots via the sub-stream? I couldn't find that setting; the only way I could was by setting up clones which then used the sub-stream as the main feed.

Sorry, bad wording on my part. I do use a clone camera that I use for "sub streams" for image captures.
Yes, but I don't use BI DeepStack. I need this in AITool.
> It would be great if there was a "face" function, where you could upload faces to DeepStack. Or more importantly: saving unknown faces.

There is.
If you have specific needs, the AI Tool is still very good. It's nothing to do with loyalty really. It's to do with what meets your needs best.
For me, the BI integration doesn't have the granularity I need, due to where my cameras are and what I need to detect.
I don't use telegram or any of that stuff as my external triggers are handled by MQTT triggers from Blue Iris.
BI integration tends to miss events that AI Tools will grab, but as I said that is really due to my situation and placement of cameras. I think the BI Integration will grab 98% of events for most home users.
AITools kicks butt with LPR though. Plate Recognizer integration in BI is really hit and miss.
I never had memory issues with AITools. I never found it to use much memory on its own, and I limited my JPEG snap folder to keeping images for up to 3 hours to save disk space.
The one thing I really hope the BI DeepStack integration will include in the future is the ability to use more than one DS server. I find that one DS server struggles to keep up with several cameras in a busy environment. The multiple-server and refinement-server options in AITools are a godsend for me; otherwise the lag/delay in processing would trigger the cameras at the wrong times.
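At its simplest, the multiple-server setup described above amounts to spreading detection requests across several DeepStack endpoints. A minimal round-robin sketch; the IP addresses and port are hypothetical examples:

```python
import itertools

class DeepStackPool:
    """Rotate over several DeepStack endpoints so no single server has
    to absorb a burst of snapshots from many cameras at once."""

    def __init__(self, urls):
        self._cycle = itertools.cycle(urls)

    def next_server(self):
        """Return the next endpoint to send a snapshot to."""
        return next(self._cycle)

pool = DeepStackPool([
    "http://192.168.1.10:83",   # hypothetical first detection server
    "http://192.168.1.11:83",   # hypothetical second detection server
])
print(pool.next_server())  # http://192.168.1.10:83
print(pool.next_server())  # http://192.168.1.11:83
print(pool.next_server())  # back to the first server
```

A refinement server, as in AITools, would be a second pass: only snapshots that trigger on the first pool get re-sent to a more accurate (slower) endpoint.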
> @austwhite, I am still running AI Tool 1.67 from the 1st post in this thread. Are you running a newer version from github? Is there a minimum NVidia GPU version for DS to work? I have a GTX 670 that I can put into my BI server. Thanks.

I am running a GTX 970 in my BI box with the DS GPU version, no problems, but I am just testing one cam right now. My card has 4 GB of memory, which I am hoping will be enough... How much memory does your 670 have? I had a GTX 760 card once that had 4 GB even though 2 GB was the norm; I paid more for it since I was using it as a gaming card back then.