Hell Yeah! Direct Deepstack Integration - 5.4.0 - March 31, 2021

If the iframe (key frame) interval matches the frames per second, it still means one full key frame per second is transmitted; 5, 10, 15, 25, 30 or 60 fps are all the same in that regard.
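In other words, the time between key frames is just the key frame interval divided by the frame rate; a quick sketch of that arithmetic, purely for illustration:

```python
# Seconds between key frames when the key frame (iframe) interval is set
# equal to the frame rate: always one second, whatever the fps.
for fps in (5, 10, 15, 25, 30, 60):
    iframe_interval = fps  # one key frame every `fps` frames
    print(f"{fps:>2} fps -> key frame every {iframe_interval / fps:.1f} s")
```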
 
Thinking about this, I’ve realised that the requirement of waiting for a key frame to arrive from the camera before an image can be sent to DeepStack, at one-second intervals, is not valid. When recording continuously, Blue Iris has already memorised the preceding key frame. Prior to DeepStack, complete alert images were always captured in BI the instant they “turned red”, regardless of when key frames occur.

So this means that I no longer understand why DeepStack cancels many of the cars that are only in view for roughly one second, when BI has captured one perfect image immediately following the pick-up time.

Can anyone please explain?
 
Could be that the moment when a fast-moving car was in view happens to fall in the one-second interval between two DeepStack sends. The DS analyzer wouldn’t pick up on this coincidence, because I don’t think it sends the exact frames it sent during the original incident.

If that’s the case, I wouldn’t mind BI keeping a stack of previous motion clips to run through DS later at a higher time resolution when things are more idle (a stack, in case things are too busy to get all the analysis in, so there’s a best effort to analyze the most recent clips).
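That timing argument is easy to sanity-check with a quick simulation; the 0.6 s visibility figure below is an assumption for illustration, not something measured in this thread:

```python
import random

def miss_probability(duration_s: float, period_s: float, trials: int = 100_000) -> float:
    """Estimate the chance that an object visible for duration_s seconds
    falls entirely between two DeepStack sends period_s apart."""
    misses = 0
    for _ in range(trials):
        # Phase of the first send after the object appears, uniform in [0, period).
        offset = random.uniform(0.0, period_s)
        # A send lands inside the visibility window iff offset < duration.
        if offset >= duration_s:
            misses += 1
    return misses / trials

for period in (1.0, 0.5, 0.25):
    print(f"send every {period:.2f}s -> miss chance "
          f"{miss_probability(0.6, period):.0%} for a 0.6s pass")
```

At a 1 s interval, a 0.6 s pass is missed roughly 40% of the time; once the interval drops below the visibility time, at least one send is guaranteed to land inside the window.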
 
I just moved from AITool to using DeepStack inside BI. It is a much better user experience. I don't have all the JPGs on the alert list any more :).

I am pretty sure that I am running the GPU version of DeepStack. But looking into C:\deepstack, there are both CPU and GPU references in the file names. Is there a way to confirm that Deepstack is indeed using my GPU and not my CPU?
 
Watch in Task Manager for changes in GPU utilization when DS is analyzing an alert.
 
I tried that, and I am not seeing much change in the graphs when I manually trigger a camera (I assume that will trigger DeepStack to analyze pictures generated by my manual trigger). I am using hardware-accelerated decoding, so there is already a lot of work on the GPU’s Video Decode graph.

 
Well, then does the CPU spike when DS is analyzing? It's gotta be one or the other.
 
I started DeepStack in PowerShell to see its console output. The output gives no indication of whether it is using the CPU or GPU. However, with Task Manager’s GPU view next to the console output, I triggered 6 DeepStack-enabled cameras. As you can see in the video, when the console has activity, the CUDA graph also shows utilization.

I miss AITool’s history view and Telegram notifications, but not enough to keep using AITool now that DeepStack is fully integrated into BI.

View attachment 2021-05-29 16-58-46.mp4
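For a more direct check than eyeballing the Task Manager graphs, one option (assuming an NVIDIA card and the nvidia-ml-py package, neither mentioned above) is a small polling sketch:

```python
import time
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0; adjust if you have several
try:
    while True:
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        print(f"GPU (CUDA) {util.gpu:3d}%  memory {util.memory:3d}%")
        time.sleep(0.5)  # poll twice a second while triggering cameras
finally:
    pynvml.nvmlShutdown()
```

Run it in a second console while triggering cameras; if the CUDA utilization jumps whenever the DeepStack console shows activity, DS is on the GPU.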
 
I see that the deepstack.cc guide says to install Docker on Windows and run DeepStack as a Docker container (the opposite of the guide shared later in their docs).
Is Docker the better way?
 

Typically, those running it in Docker are using one of the 3rd-party add-ons and all the customization they provide, whereas those looking for the simple solution without all the customization (yet) are installing the CPU or GPU version and letting BI do its thing.
 
When I tested this way back using AITool, running DeepStack in Docker resulted in much less CPU usage than running the Windows version of DeepStack.

I never tested to see whether there were any processing-time differences, though, so that might be something worth testing.
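For anyone who wants to test that, a rough timing sketch against DeepStack's HTTP detection endpoint would do; the URL, port, and image path below are placeholders for your own setup:

```python
import time
import requests  # pip install requests

# Placeholders: point at your own DeepStack instance and a representative snapshot.
DS_URL = "http://localhost:5000/v1/vision/detection"
IMAGE = "sample_alert.jpg"

with open(IMAGE, "rb") as f:
    image_bytes = f.read()

timings = []
for _ in range(20):
    start = time.perf_counter()
    r = requests.post(DS_URL, files={"image": image_bytes})
    r.raise_for_status()
    timings.append((time.perf_counter() - start) * 1000.0)

timings.sort()
print(f"median {timings[len(timings) // 2]:.0f} ms, "
      f"min {timings[0]:.0f} ms, max {timings[-1]:.0f} ms")
```

Run it once against the Docker container and once against the Windows install with the same image, and compare.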
 
Anyone have a way to actually get these confirmed alert images (snapshots only, not the clip) backed up somewhere? The alerts' "FTP image upload" option doesn't actually use the alert image. All other options seem to include non-confirmed alerts as well.
Did you find a solution for this?
 
I wrote a script to copy the alert image elsewhere when certain AI objects are detected, like faces, using the &ALERT_PATH macro (which is not even a path; it’s just a filename). It works most of the time, but BI craps out every now and then.
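The script itself wasn't posted, but a minimal sketch of that approach might look like the following; the folder paths are placeholders, and it assumes BI's "Run a program" alert action is set to pass &ALERT_PATH as the first argument and to fire only on confirmed alerts:

```python
import shutil
import sys
from pathlib import Path

# Both folders are assumptions -- point them at your own BI alerts
# folder and wherever you want the copies to land.
ALERTS_DIR = Path(r"C:\BlueIris\Alerts")
BACKUP_DIR = Path(r"D:\AlertBackup")

def main() -> None:
    # BI passes &ALERT_PATH as the first argument; per the post above,
    # it is just a filename, not a full path, so join it to ALERTS_DIR.
    alert_name = sys.argv[1]
    src = ALERTS_DIR / alert_name
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, BACKUP_DIR / alert_name)

if __name__ == "__main__":
    main()
```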
 
Would you mind sharing your script?
 
I asked Ken about sending images to DS faster than the current 1 second interval. This is what he said:

“Next version will allow overlapping analysis. 250 and 500 msec options. Regardless of the processing time. This will cause some people to run lower on ram.”

Update: Out now, there it is:

Screen Shot 2021-06-30 at 03.11.05 PM.png

By increasing this to 500 ms, I'm now able to "confirm" cars that drive through the LPR frame quickly. Previously, at 1 sec, it would often miss them when only part of the vehicle was visible at the edge of the frame.
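The arithmetic behind that checks out: once the send interval is shorter than the time the plate stays readable, at least one analysis is guaranteed to land during the pass. A tiny sketch, with the 0.7 s readable time as an assumed figure:

```python
# How many DeepStack analyses land on a fast LPR pass, assuming the plate
# stays readable for 0.7 s (an assumed figure, not from the thread).
readable_s = 0.7
for interval_ms in (1000, 500, 250):
    interval_s = interval_ms / 1000.0
    expected = readable_s / interval_s          # average analyses per pass
    guaranteed = interval_s <= readable_s       # a send must land in the window
    print(f"{interval_ms:>4} ms interval -> ~{expected:.1f} analyses, "
          f"guaranteed at least one: {guaranteed}")
```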
 
A big thank you to all who have contributed and tested on this thread; it helped me a great deal to get good daytime results with DeepStack. Now working on the ExDark model for night use and attempting to link it with profiles :thumb:

Side note: I believe increasing the camera frame rate helped AI detection on some of my cameras, mainly with moving objects like cars and bikes.
 
Managed to secure a confirmed pre-order EVGA GTX 1660 6GB at MSRP, amazingly; some are selling at almost twice that price. As soon as I get it installed I will report back regarding DeepStack performance and times. Old news probably, as it seems like sub-100 ms times should be possible depending on the model(s) used. I'm interested to see how it compares to CPU performance in both times and accuracy. I know some are using the P400 cards, but I also noticed some 1660 and even 2060 users; justifying a 2060 is tough for me, but I can deal with its smaller brother at retail price.
 