Hell Yeah! Direct Deepstack Integration - 5.4.0 - March 31, 2021

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,690
Location
New Jersey
If the iframe (key frame) interval matches the frames per second, it still means one full key frame per second is transmitted; 5, 10, 15, 25, 30 or 60 FPS are all the same in that regard.
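A quick bit of arithmetic to illustrate (just a sketch):

```python
# Key frames per second = FPS divided by the key frame interval (in frames).
# Whenever the interval equals the FPS, you get exactly one key frame per second,
# whatever the FPS is.
for fps in (5, 10, 15, 25, 30, 60):
    keyframe_interval = fps                      # "iframe matches the frames per second"
    keyframes_per_second = fps / keyframe_interval
    print(f"{fps:>2} FPS, interval {keyframe_interval:>2} -> {keyframes_per_second:.0f} key frame/s")
```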
 

Dave Lonsdale

Pulling my weight
Joined
Dec 3, 2015
Messages
456
Reaction score
195
Location
Congleton Edge, UK
Thinking about this, I’ve realised that the supposed requirement of waiting for a key frame to arrive from the camera before an image can be sent to DeepStack, and only at one-second intervals, is not valid. When recording continuously, Blue Iris has already memorised the preceding key frame. Prior to DeepStack, complete alert images have always been captured in BI the instant they “turned red”, regardless of when key frames occur.

So this means that I no longer understand why DeepStack cancels many of the cars that are only in view for roughly one second, when BI has captured one perfect image immediately following the pick-up time.

Can anyone please explain?
 

m_listed

Getting the hang of it
Joined
Jun 11, 2016
Messages
176
Reaction score
57
Thinking about this, I’ve realised that the supposed requirement of waiting for a key frame to arrive from the camera before an image can be sent to DeepStack, and only at one-second intervals, is not valid. When recording continuously, Blue Iris has already memorised the preceding key frame. Prior to DeepStack, complete alert images have always been captured in BI the instant they “turned red”, regardless of when key frames occur.

So this means that I no longer understand why DeepStack cancels many of the cars that are only in view for roughly one second, when BI has captured one perfect image immediately following the pick-up time.

Can anyone please explain?
Could be that the moment when a fast-moving car was in view happens to fall within the one-second interval between two DeepStack sends. The DS analyzer wouldn’t pick up on this coincidence because I don’t think it re-sends the exact frames it sent during the original incident.

If that’s the case, I wouldn’t mind BI keeping a stack of previous motion clips to run through DS later, at a higher time resolution, when things are more idle (a stack in case things are too busy to get all the analysis in, so there’s a best effort to analyse the most recent clips).
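To illustrate the kind of miss I mean (made-up numbers, just a sketch):

```python
# With DeepStack fed one image per second, a car that is only in frame for ~0.9 s can
# slot entirely between two sends, so none of the analyzed images ever contain it.
def sends_hitting_window(appear_s: float, leave_s: float, interval_s: float) -> int:
    """Count analysis sends (at t = 0, interval, 2*interval, ...) that occur while the object is visible."""
    count, t = 0, 0.0
    while t <= leave_s:
        if appear_s <= t:
            count += 1
        t += interval_s
    return count

# Car appears 0.05 s after a send and is gone 0.9 s later: the 1 s cadence misses it completely.
print(sends_hitting_window(appear_s=0.05, leave_s=0.95, interval_s=1.0))  # prints 0
```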
 

Futaba

Pulling my weight
Joined
Nov 13, 2015
Messages
220
Reaction score
153
I just moved from AITool to using DeepStack inside BI. It is a much better user experience. I don't have all the JPGs on the alert list any more :).

I am pretty sure that I am running the GPU version of DeepStack, but looking in C:\deepstack, there are both CPU and GPU references in the file names. Is there a way to confirm that DeepStack is indeed using my GPU and not my CPU?
 

Futaba

Pulling my weight
Joined
Nov 13, 2015
Messages
220
Reaction score
153
I tried that and I am not seeing much change in the graphs when I manually trigger a camera (I assume that will trigger DeepStack to analyze pictures generated by my manual trigger). I am using GPU encode, so there is already a lot of work on the GPU Video Decode graph.

[Screenshot: Task Manager GPU graphs]
 

Futaba

Pulling my weight
Joined
Nov 13, 2015
Messages
220
Reaction score
153
I started DeepStack in PowerShell to see its console output. The output gives no indication of whether it is using the CPU or the GPU. However, with Task Manager's GPU view next to the console output, I triggered 6 DeepStack-enabled cameras. As you can see in the video, when the console shows activity, the CUDA graph also shows utilization.

I miss AITool's history view and Telegram notifications, but not enough to keep me using AITool now that DeepStack is fully integrated into BI.

[Video attachment: DeepStack console output alongside Task Manager's CUDA graph]
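For anyone else wanting to double-check, another option (assuming the NVIDIA driver tools are installed) is nvidia-smi, which can list the processes doing CUDA work; DeepStack's worker should appear there while it is analyzing. A quick sketch:

```python
# List GPU compute processes via nvidia-smi (ships with the NVIDIA driver).
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-compute-apps=pid,process_name,used_memory", "--format=csv"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # look for DeepStack's python/server process in this list
```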
 

Chura

Getting the hang of it
Joined
Mar 9, 2018
Messages
160
Reaction score
45
I can see that the deepstack.cc guide says to install Docker on Windows and run DeepStack as a Docker container (the opposite of the guide shared later in their docs).
Is Docker the better way?
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,428
Reaction score
47,549
Location
USA
I can see that the deepstack.cc guide says to install Docker on Windows and run DeepStack as a Docker container (the opposite of the guide shared later in their docs).
Is Docker the better way?
Typically, those running it in Docker are using one of the 3rd-party add-ons and all the customization they provide, whereas those looking for the simple solution, but without all the customization (yet), are installing the CPU or GPU version and letting BI do its thing.
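For reference, the Docker route boils down to a single docker run. A minimal sketch (assuming Docker is installed and that the image name, the VISION-DETECTION flag and the internal port 5000 are still what deepstack.cc documents; the GPU build is the :gpu tag and additionally needs --gpus all plus the NVIDIA container toolkit):

```python
# Start the DeepStack CPU container and expose it on host port 83, then point BI at that port.
import subprocess

subprocess.run([
    "docker", "run", "-d",
    "-e", "VISION-DETECTION=True",    # enable the object-detection endpoint
    "-v", "localstorage:/datastore",  # persist DeepStack's data between restarts
    "-p", "83:5000",                  # host port 83 -> DeepStack's internal port 5000
    "deepquestai/deepstack",
], check=True)
```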
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,287
Reaction score
3,252
Location
United Kingdom
When I tested this way back using AITool, running DeepStack in Docker resulted in much less CPU usage than running the Windows version of DeepStack.

I never tested whether there were any processing-time differences though, so that might be something worth testing.
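A rough way to test it (a sketch; it assumes DeepStack is listening on port 83 and that a test JPEG exists at the path shown, both of which you would adjust): post the same image to the detection endpoint a few times against each install and compare the average round trip.

```python
# Time repeated detection requests against a running DeepStack instance.
import time
import requests

URL = "http://127.0.0.1:83/v1/vision/detection"        # DeepStack's object-detection endpoint

with open(r"C:\BlueIris\Alerts\test.jpg", "rb") as f:  # hypothetical test image
    image = f.read()

timings = []
for _ in range(10):
    start = time.perf_counter()
    response = requests.post(URL, files={"image": image}, timeout=30)
    response.raise_for_status()
    timings.append(time.perf_counter() - start)

print(f"mean round trip: {sum(timings) / len(timings) * 1000:.0f} ms")
```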
 

nutiserver

n3wb
Joined
Jun 27, 2021
Messages
2
Reaction score
0
Location
Estonia
Anyone have a way to actually get these confirmed alert images (snapshots only, not the clip) backed up somewhere? The alerts' "FTP image upload" option doesn't actually use the alert image. All other options seem to include non-confirmed alerts as well.
Did you find a solution for this?
 

m_listed

Getting the hang of it
Joined
Jun 11, 2016
Messages
176
Reaction score
57
Did you find a solution for this?
I wrote a script to copy the alert image elsewhere when certain AI objects are detected, like faces, using the &ALERT_PATH macro (which is not even a path, it’s just a filename). It works most of the time, but BI craps out every now and then.
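The gist is roughly this (a trimmed-down sketch, not my exact script; the folder paths are assumptions you would change for your own install). BI's "Run a program" action on a confirmed alert passes &ALERT_PATH as an argument; since the macro is only a filename, the script joins it onto the Alerts folder and copies the JPEG somewhere safe.

```python
# Copy the alert JPEG named by BI's &ALERT_PATH macro to a backup folder.
import shutil
import sys
from pathlib import Path

ALERTS_DIR = Path(r"C:\BlueIris\Alerts")   # default BI alerts folder (assumption)
BACKUP_DIR = Path(r"D:\AlertBackups")      # wherever the copies should go

def main() -> None:
    alert_name = sys.argv[1]               # the value BI substitutes for &ALERT_PATH
    source = ALERTS_DIR / alert_name
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    shutil.copy2(source, BACKUP_DIR / source.name)

if __name__ == "__main__":
    main()
```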
 

nutiserver

n3wb
Joined
Jun 27, 2021
Messages
2
Reaction score
0
Location
Estonia
I wrote a script to copy the alert image elsewhere when certain AI objects are detected, like faces, using the &ALERT_PATH macro (which is not even a path, it’s just a filename). It works most of the time, but BI craps out every now and then.
Would you mind sharing the full script?
 

aesterling

Getting comfortable
Joined
Oct 9, 2017
Messages
352
Reaction score
346
I asked Ken about sending images to DS faster than the current 1 second interval. This is what he said:

“Next version will allow overlapping analysis. 250 and 500 msec options. Regardless of the processing time. This will cause some people to run lower on ram.”

Update: it's out now, here it is:

[Screenshot: the new analysis interval setting in BI]

By increasing this to 500 ms, I'm now able to "confirm" cars that drive through the LPR frame quickly. Previously, at 1 sec, it would often miss them when only part of the vehicle was visible at the edge of the frame.
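Back-of-envelope illustration of why (numbers made up): a plate that is readable in frame for about 0.8 s is guaranteed at most the counts below at each analysis interval, assuming the worst case where the first send lands just after it appears.

```python
# Guaranteed (worst-case) number of analyzed frames within a visibility window.
def guaranteed_frames(visible_s: float, interval_s: float) -> int:
    return int(visible_s // interval_s)

for interval in (1.0, 0.5, 0.25):
    print(f"{interval * 1000:>4.0f} ms interval -> at least {guaranteed_frames(0.8, interval)} analyzed frame(s)")
```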
 
Last edited:

CamCrazy

Pulling my weight
Joined
Aug 23, 2017
Messages
416
Reaction score
194
Location
UK
Big thank you to all who have contributed and tested in this thread; it has helped me a great deal to get good daytime results with DeepStack. Now working on the ExDark model for night use and attempting to link it with profiles :thumb:

Side note: I believe increasing the camera frame rate helped AI detection on some of my cameras, mainly with moving objects like cars and bikes.
 

CamCrazy

Pulling my weight
Joined
Aug 23, 2017
Messages
416
Reaction score
194
Location
UK
Managed to secure a confirmed pre-order for an EVGA GTX 1660 6GB at MSRP, amazingly; some are selling at almost twice that price. As soon as I get it installed I will report back on DeepStack performance and times. Old news probably, as it seems sub-100 ms times should be possible depending on the model(s) used. Interested to see how it compares to CPU performance in both times and accuracy. I know some are using the P400 cards, but I also noticed some 1660 and even 2060 users; justifying a 2060 is tough for me, but I can deal with its smaller brother at retail price.
 
Last edited: