5.4.8 - June 16, 2021 - A DeepStack status page has been added

[Attachment 92821: screenshot of the new DeepStack status page]

> "Apply to motion triggers only." Can someone give their opinion on the context of this option? Does the option restrict the use of DeepStack during alert confirmation to motion triggers only, and not... what? Or is this option designed to analyze DeepStack's previously confirmed and recorded clips? I can't make sense of it, and the help file, as far as I can tell, doesn't elaborate.
Agreed.
 
To me it says that if applied to "motion triggers" it will run video through DS once the camera is fully triggered. We had a fairly breezy day here today, and several of my cameras would sense motion but not trip due to the dancing shadows. In turn, DS was not invoked to analyze. If I hadn't checked that box, DS would have been running python.exe all day. Just my observation and opinion, and I certainly could be wrong.
 
> To me it says that if applied to "motion triggers" it will run video through DS once the camera is fully triggered. We had a fairly breezy day here today, and several of my cameras would sense motion but not trip due to the dancing shadows. In turn, DS was not invoked to analyze. If I hadn't checked that box, DS would have been running python.exe all day.
This was also my assumption, and it is my testing experience as well.

Suggest asking Ken for his design intent for this setting. And to clarify this in the help pdf. He’s always seemed open to feedback like this.
 
> To me it says that if applied to "motion triggers" it will run video through DS once the camera is fully triggered. We had a fairly breezy day here today, and several of my cameras would sense motion but not trip due to the dancing shadows. In turn, DS was not invoked to analyze. If I hadn't checked that box, DS would have been running python.exe all day. Just my observation and opinion, and I certainly could be wrong.
Oh. I thought BI only sends an image to DS for analysis when BI has been fully triggered, regardless of whether or not "Apply to motion triggers only" is checked, and that the meaning was as per JL-F1 #24. This also ties in with my previous unanswered query: does BI send its actual triggered image (amalgamation of iframe plus successive pframes) to DS for analysis, or wait up to one second to send the next iframe to DS? For my camera looking down the drive to the road, by the time a car crosses my trigger zone and the make time is added, the car has often gone and DS cancels. Or have I misunderstood how the system works?
 
I suspect, with no proof, that DS will be activated on motion detection, prior to a full trigger, if that box is not checked. Otherwise that box has no purpose that I can see.

If you notice, the image timing is set at one second. I believe that is so that a full frame, an iframe, is sent to DS when it is triggered. Again, there seems to be some logic to that, since an iframe is a complete frame. DS is not concerned with motion, only "still" frames.
 
> "Apply to motion triggers only." Can someone give their opinion on the context of this option? Does the option restrict the use of DeepStack during alert confirmation to motion triggers only, and not... what? Or is this option designed to analyze DeepStack's previously confirmed and recorded clips? I can't make sense of it, and the help file, as far as I can tell, doesn't elaborate.

I have audio triggers on several of my cameras and can confirm that if "Apply to motion triggers only" is checked and an audio trigger happens, the image does not get processed by DeepStack.

edit: I am still on 5.4.7.11
 
It is interesting reading everyone's take on what that check box means!

My motion settings are dialed in pretty well, and when I tried it with and without it checked, I didn't see a difference.

Since it isn't checked by default, I leave it off, but did try limited testing. I assumed maybe it was for a future update feature?
 
> I suspect, with no proof, that DS will be activated on motion detection, prior to a full trigger, if that box is not checked. Otherwise that box has no purpose that I can see.
>
> If you notice, the image timing is set at one second. I believe that is so that a full frame, an iframe, is sent to DS when it is triggered. Again, there seems to be some logic to that, since an iframe is a complete frame. DS is not concerned with motion, only "still" frames.
OK sebastiontombs, thanks for your appraisal; it seems to correlate with jaydeel's testing experience. However, to throw my cards on the table, I'm still with JL-F1 and I'll do my own test when I get home. Why would BI send low-level motion sensing without a full trigger to DS, when it surely won't reach the 40% minimum confidence level and will be cancelled? If what you suggest is true, wouldn't it call for a message to support?

Regarding your view that only iframes are sent to DS, I would say you're virtually certain to be correct. For me this is a shame; it causes wanted triggers to be cancelled. I don't know how video encoders/decoders work, but would comment that it's possible to view individual frames occurring at, say, 15 times per second. Isn't any one of them somehow able to be used to trigger BI? They don't all need to be iframes.
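The 40% minimum confidence mentioned above can be sketched as a simple filter. This is only a model of the confirm/cancel behaviour described in the thread, not BI's actual code; the function name and data shape are invented for illustration:

```python
def confirm_alert(predictions, min_confidence=0.40):
    """Model of the confirm/cancel decision: keep only DS predictions at or
    above the minimum confidence; cancel the alert if nothing qualifies.
    `predictions` is a list of (label, confidence) pairs."""
    hits = [(label, conf) for label, conf in predictions if conf >= min_confidence]
    return ("confirmed", hits) if hits else ("cancelled", [])
```

For example, `confirm_alert([("car", 0.87), ("person", 0.22)])` keeps only the car, while a frame with nothing above the threshold comes back cancelled, which matches the behaviour the posters describe for weak, pre-trigger motion.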
 
@Dave Lonsdale I guess both Ken and DS feel that a complete frame is required for analysis, just a speculative guess, to produce the best accuracy. Partial frames, depending on the target size, lighting conditions, yadda, yadda, could "confuse" DS, is my guess.
 
Don't let me discourage you all from continued hypotheses; just wanted to let you know I sent a request for information to support on this topic.
 
Great, let's see if you get an answer and, if you do, let's see what the real answers are. Preconceived ideas can lead to too many "what the f&^#%" moments :rofl:
 
> "Apply to motion triggers only." Can someone give their opinion on the context of this option? Does the option restrict the use of DeepStack during alert confirmation to motion triggers only, and not... what? Or is this option designed to analyze DeepStack's previously confirmed and recorded clips? I can't make sense of it, and the help file, as far as I can tell, doesn't elaborate.
Just tried it out. It makes it so DeepStack doesn't start analyzing external triggers until it detects actual motion in the video. Useful against false positives in some cases, even if static object detection should weed them out, but it also helps so you don't use up the "analyze additional images" queue until there is actual detectable motion on the screen. (By the way, you can now set "analyze additional images" up to 999, instead of the previous limit of 15, and then 30.)
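The behaviour just described, deferring analysis of an external trigger until real motion appears, can be modelled roughly like this. It is a sketch with invented names, not BI's implementation:

```python
def frames_for_analysis(frames, external_trigger, motion_only):
    """Pick which frames get sent to DS.
    `frames` is a list of (frame_id, has_motion) pairs in capture order.
    With the 'Apply to motion triggers only' behaviour (motion_only=True),
    frames from an external trigger are skipped until motion first appears;
    otherwise every frame is a candidate."""
    if not (motion_only and external_trigger):
        return [fid for fid, _ in frames]
    started = False
    out = []
    for fid, has_motion in frames:
        started = started or has_motion  # latch on first motion
        if started:
            out.append(fid)
    return out
```

Under this model an external trigger with no on-screen motion sends nothing to DS, which would explain both the reduced false positives and the preserved image queue.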
 
I find it odd that BI with DS would only use/require iframes, one a second, because if you use AiTool for DS, it uses the jpg snapshots BI makes, and you can set that timer, in the BI record tab, to less than one second per jpg snapshot.

That is a reason I delayed using DS in BI: similar to Dave Lonsdale, I have cars that move out of frame in less than one second, and I miss them using the BI/DS combo.
 
In some cases, I see that BI hits the DS endpoint faster than once a second, like 10 times a second, overwhelming my CPU. Other times it does the normal once a second thing. I haven't narrowed down when the former happens yet, or whether it's a good thing or not...
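If you are calling the DS endpoint from your own scripts (an AiTool-style setup) rather than letting BI manage it, a small client-side throttle guards against the faster-than-once-a-second bursts described above. This is a generic sketch, not anything BI or DeepStack itself exposes:

```python
import time

class DetectionThrottle:
    """Client-side rate limit: allow at most one detection request per
    `interval` seconds; frames arriving sooner are simply skipped."""

    def __init__(self, interval=1.0):
        self.interval = interval
        self._last = float("-inf")  # time of the last allowed request

    def should_send(self, now=None):
        """Return True if enough time has passed to send another frame."""
        now = time.time() if now is None else now
        if now - self._last >= self.interval:
            self._last = now
            return True
        return False
```

You would check `should_send()` before posting each snapshot to the detection endpoint, so a 10-frames-per-second burst still results in only one request per interval.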
 
If BI produced a jpg for DS, it would be produced from a complete frame, i.e. an iframe.
 
> If BI produced a jpg for DS, it would be produced from a complete frame, i.e. an iframe.
Hi sebastiontombs
Whilst I really do respect that your knowledge here is on a far higher level than mine, I'm not sure you are correct.

I am now able to log into my BI and have noticed the new check box function, "Begin analysis with motion-leading image". To check out what this may mean, with this box checked, I changed the iframe interval on my drive camera from 1 second to 10 seconds. Now, with 5 cars driving by and only in view for a little over one second, they all captured a jpg image, and DS confirmed them all.

Not an exhaustive test (around one hour), but doesn't it seem likely that an iframe was not needed, both to capture the jpg image and for whatever DS gets from BI for confirmation?

Could I encourage another forum member to try this? For 15fps I used an iframe interval of 150 for both main and substreams. Of course, I guess this test wouldn’t work if only recording when triggered.
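For anyone repeating the test: the frame-count interval cameras ask for is just the frame rate multiplied by the desired seconds between I-frames, matching the 150 used above. A trivial helper (the name is invented):

```python
def iframe_interval_frames(fps, seconds):
    """Convert a desired I-frame spacing in seconds into the frame-count
    interval many camera UIs expect, e.g. 15 fps x 10 s = 150 frames."""
    return round(fps * seconds)
```

So for a 15 fps stream, a 10-second spacing is an interval of 150 and the original 1-second spacing is 15.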
 
> Hi sebastiontombs
> Whilst I really do respect that your knowledge here is on a far higher level than mine, I'm not sure you are correct.
>
> I am now able to log into my BI and have noticed the new check box function, "Begin analysis with motion-leading image". To check out what this may mean, with this box checked, I changed the iframe interval on my drive camera from 1 second to 10 seconds. Now, with 5 cars driving by and only in view for a little over one second, they all captured a jpg image, and DS confirmed them all.
>
> Not an exhaustive test (around one hour), but doesn't it seem likely that an iframe was not needed, both to capture the jpg image and for whatever DS gets from BI for confirmation?
>
> Could I encourage another forum member to try this? For 15fps I used an iframe interval of 150 for both main and substreams. Of course, I guess this test wouldn't work if only recording when triggered.
Continued testing with a 10s iframe interval…

So far today, 37 cars confirmed but still 3 cancelled. This compares with a previous cancellation rate of approximately one in three with a 1s iframe interval. So not quite the conclusive result I would have preferred.

Also, I now notice that the help file states that 3 images following the trigger are considered for object detection prior to the start of the optional images at a 1 second interval for further analysis.

The 3 car images that were cancelled appeared "crisp" in the cancelled alert images list. So BI did pick them up, but they didn't have a red box round them. (They also have bombproof percentages when using testing and tuning.) Can somebody please suggest what the difference from the confirmed ones may be? I've looked at the DS status window, but the cancelled ones do not appear. My CPU usage seems very low, but perhaps I'll reduce the general AI "mode" setting from High down to Medium?

Perhaps I'll also reduce the iframe interval to 5s, to be shorter than my "end trigger unless re-triggered" setting of 6s, and increase my pre-trigger of 2.5s a bit.
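Under this thread's working assumption that only an I-frame yields a complete image for DS, the reasoning behind shortening the interval can be written as a quick check. The function and its inputs are invented for illustration, and it is only a model of the posters' argument, not of BI internals:

```python
def iframe_guaranteed_in_window(iframe_interval_s, pre_trigger_s, visible_s):
    """True when at least one I-frame must land inside the captured window
    (pre-trigger buffer plus the time the target stays in view), assuming
    I-frames arrive every `iframe_interval_s` seconds at worst."""
    return pre_trigger_s + visible_s >= iframe_interval_s
```

With a 10s interval, a 2.5s pre-trigger, and a car visible for just over a second, no I-frame is guaranteed, which would explain the remaining cancellations; a 1s interval always catches one.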
 
As of right now, tuning and live DS analysis are two separate things. Tuning will always find more because it is after-the-fact analysis, not real time needing to process quickly.

In tuning, DS has the ability to slow the frames down to check them (and the help file does mention the AI boxes may be delayed), whereas live it doesn't get that much time. So it is neat to see, but not yet there on correlation.

Now if everyone could get the blue frame around the entire image to know which picture was sent to DS, then it would start to provide some use.

Some people get a blue box outline around the outside of the entire picture, but many of us have not seen that. I have yet to get the blue border around the entire picture, meaning the whole border, not the blue rectangle for a stationary object within the picture.

So the question is, do you get the blue border around the entire image during tuning and playback that shows which image was sent to DS during live viewing?