5.4.8 - June 16, 2021 - A DeepStack status page has been added

I do know that BI starts motion detection triggers on an iframe, or at least it has in the past. It is entirely possible that has changed somewhere along the line as a result of other changes like sub streams and DS. Another member here has sent an email to BI support asking these questions. If/when those questions are answered we'll all know what's really going on.
 
Thanks for the quick replies guys.
wittaj, unfortunately I have never seen a blue border around the entire image. By the way, I am very much aware of the difference between tuning and live DS.
sebastiontombs, do you not think that, as a proving exercise, extending the iframe interval well beyond the time for which a moving object is in view is statistically a valid test? If you have similar missed triggers, could I not persuade you to try it also?
 
I've fooled with all the settings and done a fair amount of testing, too. My conclusion is that DS struggles with lighting and contrast issues more than anything else when it comes to missed identifications. In my particular case I have two cameras providing opposing views of the area I'm most interested in. That gives me two different lighting/contrast viewpoints so one or the other always identifies, until after dusk. Then it's very hit or miss.
 

My bad, I took "They also have bombproof percentages when using testing and tuning" as a question as to why BI/DS didn't recognize them live? All of my tuning percentages are way higher than any live percentage.

Unfortunately, until all of us can get that blue border, it is speculation as to why DS didn't pick up on something.

But I have noticed in certain situations, it can struggle live - dusk/dawn type. A grey car against a grey street if the camera view is too high. A dark car against green grass or dark trees if the camera view is too low, a tough field of view, etc.
 
I've also noticed, by using the DeepStack status display, that cars nearly overlapping/in front of stationary cars get ignored. I've also recently had an issue where it will inconsistently analyze the extra frames, even though I have 'giraffe' in cancel and it should always analyze the extra 10 frames/sec I have set unless it sees a giraffe.
 
From Ken on the question I posed several days ago:

"Trigger sources are: Motion, Audio, Group, External, ONVIF, DIO

Unless you check this box, DeepStack will be applied to group, audio, DIO, etc. When you right-click a camera and "Trigger now" that's an External trigger."
 
Continued testing with a 10s iframe interval…

So far today, 37 cars confirmed but still 3 cancelled. This compares with a cancellation rate previously of approx one in three with a 1s iframe interval. So not quite the conclusive result I would have preferred.
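For what it's worth, the comparison above (3 cancelled out of 40 cars at the 10s iframe interval, versus roughly 1 in 3 cancelled at 1s) can be sanity-checked with a quick binomial calculation. The counts and the 1-in-3 baseline are taken from the posts; everything else is my own back-of-envelope sketch:

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Observed with the 10s iframe interval: 3 cancelled out of 40 cars.
# Baseline cancellation rate with the 1s interval: roughly 1 in 3.
p_value = binom_cdf(3, 40, 1 / 3)
print(f"P(<=3 cancellations in 40 at the old 1-in-3 rate) = {p_value:.2e}")
```

A probability this small only says the cancellation rate genuinely changed with the iframe interval; it doesn't by itself say why.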

Also, I now notice that the help file states that 3 images following the trigger are considered for object detection prior to the start of the optional images at a 1 second interval for further analysis.

The 3 car images that were cancelled appeared to be “crisp” in the cancelled alert images list. So BI did pick them up, but they didn't have a red box around them. (They also have bombproof percentages when using testing and tuning.) Can somebody please suggest what the difference from those confirmed may be? I’ve looked at the DS status window, but the cancelled ones do not appear. My CPU seems to be very low, but perhaps I’ll reduce the general AI setting for “mode” down from High to Medium?

Perhaps I’ll also reduce the iframe interval to 5s to be shorter than my end trigger unless re-triggered setting of 6s and increase my pre-trigger of 2.5s a bit.
After a further full day (dawn till dusk) with the revised settings (iframe 5s, mode medium, pre-trigger 4s), 68 cars passed the end of my drive (in view for only a little over one second). 68 were confirmed and 0 cancelled. I have now changed only the iframe interval back to 1s. If the result is similar tomorrow, this would appear to be conclusive proof that with “Begin analysis with motion-leading image” checked, BI does not wait to send the next iframe to DS.

One thing I have not checked is whether some cars passed the end of my drive and were missed altogether - neither confirmed nor cancelled. I think it’s unlikely, and for the moment it would be very tedious to carefully check all the footage for the entire day.
 
Clone the camera and setup motion detection on the clone, then you can check.
 
Today’s test with “Begin analysis with motion-leading image” checked and the iframe interval back at 1s instead of 5s wasn’t perfect this time for cars passing the end of my drive - 81 confirmed but 5 cancelled. To be provocative, are the cancelled triggers caused when an iframe happens to occur within the initial triple measurement window? I don’t think the help file indicates by how many frames the 2 additional images are separated, but is there in any case a reason to believe the image data, or its timing, that BI sends to DS from a clean iframe is different from one created from a combination of frames? I’m going to give it another day with mode set back to High to see what happens.
 
Have you tried setting record to 'when triggered' so it records even if DS cancels it and then opening the DS status page while viewing a recorded cancelled alert?

The DS status page tells you exactly when, in ms, the frames that were analyzed happened

For example in mine the motion leading image is usually around T-200ms and then the first image is ~T+400ms, to me that says the first two analyzed images happened 600ms apart, not one second. Then every image after that is in one sec increments

unless I am reading everything wrong

Here is an example (the first two are 514 ms apart: T-200 (leading image), then T+314 (first non-leading image)):


[Attached screenshot: 1624576809133.png]
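The offsets shown on that status page can be checked mechanically. A minimal sketch, using the T-200/T+314 figures quoted above (the helper name is my own, not anything from BI or DS):

```python
def parse_offset_ms(label: str) -> int:
    """Turn a status-page label like 'T-200' or 'T+314' into a signed
    millisecond offset relative to the trigger time T."""
    sign = -1 if label[1] == "-" else 1
    return sign * int(label[2:])

offsets = [parse_offset_ms(s) for s in ["T-200", "T+314"]]
gap_ms = offsets[1] - offsets[0]
print(gap_ms)  # 514 ms between the leading image and the first normal image
```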
 
Thanks for the info JL-F1. Today I had one passing car cancelled. At the moment my recording is set to 'continuous + triggered' and not 'when triggered' but when I do the DS status thing with it, the result is below.

[Attached screenshot: Screenshot 2021-06-25 224040.png]

Of course, this was not performed live and I don't understand why it would be valid. Is there any chance you could explain please?
 
For anyone interested, your analysis .dat files are located in your Alerts folder. As this update specifies, you can open your BI status window to the Deepstack tab, minimize BI if you need screen room, then drag the applicable camera's .dat file into the status window.

Edit, I believe you must check "Save DeepStack analysis details" in cam properties>trigger>AI in order to have a .dat file to drag and drop.
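As a complementary check to the .dat files, a cancelled alert JPEG can be sent straight to DeepStack's own detection endpoint, bypassing BI entirely, to see what confidences come back. A sketch assuming DeepStack listens on localhost port 80 and the third-party requests library is installed; the filtering helper and file name are my own illustration:

```python
def detect(image_path: str, host: str = "http://localhost:80"):
    """POST an image to DeepStack's object detection endpoint and
    return its list of predictions (label, confidence, bounding box)."""
    import requests  # third-party: pip install requests
    with open(image_path, "rb") as f:
        r = requests.post(f"{host}/v1/vision/detection", files={"image": f})
    r.raise_for_status()
    return r.json().get("predictions", [])

def cars_above(predictions, min_confidence: float):
    """Keep only 'car' predictions at or above the given confidence."""
    return [p for p in predictions
            if p["label"] == "car" and p["confidence"] >= min_confidence]

if __name__ == "__main__":
    # Hypothetical file name for a saved cancelled-alert image.
    preds = detect("cancelled_alert.jpg")
    print(cars_above(preds, 0.30))  # 30% threshold, as discussed in the thread
```

If DeepStack itself reports the car at a decent confidence here but BI still cancels, the problem is in BI's timing or zone logic rather than in the model.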
 
Dave: turn off 'test and tune' while watching the vid. The part with the .bvr is the 'live analysis' and as you say useless.

The top section with the .dat is what happened

then click on each 'T+xxx' event and see how well you can see the car in the snapshot. It will show what the scene looked like at each interval it took a snap

according to the manual: "A red X shows an object that was either insignificant or below the confidence threshold."

and those time clock things showed because it thought it was a static object, so it ignored it.


what is your confidence set at?

I'd guess that car was unusually fast or they stopped or slowed almost to a stop?
 
It also appears you must have Object Detection checked in cam properties>Trigger>Configuration in order to see the highlighted motion boxes when analyzing the .dat file... otherwise they won't be available.
 
What brilliant help guys! Thank you - although I still have a way to go to get to grips with it all. What brilliant software tools! I've updated to 5.4.9 and done a new cancelled analysis:

[Attached screenshot: Screenshot 2021-06-26 123647.png]

I think the picture relates to the lower of the two analysis details, and I can now see from the asterisk that the image used is after the car has gone. It did not stop, by the way. I've now turned the additional 1 sec images down to '0' to see what happens. I don't understand the reversal of the position of the double blue/yellow car icons in the two caption details, or what they mean. The time in the image is from the camera and is NTP synchronised - I would have thought the time in the BI alerts list would have been later, not earlier. More cancellations today, don't know why.
 
The motion icon:
"The motion icon is used to show objects that were rejected because they did not overlap
areas where motion was detected."
IIRC you have your motion area in rectangles with gaps. Maybe try a solid area, since a car could overlap both a motion area and a non motion area and BI misreads that, not sure...
Also try something it will never see in 'To cancel:' in trigger/ai. I put 'giraffe' in mine and now it doesn't just stop on a random frame, it goes through the entire multiple frames analysis before it decides
also: what is your confidence setting for 'car'? I know it ignores below 40% but I have mine at 10

stuff to try anyway :)
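On the motion-zone point above, a detection box that straddles a masked and an unmasked area is exactly the ambiguous case, and in principle the question is just rectangle overlap. A minimal sketch with boxes as (x_min, y_min, x_max, y_max) tuples; all names and coordinates are my own illustration, not BI's actual zone logic:

```python
def overlaps(a, b) -> bool:
    """True if two (x_min, y_min, x_max, y_max) rectangles intersect."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

motion_zone = (0, 100, 400, 300)       # one solid motion rectangle
car_box     = (350, 250, 600, 420)     # detection box straddling the zone edge

print(overlaps(car_box, motion_zone))  # True: a partial overlap still counts
```

With gapped rectangles instead of one solid area, a box can easily miss every rectangle while clearly covering the driveway, which would match the rejections seen here.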
 
Thanks again JL-F1 for the very useful feedback. Removing the additional 1 second images has caused a very large number of false cancellations. I will try a solid detection area now. But I note other comments from the 5.4.9 thread. I guess it needs a 5.4.9.1. Picking up on one of the comments from sebastiontombs, the problem for me now is that analysis starts before the trigger - see the pic below. And only one image sample is in the .dat file instead of the three that Ken's help file states.

My confidence percentage is set at 30%.

Screenshot 2021-06-26 201453.png

Wouldn't the problem be solved if the first image sent to DS was at T-0msec? The car would be where the highlight is - centre screen. Is there a reason for sending the first image before the trigger point - and then waiting for a whole second before another one is analysed (assuming additional ones have been set)?
 
Agree: I rolled back to 4.8.2, issues with 4.9.x

Uncheck 'Begin Analysis with motion-leading image' then it will start as close to T-0 as possible, if that's what you are wanting

That's why I used giraffe in 'to cancel' - then it keeps analyzing even if it spots something you are looking for.

IMO:
In your example it cancelled the alert because it was out of the motion zone, but it spotted a car, which you were looking for, and stopped analyzing

if you had giraffe in to cancel, it would have kept analyzing however many images you have chosen (I use 10) in the '+ real time images', and if any one of those was what you were looking for it would alert, instead of cancelling on the first or an early image.

And, as you can see in my posted pic, a few posts back, the second image wasn't a full second after the leading image.
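The 'giraffe' trick described above can be modelled roughly like this. This is only my reading of the behaviour from the posts, not BI's actual logic: with an impossible label in 'to cancel', no image can cancel the alert early, so every analysed image gets a chance to confirm.

```python
def decide(image_labels, to_confirm, to_cancel):
    """Walk the analysed images in order. A to-cancel hit cancels the
    alert immediately; a to-confirm hit confirms it. With an impossible
    label (e.g. 'giraffe') in to_cancel, nothing cancels early, so the
    loop runs through all requested images."""
    for labels in image_labels:
        if to_cancel & labels:
            return "cancelled"
        if to_confirm & labels:
            return "confirmed"
    return "cancelled"  # nothing wanted was ever seen

frames = [set(), {"dog"}, {"car"}]  # the car only appears on the third image
print(decide(frames, to_confirm={"car"}, to_cancel={"giraffe"}))  # confirmed
```

Under this model, a fast-moving car that is only visible in a later frame still confirms, which matches the motivation for the trick.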
 
OK JL-F1. I had perhaps mistakenly assumed that ‘Begin analysis with motion-leading image’ meant that BI would send an image to DS at T-0 instead of sending the next iframe. I think that many visiting the forum have/had the view that BI uses/used iframes exclusively. I did note that your second image was not one second after the first, but I had also assumed that DS continues to analyse all the images it receives to choose the best one. Is there anyone out there with definitive information about all of this?

Cars, trucks, cyclists, joggers and some animals are only in view for a little over one second. I want them all to be confirmed, so I don’t think having 10 additional images is worthwhile. By the way, a giraffe is often momentarily detected for animals with test and tune - a different object is required!

But I’m losing - no, have lost - track of all the tweaking changes I’ve made so far. Maybe I need to pause for a while and hopefully soak up details from other forum members.

Clearly I will persevere using DeepStack with Blue Iris. I have to use the built-in illumination in several cameras, and DS eliminates all the reflection triggers from spiders’ webs, spiders, insects, rain, snow and fog, as well as triggers from daytime dancing shadows, even with basic motion detection.