BI and DS Fine-Tuning

rjp1267

n3wb
Joined
Sep 2, 2021
Messages
28
Reaction score
2
Location
Earth
OK, after getting set up on the new machine with DS I have many questions, but I am going to address them in order of importance.

After reviewing the alert footage, I am seeing hours of recordings of our vehicles parked in the driveway rather than actual events such as an approaching vehicle or person (I use the Testing & Tuning tool for DS to see what occurred). I would like to fine-tune the system so that only real events trigger recordings and eliminate what I feel are false positives. I have the sensitivity set to 61%.

Also, I am seeing DS identify stationary objects like chairs and hanging items as people. How do I address this to resolve these false positives?

I see in the DS documentation that there are variables that can be set at startup, etc., but I am not sure whether those apply when running with BI. I know that within BI you can set the port and mode (inspection), but it is unclear how to fine-tune the sensitivity and where BI and DS diverge.

Thank you. Happy New Year.
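For reference on where BI ends and DS begins: DeepStack itself exposes an HTTP detection endpoint, and its own threshold is the `min_confidence` parameter sent with each image, separate from the confidence setting inside BI. A minimal sketch, assuming DeepStack is listening on localhost port 82 (the port, file name, and prediction values below are illustrative; the response fields follow DeepStack's documented JSON format):

```python
# A minimal sketch of DeepStack's detection API, assuming DS listens on
# localhost:82 (adjust to your install). On each trigger, BI POSTs a JPEG
# to this endpoint; "min_confidence" is DeepStack's own threshold:
#
#   import requests
#   r = requests.post("http://localhost:82/v1/vision/detection",
#                     files={"image": open("snapshot.jpg", "rb")},
#                     data={"min_confidence": 0.61})
#
# DeepStack returns JSON like the sample below (values are made up);
# the sensitivity threshold simply filters this prediction list:

SAMPLE_RESPONSE = {
    "success": True,
    "predictions": [
        {"label": "car", "confidence": 0.93,
         "x_min": 10, "y_min": 40, "x_max": 300, "y_max": 200},
        {"label": "person", "confidence": 0.47,
         "x_min": 320, "y_min": 60, "x_max": 360, "y_max": 180},
    ],
}

def filter_predictions(response, min_confidence):
    """Keep only predictions at or above the confidence threshold."""
    return [p for p in response["predictions"]
            if p["confidence"] >= min_confidence]

# At 61% the low-confidence "person" (a misidentified object) is dropped:
print([p["label"] for p in filter_predictions(SAMPLE_RESPONSE, 0.61)])
# → ['car']
```

So a 61% sensitivity means DeepStack still detects everything it can; results under the threshold are simply discarded before BI acts on them.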
 

fenderman

Staff member
Joined
Mar 9, 2014
Messages
36,902
Reaction score
21,274
Ensure you have "detect ignore static objects" selected in BI AI settings for that camera.
Don't use DS to eliminate triggers (unless you are already recording 24/7); rather, use it to eliminate alerts to your mobile device. If you use it to eliminate triggers, you will inevitably miss an important event that DS fails to detect.
For most of my systems and cams, I have BI constantly record the substream (at 720p or 1080p on cams that support it, using low bitrates) and then record on trigger using the continuous + triggered option in the record tab. This records the main stream on all triggers regardless of DS.
 

rjp1267

n3wb
Joined
Sep 2, 2021
Messages
28
Reaction score
2
Location
Earth
"Detect ignore static objects" is checked for all cameras. I am recording 24/7 and have continuous + triggered selected, but I am still getting false triggers from static cars and objects that are nothing like what DS thinks they are. See pics. I have looked through the DS docs and the solution is not jumping out at me. I was thinking about custom models, but since I am trying to train it to ignore certain static objects, I figured that wouldn't achieve the desired outcome. I am also seeing things like a jacket over a railing or a car seat in a hallway identified as a person. Do I have to go into DeepStack via the command line or Python to make tweaks, or does BI give us everything we need? I ask because the DS documentation is at times confusing and there aren't always instructions for Windows. Thanks.

[attachment: 1641229206312.png]

[attachment: 1641229271599.png]
 

rjp1267

n3wb
Joined
Sep 2, 2021
Messages
28
Reaction score
2
Location
Earth
PS: I also just noticed that people were walking right in the area shown in the pics and were not detected until they were feet from the building - hardly enough time for the system to trigger and capture a recording. Is dialing in Motion sensor > Configure > Sensitivity required? I am unclear from the tutorial video as to what this should be set at. My impression was that DS didn't use this and it should be set and forget.
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,927
Reaction score
48,632
Location
USA
Night time is always going to be problematic - every car that goes by has different headlight brightness, color, and angle, and that light reflects off your car, so to DS it appears to be a moving or different vehicle.

The only way to address that and the misidentification is to train a model with your own pics. But this is outside of BI.

Nighttime B/W is the most difficult for DS to analyze. There might be some other settings to help trigger for people sooner, but at the height your cams are at, and with the difficulty DS has at the edges of the field of view due to inadequate lighting, that is a limitation you have to accept at some point. You could lower the cams and add external IR to help.
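For the custom-model route mentioned above: DeepStack loads custom detection models from a folder given at startup (on Windows, something like `deepstack --MODELSTORE-DETECTION "C:\models" --PORT 82`) and serves each model at its own endpoint alongside the built-in one. A sketch of building that endpoint URL - the model name `my-objects` is a hypothetical placeholder:

```python
def custom_model_endpoint(host: str, port: int, model_name: str) -> str:
    """DeepStack serves a custom model at /v1/vision/custom/<model-name>,
    alongside the built-in /v1/vision/detection endpoint."""
    return f"http://{host}:{port}/v1/vision/custom/{model_name}"

# A manual test could POST a snapshot to this URL (e.g. with the
# requests library) exactly as with the standard detection endpoint:
print(custom_model_endpoint("localhost", 82, "my-objects"))
# → http://localhost:82/v1/vision/custom/my-objects
```

In BI you would then point the camera's AI settings at the custom model name instead of the default object set.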
 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,692
Location
New Jersey
What brand and model are the cameras? What are the frame rates, iframe rates, and pre-trigger time? Keep in mind that running recorded video through analysis is completely different from what happens during regular, real-time analysis: it will detect everything in the scene that can be detected.
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,927
Reaction score
48,632
Location
USA
^+1 above!

Keep in mind that "Analyze with DeepStack" under "Testing & Tuning" will ALWAYS perform better than live, as it runs after the fact, and it should not be used as an analysis tool to figure out why DeepStack didn't see a car or person. It should only be used to see what DeepStack can find in a clip - like "hmm, I wonder if DeepStack can find a toothbrush," and then you walk around with a toothbrush and have it identify it. I can run this on a camera not using DeepStack and it will show EVERYTHING in the clip that DeepStack has in its list of objects to find.

Your image above tells you absolutely nothing other than that DeepStack can identify a vehicle and misidentify the steps as an umbrella... it does zero to tell you how it is performing in real time based on your settings.

You need to use the .DAT file analysis, which shows how DeepStack responded and behaved live. You have to enable it under the camera's AI settings by checking the DeepStack analysis option. Only then can you start to figure out why it is behaving like it is or missing something.

So if you are not using "Save DeepStack analysis details," check that box so that you can use the .DAT file to actually see why BI is missing it. Otherwise you are chasing and looking at the wrong data.

And as said above, some cameras are problematic. If the KEY in the BI camera status page is less than 1.00, it will miss motion.
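As I understand it, the KEY value on BI's camera status page is the camera's keyframe (I-frame) rate in keyframes per second. A rough sketch of the arithmetic, assuming KEY is simply frame rate divided by the I-frame interval in frames:

```python
def key_rate(fps, iframe_interval_frames):
    """Keyframes per second: frame rate divided by the I-frame interval
    (expressed in frames). This appears to be what BI's status page
    shows in the KEY column."""
    return fps / iframe_interval_frames

# A camera at 15 FPS with an I-frame interval of 15 sends one keyframe
# per second - the KEY = 1.00 that is recommended:
print(key_rate(15, 15))  # → 1.0

# Doubling the interval to 30 frames halves the keyframe rate, heading
# toward the "less than 0.50" territory where motion gets missed:
print(key_rate(15, 30))  # → 0.5
```

The practical takeaway is usually to set the camera's I-frame interval equal to its frame rate so KEY reads 1.00.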
 

rjp1267

n3wb
Joined
Sep 2, 2021
Messages
28
Reaction score
2
Location
Earth
Thank you for the response. If I feel adventurous I may try the custom models, but first I will see how much better (or worse) things get by fine-tuning within BI, and adjust my expectations. I was thinking about installing external IRs; I will revisit that in the spring. As for camera height, it really doesn't do us any good to have them at a height that can be reached from the ground. What are your thoughts on where motion sensitivity is set? Would this help or hinder attempts at getting better results?
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,927
Reaction score
48,632
Location
USA
For DeepStack to work, it needs to be triggered in BI to send an image to DS for analysis. But it is a balance - too sensitive and it sends images nonstop to DeepStack and spikes the CPU; too restrictive and it misses motion. You need to find the happy medium where not every leaf and shadow triggers, but it is not so tight that it misses a car or person either.
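That balance can be illustrated with a toy sketch. This is not BI's actual motion algorithm - just an illustration of how one sensitivity threshold trades false triggers against missed motion (the "frames" are made-up lists of pixel values):

```python
def should_trigger(prev_frame, frame, sensitivity):
    """Fire a trigger when the mean absolute pixel change between two
    frames exceeds the sensitivity threshold (lower threshold = more
    sensitive)."""
    diff = sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)
    return diff > sensitivity

prev   = [10, 10, 10, 10]   # baseline "frame" of pixel values
leaves = [12,  9, 11, 10]   # small, scattered change (leaves/shadows)
person = [10, 80, 90, 10]   # large local change (someone walks in)

print(should_trigger(prev, leaves, 0.5))   # → True  (too sensitive: false trigger)
print(should_trigger(prev, leaves, 2.0))   # → False (restrictive: noise ignored)
print(should_trigger(prev, person, 2.0))   # → True  (real motion still fires)
```

The goal is the middle row and bottom row together: a threshold high enough to ignore noise but low enough that real motion still sends an image to DS.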

Putting the camera higher to prevent tampering or theft is silly. Most people are oblivious to cameras anyway, and if they want to damage them, they will regardless of placement. My neighbor has had his on a fence post less than 3 feet from the public sidewalk. The cams are only 4 feet high and nobody has touched them - most haven't even noticed them! They have been there for years.

Plus if you are trying to cover cars parked in a driveway, that should be a minimum of two cameras and then if they tamper with one, you have good video of them with the other cam.
 

rjp1267

n3wb
Joined
Sep 2, 2021
Messages
28
Reaction score
2
Location
Earth
Yes, I always use Testing & Tuning when I am trying to find out what triggered an event. I also use the status info, which I will share in a future post; the event counts seem high, so that may be a starting point. I set up each camera to save the DS analysis file - thanks for that nugget. I am not sure I know what you mean by "KEY in the BI camera status page," sorry.
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,927
Reaction score
48,632
Location
USA
It would look like this in the BI Camera Status page:

[attachment: 1641435131646.png]

You want the KEY to be 1.00. If it is less than 0.50, that is how a lot of motion can be missed.
 