Deepstack - So many motion detections are being marked as "nothing found"... other times it works?

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,955
Reaction score
48,668
Location
USA
That delivery driver is basically the same color as the shadow in the background and the supports between the windows, so that could be why DS misses him.

Another member here has DS missing a big garbage truck because the truck is green and basically matches the color of the trees on the other side of the street.

How many ms did DS take to analyze the images?
 

wittaj

OK, those times are not too outrageous. A little high, but not enough that they should be causing the issue.

I think it is the color of the uniform matching the wall color so closely that makes DS miss it.

The good news is you have events before and after, so you know someone is there.

What are your DS settings in BI - how many images is it sending, at what interval, etc.? Maybe an adjustment there would pick up the person once they are more centered between the walls rather than close to one?
 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,693
Location
New Jersey
I'd like to see your DS stats for a successful detection. That may give us a better clue. In terms of contrast, if DS can't discriminate between the background color and the object it simply can't see anything. We can because we have more processing power and "know" what should be there.

In terms of headlight bloom, when headlights cause a bright spot double or triple the size of the target, DS can't see it - and, actually, neither can you, but we have the advantage of knowing what headlights are and what they're attached to. A custom model could be trained to detect successfully in those situations.

Remember DS basically compares a capture to a set of pre-loaded images to determine what it is "seeing".
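To make the "it simply can't see anything" behavior concrete: DeepStack exposes an HTTP detection endpoint that returns a list of predictions, each with a label and a confidence score, and anything below the cutoff (or simply not recognized against the background) just never appears in the list. A minimal sketch of working with that response, assuming the documented `/v1/vision/detection` endpoint and its JSON shape - the helper names and the sample data here are illustrative, not from BI:

```python
# Assumed default DeepStack endpoint; a real call would POST the JPEG here.
DEEPSTACK_URL = "http://localhost:80/v1/vision/detection"

def above_confidence(predictions, min_confidence=0.4):
    """Keep only predictions DS reported at or above the cutoff (0.0-1.0 scale)."""
    return [p for p in predictions if p["confidence"] >= min_confidence]

def labels_found(predictions):
    """The set of object labels DS claims to have seen."""
    return {p["label"] for p in predictions}

# Illustrative response shaped like the documented DS detection reply.
# A low-contrast object (driver blending into the wall) would simply be
# absent from "predictions" - there is no "almost saw it" entry.
sample_response = {
    "success": True,
    "predictions": [
        {"label": "person", "confidence": 0.58,
         "x_min": 10, "y_min": 20, "x_max": 110, "y_max": 300},
        {"label": "car", "confidence": 0.31,
         "x_min": 200, "y_min": 40, "x_max": 460, "y_max": 220},
    ],
}

strong = above_confidence(sample_response["predictions"], min_confidence=0.4)
print(labels_found(strong))  # the low-confidence "car" drops out
```

If nothing clears the confidence bar, `predictions` is effectively empty and BI reports "nothing found" - which is exactly what a washed-out or color-matched target looks like to the model.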
 

wittaj

Yeah, DS is still evolving in BI and we have seen some weird things happen.

I suspect the hotspot zone issue is another area that DS is messing with that wasn't an issue prior to DS.

Which is why I suggested to you yesterday to make a clone of this camera and do not use DS on the clone and simply have it enabled to trigger on the hotspot zone. Do not set anything else up and just let the clone be the hotspot trigger.

Another option would be to train a custom model to your field of view.

Another option would be to use a camera with AI built in.

Other than those three options, you will continue pulling your hair out, because while BI/DS is great, it just isn't at the point of being 100% reliable for every situation, so we need to figure out another way to accomplish what we want. This looks like one of those fields of view that will be problematic.
 

wittaj

<<slaps forehead>> NOW I understand what you were getting at. It's an interesting solution. I have to think about it more.
The nice thing is the clone doesn't add any overhead - just make sure you have the original camera selected as the clone master, and then make sure the clone shows * after its name.

I have two cameras that have been problematic with recent DS updates that I had to do this to as well. And the kicker is those two cameras were not using DS. But we have seen that the introduction of DS has somehow broken some things... or rather, they were not broken before - the way the code was written allowed it to work, but with the added DS feature it bombs out where it previously worked.
 

wittaj

I'm not saying I'm right, but it doesn't seem to me that DS broke something... It seems to me that the integration isn't as tight as it should be. As I see it, BI blindly hands off Alerting to DS instead of a tighter integration with things like the hotspot.
But that can amount to the same thing as breaking something - it worked before, and this integration exposed problems that were not known prior to it.

Doesn't matter what one calls it, but it worked before DS was introduced and doesn't now...
 

sebastiantombs

Looking at that .dat file, you're hovering just below the timeout for DS. Set the confidence level at 40% - DS won't go any lower than that, which is a little disappointing. I've been using "5" for images to analyze and it's been working fine. Pre-trigger buffer at three seconds. I'd also say try using it on alerts only.

Once again, the behavior in the .dat file is very similar to what I experienced using the CPU version.
 

sebastiantombs

I'm saying that with some CPUs in some installations, probably very load dependent, a GPU will perform much better. Given that I see detection times always below 500ms and am getting excellent detection, I conclude that detection speed is a crucial factor. Once you get out past 750ms or longer, DS may easily time out and detect nothing. In my case it's an i7-6700K, and the load was too much for reliable AI. I already had a couple of NVidia cards, so trying one was the next logical step for me. The results proved out, to me anyway, what I suspected. Newer-generation CPUs may handle it much better.
 

wittaj

They (BI or DS) have not defined a number, but from our experience, once you start getting over 2000ms, the Server Error 100 starts popping up, and BI has said that is a timeout issue. From looking around the DS forum, it appears people like to try to keep times under 200ms, and many are in the 40-75ms range. But that is obviously dependent on your hardware.

I have also noticed with BI that it is a sequence of events and not simultaneous actions (probably something to do with the CPU of the computer as well). I have a camera for LPR that takes a snapshot. If I clone the camera, the snapshots are off by a split second between the two cameras. One would think they would be identical snapshots, being a clone, but the clone master takes the picture first and then the clone takes its picture.

So if a lot of activity is going on during the day - wind, clouds, etc. - and lots of cameras are in a sensing state and then several cameras have DS kick in to do an analysis, that split second can be the difference between AI recognizing an object or not, depending on the field of view.
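The rough latency bands this thread converges on - tens of ms is comfortable, past ~750ms detections start getting missed, and past ~2000ms you hit Server Error 100 - can be sketched as a simple check. The thresholds below come from this discussion, not from any published BI/DS specification, and the function name is made up for illustration:

```python
def ds_latency_band(ms):
    """Classify a DeepStack analysis time using the rough bands discussed
    in this thread (not official BI/DeepStack numbers)."""
    if ms < 200:
        return "comfortable"   # well-tuned GPU setups report 40-200ms
    if ms < 750:
        return "acceptable"    # high, but usually still workable
    if ms < 2000:
        return "marginal"      # detections may start getting missed
    return "timeout risk"      # Server Error 100 territory per BI

for t in (60, 500, 1500, 2400):
    print(t, ds_latency_band(t))
```

Logging your per-image analysis times and bucketing them this way makes it easy to see whether missed detections cluster around the slow end.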
 

wittaj

The analysis portion is a work in progress with BI, and the interpretation of this data is not well defined, so ask 3 people and you will probably get 3 different responses LOL.

So let's look at this one where the delivery person is basically behind the wall between the windows - we can see he is there by his arm, but DS missed it.

My take is that this was the setup when you were analyzing at 1-second intervals, so it just so happened that at each iteration the person was either blocked or blended in. It was taking 500-750ms for DS to do this analysis.

What about doing playback with the DS tuner on and seeing what the image looks like when you get the blue border around the entire image (not the person but the whole picture)? That will show you which image was used by DS for the analysis.



[Attached image: 1628131163251.png]
 

joshwah

Pulling my weight
Joined
Apr 25, 2019
Messages
298
Reaction score
146
Location
australia
Just some follow-up. I've made the adjustments as suggested, i.e. changing the buffer to 3 seconds, reducing the number of images sent to DS to 4-5, and setting it to "motion only".

Here is an example of a missed alert... yet in the alerts pane, it still shows as "nothing found".

[Attached images: 1628166633936.png, 1628166687981.png]
 

sebastiantombs

@joshwah I think I found your problem. You have DS set to detect "person" and not "people". DS is very specific with detection definitions. As a side note, if you use the dark.pt model it is "person" for that specific model, the stock model nomenclature is "people". The devil is in the details.
 

Buxton

Young grasshopper
Joined
Mar 1, 2019
Messages
33
Reaction score
17
Location
los angeles, CA
sebastiantombs said: "@joshwah I think I found your problem. You have DS set to detect "person" and not "people". DS is very specific with detection definitions. As a side note, if you use the dark.pt model it is "person" for that specific model, the stock model nomenclature is "people". The devil is in the details."
According to the DS website, the nomenclature is the opposite of what you describe: see Object Detection | DeepStack v1.2.1 documentation, and GitHub - OlafenwaMoses/DeepStack_ExDark (a DeepStack custom model for detecting common objects in dark/night images and videos).
 

sebastiantombs

Look at your detection selections: "person, car, truck, dog, unknown". Look at the analysis and you'll see "People". The lower case is the stock model; the upper case is the dark.pt model. In this case you are not telling DS to detect "People".
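The mismatch is easy to reproduce: label matching is a literal string comparison, so configuring "person" while the model in use reports "People" produces zero matches and therefore "nothing found". A small sketch, with the helper name made up for illustration and the label spellings taken from the stock and dark.pt models as discussed above:

```python
# Labels as reported by the two models, per this thread and the DS docs:
STOCK_MODEL_LABEL = "person"   # stock DeepStack object-detection model (lower case)
DARK_PT_LABEL = "People"       # dark.pt (ExDark) custom night model (capitalized)

def wanted_label_hits(wanted, predictions):
    """Count predictions whose label exactly matches the configured string.
    The comparison is literal, so 'person' never matches 'People'."""
    return sum(1 for p in predictions if p["label"] == wanted)

# What the dark.pt model might return for a person at night:
night_predictions = [{"label": "People", "confidence": 0.7}]

print(wanted_label_hits("person", night_predictions))  # 0 - reads as "nothing found"
print(wanted_label_hits("People", night_predictions))  # 1 - correct label for dark.pt
```

Since "person" and "People" differ in more than just case, no case folding rescues the misconfiguration - the detection list in BI has to name the exact label the active model emits.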
 