5.4.7 - May 25, 2021 - Support for DeepStack custom model files

With some playing around and a lot of great feedback from the folks on this site, here are my current settings for DeepStack:

Daytime....

AiDayCapture.JPG


and night...

AiNightCapture.JPG


A wise man on this very forum suggested the giraffe, so I followed it with a table on the ExDark night model; the latter is a work in progress, with thanks to @JL-F1. Hopefully not too many tables turn up to ruin my plans :clap:

I tried using the 'main stream if available' option but didn't see much benefit, in fact it was less accurate, maybe due to lack of processing power. I also found setting the pre-trigger buffer to zero helped with accuracy; previously I would have it set to maybe 1 or 2 seconds to make playback of events more convenient.

Removing the tick from 'Begin analysis with motion-leading image', in the daytime at least, also gives better results generally. Like others, I have found contrast either makes or breaks this AI system, which makes sense; twilight and night time are the real challenges, but so far I have been impressed with the system in general. The lack of false alerts is great and it rarely misses in reasonable lighting, and I can always fall back on plain motion detection at night if the AI isn't reliable enough.
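For anyone following along, the confirm/cancel idea is easy to sketch in Python. This is just an illustration of the logic, not actual Blue Iris code; the prediction dicts only mimic the label/confidence pairs DeepStack returns, and the exact precedence BI uses may differ:

```python
# Illustration only: Blue Iris-style confirm/cancel filtering of
# DeepStack predictions. Prediction dicts mimic DeepStack's
# {"label": ..., "confidence": ...} entries; BI's real precedence
# rules may differ.

def should_alert(predictions, confirm, cancel, min_conf=0.40):
    """Alert if a confirm label is seen at/above min_conf and no
    cancel label (e.g. the static 'giraffe' or 'table') is seen."""
    found_confirm = False
    for p in predictions:
        if p["confidence"] < min_conf:
            continue  # below the 40% minimum, ignore
        if p["label"] in cancel:
            return False  # known static false positive cancels the alert
        if p["label"] in confirm:
            found_confirm = True
    return found_confirm

day = [{"label": "person", "confidence": 0.72}]
night = [{"label": "person", "confidence": 0.55},
         {"label": "table", "confidence": 0.61}]
print(should_alert(day, {"person"}, {"table", "giraffe"}))    # True
print(should_alert(night, {"person"}, {"table", "giraffe"}))  # False
```

The point of registering the giraffe/table is exactly this: a label that only ever fires on a static object becomes a cheap cancel signal.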
 
I'm running the GPU version with an NVIDIA GTX 970. Detection is on the order of 99%+ both day and night using the stock and dark models. Detection times are in the sub-300ms range all the time and frequently as low as 100ms. That's with a total of eight cameras using DS.
 
That is interesting, my detection times can be anything from 200 to 450ms depending on conditions; that is with AI running on medium, and if I push it to high they can go to 1000+ms. I might go mad and get a GTX 1050 or 1650 when stock arrives to see how it handles the processing. I had hoped the GPU wouldn't max out, but it seems from your observations that it probably will. Do you find the ExDark model pushes it more than the default model?
 
I really can't say with any authority, but I definitely noticed an increase in detection time apparently as DS sorts through the dark model. It seems a lot larger than the stock objects model so that sort of makes sense, at least to me.
 
That would make sense, I will get a better idea this evening when the ExDark switches in. Should also get my new 5442 cameras installed in the next few weeks, see what I've been missing out on. Of course there will be some additional processing but I look forward to seeing how they compare to the existing 5231 models.
 
I think you'll really like the 5442. I have a 5231 and a 5442 in opposing views on the driveway side of the house. The 5231 is good, but the 5442 is better by a large factor, especially in detail.
 
I was out of the loop for some time and missed the 5442 release altogether, so I was pleased to see the glowing reviews; looking forward to installing them. Better detail and low-light performance for essentially the same price as the 5231, Dahua have another winner. On a side note, is there a way to run DeepStack in high mode on a schedule? It would appear medium isn't cutting the mustard in lower light and cloudy conditions.
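On the schedule question: as far as I know the Docker build of DeepStack reads its MODE environment variable (High/Medium/Low) at startup, so switching modes on a schedule generally means restarting the container. A rough sketch, assuming the Docker version; the container name and port mapping are placeholders:

```python
# Sketch: restart a Dockerised DeepStack in a different MODE on a
# schedule. Assumes the Docker build, which reads the MODE variable
# (High/Medium/Low) at startup; the container name, port and the
# VISION-DETECTION flag below are placeholders following the standard
# docker run example.
import subprocess

def deepstack_run_cmd(mode, name="deepstack", port=5000):
    """Build the docker run command for a given detection mode."""
    return [
        "docker", "run", "-d", "--rm",
        "--name", name,
        "-e", "VISION-DETECTION=True",
        "-e", f"MODE={mode}",
        "-p", f"{port}:5000",
        "deepquestai/deepstack",
    ]

def switch_mode(mode):
    # Stop the running container (ignore errors if it isn't running),
    # then start a fresh one in the requested mode.
    subprocess.run(["docker", "stop", "deepstack"], check=False)
    subprocess.run(deepstack_run_cmd(mode), check=True)

# e.g. a nightly scheduled task calls switch_mode("High"), and a
# morning one calls switch_mode("Medium").
```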
 
I run 'high' with a 1660 super
objects only avg is 200ms
with dark it's 300-400ms

Surprisingly I do see a few 'tables' in the day ;) I have a bush that it thinks is a bear or sheep or table, so weird
 
Interesting, looks like a 1660 is in my future! ExDark is killing my processor at the moment and also not returning much. What kind of '+ real time images' settings are you using? I've tried anything from 1 to 10 and I'm not sure I see a lot of change!

Those tables get around a bit, I've got a train here that also happens to be a drain pipe :rofl:
 
If you don't already have a GPU, you may want to consider a Quadro P400, cheaper and far less power hungry. Other people have mentioned using it with DS, so I bought one and it helps a lot.
 
Thanks for that, they were on my radar, possibly good for Blue Iris and DeepStack, not so much for gaming :idk:
 
What kind of + real time image settings are you using?
What I have settled on, so far today, lol:

I now have nothing in my 'cancel' and I have 999 in my extra real time images.

My issue was, and is now almost perfect:

I would miss people because they only came into clear enough view for the 40% minimum DS has 15-ish seconds after the motion trigger, so it would cancel. Now, with 999, it looks every 500ms during the entire triggered time to find something.

My other issue was cars triggering and missing a walker, I have alerts sent to me for people only.

So now I have two clones: One that only looks for person and one that only looks for cars.

Those things helped my 'misses' a lot
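The 999 extra images effectively turn analysis into a retry loop: keep re-checking frames every 500ms for the whole trigger instead of giving up early. A toy sketch of that idea; the detector below is a stand-in, not the real DeepStack call:

```python
# Toy model of the '+ real-time images' behaviour: re-analyse frames
# at a fixed interval for the whole trigger instead of giving up after
# the first few. 'detect' is a stand-in for a DeepStack call.

def analyse_trigger(frames, detect, interval_ms=500, max_images=999,
                    min_conf=0.40):
    """Scan frames every interval_ms; return (time_ms, prediction) for
    the first hit at/above min_conf, else None."""
    for i, frame in enumerate(frames[:max_images]):
        pred = detect(frame)
        if pred and pred["confidence"] >= min_conf:
            return i * interval_ms, pred
    return None  # trigger ended with nothing confirmed

# The Amazon-truck case: the driver only becomes visible ~19s in.
# 10-15 extra images cover only 5-7.5s of the trigger, so it's a miss;
# with 999 the loop is still looking at the 19-second mark.
frames = [None] * 38 + ["driver"] + [None] * 10
fake = lambda f: {"label": "person", "confidence": 0.8} if f else None
print(analyse_trigger(frames, fake))                 # hit at 19000 ms
print(analyse_trigger(frames, fake, max_images=15))  # None - a miss
```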
 
Now I'm intrigued! What do you have your Analyze time set at? Mine has changed from 1sec to 250ms and it seems happier closer to 1sec :idk:

In some respects I wish the CPU wasn't maxing itself out; that would surely help my case, hence why a Quadro or similar is on the shopping list. Quite a lot to take in, daytime is one thing but night-time is quite another!
 
500ms for both the people and car clones

I'd go for the cheaper Quadro mentioned above. My 1660S only goes up to about 22-28% during a scan, so I assume it's not even being fully utilized
 
Thanks, appreciate that. Quadro cards seem more available at the moment; cannot believe the 1660 prices right now, more than double compared to normal :wow: Will have a play with cloning also. I'm not totally sure putting objects:0 in the custom box is having the desired effect of forcing ExDark only, which is odd; using objects only in the custom box worked fine for daytime.
 
I'm not totally sure putting objects:0 in the custom box is having the desired effect
One clue is whether you are seeing any of the lower-case objects detected (person, car, dog, etc.).

A better clue is whether the JSON box in the DeepStack Status Window is lacking a populated 'objects' section.
If only the custom models are being used, there should be no 'objects' section (or an unpopulated one), and only populated 'dark', 'openlogo' sections, etc.
CAVEAT: This is how it worked when I first tested - I'll admit that I haven't revisited this since Ken first introduced the DeepStack Status Window. I'd recheck now, but cannot as I'm away from my server.
[screenshot: 1628807264044.png]
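To make that check mechanical, something like this could read the JSON shown in the Status Window and report which model sections actually contain predictions. The response shape (one top-level key per model, each holding a list of predictions) is my assumption from what the window displays:

```python
# Sketch: report which DeepStack model sections in a JSON response are
# actually populated. The shape (one top-level key per model, each a
# list of predictions) is assumed from the Status Window output.
import json

def populated_models(raw):
    """Return the names of response sections that contain predictions."""
    data = json.loads(raw)
    return sorted(k for k, v in data.items()
                  if isinstance(v, list) and v)

# If only custom models ran, 'objects' should be absent or empty:
raw = '''{"dark": [{"label": "People", "confidence": 0.71}],
          "objects": []}'''
print(populated_models(raw))  # ['dark']
```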

using objects only in the custom box worked fine for daytime
My interpretation of the help pdf is that this would:
1. cancel all custom models (because a custom model 'objects.pt' does not exist) and
2. use only the default (built-in) objects model ... and the built-in faces model (if enabled).

Therefore I'd expect that using 'objects' in the custom box is the same as entering any other non-existent custom model name.
This is why I enter 'none' in the custom box to cancel use of all custom models. It seems more explicit to me.
 
[screenshot: 1628810331842.png]

So you can see the 'dark' section

Also, you can see the DS times with the GPU, and the almost 20 seconds to find a 'people'; that is why I have the 999 in + images....

That was an Amazon truck that triggered, and it took 19+ secs for the driver to exit and be visible.

What would have happened before is the people clone would motion-trigger and keep recording due to the truck's movement, and never see the person with 10-15 in the + images