5.4.7 - May 25, 2021 - Support for DeepStack custom model files

I have sat back for the most part and watched this evolve. I finally took the leap yesterday, installed DeepStack, and set it up on four cameras.
Results have been very mixed; I can only get it to work reliably on two of the four cameras. And by work, I mean DeepStack actually triggering a recorded clip.
The two cameras I have issues with both show DeepStack doing its job when using Test and Fine-tune, but for the most part DeepStack doesn't do anything when just left to run on its own.
DeepStack should be used to trigger a recording. Use Continuous + Triggered. Ensure under the Alerts tab you have it set to fire when triggered.
 
I had a similar experience when I first started: obvious things wouldn't trigger in DeepStack. Now it seems to work fairly well, although it still misses an occasional obvious target; I'd guess it's in the 95% range for accuracy. It's running on most of my cameras, so it's getting both older 2MP and newer 4MP video, which means I can't pin the failures on the cameras.

One other comment: the target size and contrast seem to be on the critical side. For example, a light-colored car in bright sunlight can be missed, and a dark-colored car in late evening can be missed.

What version of BI are you running? I'm on the "latest and greatest" 5.4.7.11

Note: I am tired; I've had to edit out the typos three times so far!
Started this part of the journey with 5.4.7.10, then went to 5.4.7.11 when it was released.
I'll mess with the contrast and target size some more.
 
It's working well on one 2MP and one 4MP 5442.
Another 2MP and another 5442, not so much.
 

Any specific reason why you're suggesting continuous + triggered? Does it work better with continuous + triggered over continuous with set up alerts per camera?
 
 
Yeah, I remember the update. I have one cam set up to see how small the recording can be; I just don't understand how this setting is better for DeepStack. I just see the note about static images being scanned every 10 minutes.
 
I discovered something in my setup regarding alerts that might be applicable to some folks and might resolve problems...it's worth a check anyway.

With DeepStack, if you want a camera to trigger, you put the required object in the To Confirm box in Camera properties > Trigger > AI. If you want to be alerted of that trigger, then in Camera properties > Alerts > On alert > Required AI objects you can either leave the box blank or enter required objects. And as you can see, you can also enter objects there to skip the alerting.

What I found when going through each of my cameras is that some cameras had objects entered into the Required AI objects box that I don't remember entering. It's possible that I did and simply forgot, or it's possible data was somehow transferred from the Trigger > AI tab during one of the updates. Either way, some cameras had the data and some did not; to make matters worse, the data on those few cameras did not match the data in the Camera properties > Trigger > AI > To Confirm box. The affected cameras were the ones giving me problems with trigger/alert confirmation.

Something else I discovered, which may be my own fault; let me know what you think. On all my cameras I use zone G over the whole FOV. I do this 1) to connect separated zones for multi-zone motion object detection rules, and 2) so that during playback I'm able to see the entire FOV, not just the unmasked areas. I do this for all cameras regardless of whether or not I use multi-zone motion detection.

Well, my driveway and front yard are covered by zone A, with no other zones. The street in front of my house is not in any zone except G, which is never part of my object detection rules. Yesterday and this morning I noticed that when a car drove by the house it caused an alert (with the car % and the orange car icon in the thumbnail), which is not supposed to happen because the car is in zone G, not A. When reviewing both videos of the car passing by, birds were also flying past the camera at the same time, close enough to be picked up by the motion sensor. I always have a lot of birds flying around, so I have DeepStack cancel alert confirmation by entering birds in the To Cancel box in Camera properties > Trigger > AI > To Cancel.

So why did I get the alert? Not sure. During alert confirmation, is DeepStack ignoring the zones set up for the camera? Or is it because in Camera properties > Alerts > On alert > Motion zones I have all zones checked, including G? Again, not sure, but I unchecked G for all my cameras in that setting to find out, because I don't want any alerts based on motion in zone G. Having used zone G in my settings this way for years, it seems I would have noticed this before. I wonder if DeepStack has highlighted an incorrect setting of mine, or if the use of DeepStack during alert confirmation needs to be tweaked.
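For what it's worth, here is a rough sketch in Python of how I would *expect* the To Confirm / To Cancel / motion-zone checks to interact. This is purely a mental model, not Blue Iris's actual code; the function name and parameters are all hypothetical:

```python
def should_alert(detected_objects, confirm_list, cancel_list,
                 alert_zones, motion_zones):
    """Hypothetical model of BI's alert confirmation.

    detected_objects: labels DeepStack returned, e.g. {"car", "bird"}
    confirm_list:     Trigger > AI > To Confirm entries
    cancel_list:      Trigger > AI > To Cancel entries
    alert_zones:      zones checked under Alerts > On alert > Motion zones
    motion_zones:     zones where motion was actually sensed
    """
    # A cancel object anywhere in the frame vetoes the alert.
    if detected_objects & set(cancel_list):
        return False
    # At least one To Confirm object must be present.
    if not detected_objects & set(confirm_list):
        return False
    # Motion must fall in at least one zone the alert listens to;
    # if zone G is checked here, motion in G alone can satisfy this.
    return bool(set(motion_zones) & set(alert_zones))
```

Under this model, a bird in the To Cancel box should have vetoed the alert, which is exactly why the behavior I saw seems wrong to me.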

Please let me know what you think.

(Attached screenshots: Capture1.JPG, Capture4.JPG through Capture8.JPG)
 
Some may wonder why I have zone A drawn the way I do (see photo above). It seems that when using Edge Vector, BI likes objects to pass through zones for a good 'hit,' so to speak.
 
Last but not least, unknown faces: BI seems to have stopped saving them for quite some time now. Anyone else experiencing this?
 
5.4.7.11 adds a nice improvement: rich image notifications for Apple devices are no longer letterboxed with black bars, so you see a full, clear picture.

+1 on unknown faces not being saved since about June 5.
 
A little off topic, but there seems to be a lot of experience here.
I'm trying to get Plate Recognizer to run a script, or do anything at all (I have the same actions set up to make a sound, send an email, etc.), on detecting a plate that is in the Required AI Objects box.

If I remove that X40KBR so the box is empty, it runs fine.
The top-left overlay and the LPR dashboard both show LPR is reading the correct number plate.

I'm fairly sure putting the plate in this AI box is how it is meant to work.
The fact that it does not run the action with the plate there, while leaving the box blank does, suggests it is reading the box to some degree.

I just started with DeepStack, but I think ALPR is best left to Plate Recognizer. It is great at picking up the plate, as tested properly with cars coming and going.
(I might use DeepStack for preparing the image later.)

If I use the Test trigger option inside Alerts > On alert > Action for that camera and type X40KBR in the AI box, it DOES run. If I type the wrong number, it does not run, so it is comparing correctly there.
It is either in the real world, or when pressing the Trigger button in the main camera view, that somehow, I think, BI is not getting the plate text to compare to the AI box.
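Thinking it through, the Test trigger path presumably just does an exact (probably case-insensitive) match of the detected plate text against the Required AI Objects entries, and the real-world path fails because no plate text reaches that comparison. A tiny sketch of my guess at the logic; the function name and behavior are entirely hypothetical, not BI's actual code:

```python
def plate_matches(detected_plate, required_entries):
    """Hypothetical: does the ALPR result satisfy the Required AI Objects box?

    An empty requirement list means any trigger passes, matching the
    behavior seen when the box is left blank.
    """
    if not required_entries:
        return True
    plate = detected_plate.strip().upper()
    # Case-insensitive exact match against each entry in the box.
    return any(plate == entry.strip().upper() for entry in required_entries)
```

If something like this is right, then a trigger that arrives with no plate text attached would never match a non-empty box, which fits what I'm seeing.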


 
Just received an email from Ken that 5.4.7.7 (today) will enable toggling of DeepStack models by profile and/or schedule. Yea!

Edit: I emailed him Sunday just to inquire whether this was on his roadmap.
Did this feature get added to the latest release? I'm not seeing it, but it could just be me being blind. Thanks.
 
Thank you for the quick reply; sorry, I wasn't being very clear or explaining well. I was aware of that section, but I was trying too hard to find a way to completely isolate the standard model and ExDark depending on the day and night profiles. It looks like the only way at the moment is to use a nonexistent model in the Custom Models box during the day. I would like to avoid overloading the system and also stop getting a confirmation from both the standard model and ExDark, i.e., stop seeing car 81% and car 79% in the results, if that makes sense. I figured that if you could only have one model loaded or being accessed, it would free up system resources. Possibly overthinking it, but any ideas gratefully received. Thanks again.
 
For the day schedule, you put:
'objects'
in the Custom Models box.
Then for the night schedule, leave it blank and it will use both objects (the default) and dark; or, at night, put 'objects:0' and it will use only dark.

That's how I have mine set up: objects only during the day, and both dark and objects at night, although oddly it reports a higher person % than People % at night.

Note: don't put the quotes in the box.
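To summarize those Custom Models entries as I understand them (treat this as a sketch; exact behavior may vary between BI and DeepStack versions):

```
Day profile:    objects      -> default objects model only
Night profile:  (blank)      -> objects + dark both run
Night profile:  objects:0    -> dark model only
```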
 
Thank you kindly, I will try that and see what happens; this one is a learning curve. I will report back in the hope it helps others. Oddly, I am finding that sometimes during overcast conditions ExDark gets a better percentage on cars, but maybe this is to be expected.

Update: that is working a treat on the daytime profile. I will experiment later when night falls and report back. I will also aim to post some screen grabs of the settings in case they help someone else.

Out of interest, do you see a bigger strain on the CPU or GPU when introducing the custom models? It seems that way for me, hence I wondered if having the option to physically run one model or the other might help longer term.
 
Open Task Manager and watch what happens when your DeepStack-enabled cameras trigger. Each model runs in its own Python instance. It adds up quickly!
 
Agreed. I have monitored Task Manager; with six cameras running on DeepStack and the processor only an i5-4690, it does briefly peak at 98%, depending on the action, when three cameras get triggered. I'm looking into swapping the Radeon card for an Nvidia to play with CUDA, but there seem to be mixed reports on accuracy. I do have a two-year-old, higher-spec i5 machine which was built for Blue Iris initially but found another purpose, so I will look to switch over at some point and put it to work. I've got another three cameras being added, so I will be up to nine in total: 6x 2MP and 3x 4MP. Both machines run Noctua CPU coolers, so the processor has the best possible bite at the cherry, so to speak :thumb: