Hell Yeah! Direct Deepstack Integration - 5.4.0 - March 31, 2021

Being a stubborn old coot and not wanting to give up I started fooling, yet again, with DeepStack. I think I got it working properly, finally. In the interest of saving someone, anyone, the same frustrations this is what I ended up doing.

The "confidence" level is set at a default of 50%. I have it down at 5%, one tenth the original setting. Now it's detecting 100% of the valid triggers during the day, at least for me. I also set the additional views to one s that it actually "looks" at two frames one second apart. With matching iframe and frame rates and because Blue Iris starts recording on a full frame it gets two frames of full data to look at. Don't be afraid to lower that "confidence" setting to really low numbers is the bottom line.

This hasn't seemed to have cured the problem at night though. I just checked and it started missing clips about 20 minutes after sundown. It's still missing vehicles and it also missed a herd of deer that passed by one of the cameras. I think this is a contrast and scene lighting issue. In full daylight it works very well, at night in B&W not well at all. Given that it's a free app I think the daytime utility it has is more than worth the time and effort needed to set it up. I will continue playing with the "confidence" setting at night to see if I can get it to detect reliably.
Have you tried manually launching deepstack in high mode to see if it improves the night performance?
 
I created a folder (I used Aux 8 and renamed it) for unknown faces so I could keep them separate.

When you capture faces/have facial recognition on do you see a delay in deepstack. It seems deepstack is still processing super quick but only receiving picture to analyze every 5 seconds and no sooner. No matter if it’s regular object detection or face. Even if multiple cameras are experiencing triggers at the same time.

if I turn of face recognition on the deepstack server and BI, and multiple cameras trigger, deepstack receives pictures every second or so
 
Have you tried manually launching deepstack in high mode to see if it improves the night performance?

Yes, I shut it down in BI then re-start it in PowerShell using "high" mode. I'm pretty convinced it gets confused by headlight bloom. Using HLC to control that can be self-defeating because the whole scene ends up too dark for any real detection to happen.

I guess the big question is: does shutting it down in BI and then re-starting it in PowerShell make the settings stick in BI?
 
The learning continues.

I shut off "load DeepStact with Blue Iris" and shut it down. Opened a PowerShell window running it as administrator which probably isn't necessary. Started DeepStack using all modes, mode=high, all the the proper syntax, and left the PowerShell window open. Went back into the BI console and it showed DeepStack as running. Checked back on the PowerShell window and can see detections happening. The next step is to be able to interpret the number strings on each detection.
 
Anyone successfully using face recognition with "external" DeepStack? I'm running DeepStack in Docker in order to use MODE=High. Started DeepStack with both VISION-FACE=True and VISION-DETECTION=True. Enabled face recognition in Blue Iris and told BI to save unknown faces. I don't see any actions in BI about faces (what does BI do with face recognition / where do we see "results"?). The folder with unknown faces is still empty.
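One way to narrow this down is to query the external DeepStack instance directly, bypassing Blue Iris, to confirm VISION-FACE is actually answering. A hedged sketch (the host and port are assumptions that depend on your Docker `-p` mapping, and it requires the third-party `requests` package):

```python
# Sketch: querying an external DeepStack face endpoint directly to verify
# VISION-FACE is enabled before blaming the BI side. Host/port assumed.
DEEPSTACK_HOST = "http://localhost:80"  # adjust to your Docker port mapping

def face_endpoint(host: str = DEEPSTACK_HOST) -> str:
    """Return DeepStack's face-recognition endpoint URL."""
    return f"{host}/v1/vision/face/recognize"

def recognize_faces(image_path: str, host: str = DEEPSTACK_HOST) -> list:
    """POST an image to DeepStack and return the face predictions.
    Needs a running DeepStack instance and `pip install requests`."""
    import requests  # third-party
    with open(image_path, "rb") as f:
        resp = requests.post(face_endpoint(host), files={"image": f})
    return resp.json().get("predictions", [])
```

If a manual call like `recognize_faces("snapshot.jpg")` returns predictions but BI's unknown-faces folder stays empty, the problem is on the BI side rather than the Docker container.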
 
I'm really excited that Ken has implemented native DeepStack support, that has eased my config by far!

But there are still some issues in the implementation. Most important are these two:

  • The Alarm action fires extremely delayed! In my hallway today BI confirmed the alert at 13:07:11 - this is the timestamp of the alarm picture with my person detected. The alarm action fires a TCP packet to my home control (for alarm functions) - that happened seven (!) seconds later at 13:07:18. This is not acceptable.

  • The setting that a maximum of 5 pictures, at a fixed one-second interval, can be sent to the AI kills alarms. I entered my hallway through the door today, and the cam got temporarily "blinded" by the incoming sun for a moment before it readapted to the situation. I've set the maximum of 5 pictures in the cam settings; that is 5 seconds, and it did not make it. Afterwards I walked 30 seconds (!) in my hallway and my pre-alarm was still not triggered. Checking BI, the clip was fully recorded, so the motion was permanently retriggered - but the retriggers did not send new pictures to the AI for confirmation. So, the bottom line: I was moving around for half a minute in my house, clearly visible as a person, and the clip was never confirmed.
    I had my external tool configured in a way that every 3 seconds a picture was sent to the AI as long as the trigger was active - and it never missed an alarm. I hope this will be fixed ASAP, at least so that a re-trigger always restarts the AI confirmation sequence.
I wrote an email to Ken with these points today (because just moaning in a forum which is not read by the maintainer will not help) ;)
 
@netmax - the delays are mostly a computer issue.

What processor are you using?

Watch the monitor and have someone walk around and watch the CPU usage down at the bottom of BI - if it is bouncing up into yellow and/or maxing out, that is the reason for the delay.

Now I agree that the 5-second image limit, and not re-sending on retriggers, is something that I am sure Ken is working on, but it is a balance as well and he has to make sure it works for most people. People with an older processor will max out the CPU if it is constantly sending pictures to DeepStack.

As of right now, those with the computer horsepower are staying with the 3rd party add-on like AI Tools for that configurability.
 
@wittaj No CPU issues. BI is permanently around 18% CPU load, no peaks, as AI is running separately in a dedicated VM on one of my R720s; it can use 15 vCPUs of the machine's dual Xeon E5-2650Lv2 setup. Runs on Linux, much better performance than on Windows (and I avoid the M$ crap wherever I can for servers). Picture process time is between 250-300ms.

It's just the fact that BI stops feeding it pictures after a maximum of 5 seconds right now :(
 
I'd love to see less than 1-second intervals, but I suspect the timing is based on full frames, which should be happening at 1-second intervals when recording starts.
 
The main stream for AI would help a lot I suspect. Varying the time from 1-5 seconds is nice as long as it's not whole numbers and allows tenths of a second. I'd also guess from an integration standpoint that it will be a little harder to add the main stream feature.
 
I'm getting 65-125ms image processing time in deepstack-gpu on an NVIDIA 1050 Ti, running in Docker on Linux. I'll join the choir: I'm hoping Ken will add a feature where I can throw more images at DeepStack for more accurate recognition. One every second seems to be the maximum so far? I'm getting 15 key frames a second because I'm using MJPEG cameras.
 
I e-mailed Ken earlier today with a suggestion: "[...] the ability to analyze real-time images every X seconds during trigger/activity, not just the first 5.".

I also requested an option to choose MODE when BI starts DS.

BTW, is there any way to bring up the DeepStack console window when BI started it?
 
Man! Something is very wrong with the facial recognition with this thing. I tested it walking all over hitting different cameras and... I mean, C'mon man! Perusing the images that it's picked out, all I see is one UGLY looking guy! :rofl: :rofl: :rofl: