5.4.6 - May 17, 2021

fenderman

Staff member
As requested by several customers, a continuous + triggered recording mode is now offered
separately from the continuous + alerts mode. This mode ensures that the main stream is
also recorded for cancelled alerts as well as confirmed alerts.
The Analyze image with DeepStack option on the viewer’s right-click menu has been replaced by a
Testing & tuning menu with an Analyze with DeepStack option. This option pushes
BVR video frames through DeepStack as quickly as possible (it may not always update in real time unless you pause and step).
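For context, "pushing a frame through DeepStack" means an HTTP POST of a JPEG to DeepStack's detection endpoint, which returns JSON predictions. Below is a minimal sketch of parsing such a response; the sample JSON is shaped like DeepStack's documented output, and the POST itself is shown only as a comment since it needs a running server (URL and port are assumptions, adjust for your install):

```python
import json

# The actual request would look roughly like this (requires a live DeepStack server):
# import requests
# with open("frame.jpg", "rb") as f:
#     r = requests.post("http://localhost:5000/v1/vision/detection",
#                       files={"image": f})

def parse_detections(response_text, min_confidence=0.4):
    """Parse a DeepStack /v1/vision/detection JSON response and
    return (label, confidence) pairs at or above min_confidence."""
    data = json.loads(response_text)
    return [(p["label"], p["confidence"])
            for p in data.get("predictions", [])
            if p["confidence"] >= min_confidence]

sample = ('{"success": true, "predictions": '
          '[{"label": "person", "confidence": 0.87, '
          '"x_min": 10, "y_min": 20, "x_max": 110, "y_max": 220}]}')
print(parse_detections(sample))  # [('person', 0.87)]
```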
With dual-streaming and direct to disc enabled, main stream frames that are sent through
DeepStack when the camera is triggered are flagged in the BVR file. As these are played in
the viewer with the Analyze with DeepStack option enabled, the image border will be
shown in blue, allowing you to identify precisely which frames were used for the alert
confirmation. A catch-22 arises however if you are using the continuous + alerts recording
mode, as these frames are never actually recorded—use continuous or continuous + triggered if
you want to take advantage of this tuning feature.
When analyzing multiple frames with DeepStack against both “to confirm” and “to cancel”
object lists, an effort is made to choose the “best” confirmed image according to higher
confidences and the presence of (more) faces. By default, the alert will continue with a
single found “to confirm” object, but you can force continued analysis by placing any object
label in the “to cancel” box (even one that will never be found).
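A plausible sketch of that "best image" tie-break (my guess at the logic from the description above, not Ken's actual code): prefer the frame with the highest confirmed-object confidence, then break ties by the number of detected faces.

```python
def pick_best(frames):
    """frames: list of dicts like {"confidence": float, "faces": int}.
    Choose the frame with the highest confirmed-object confidence,
    breaking ties by number of detected faces."""
    return max(frames, key=lambda f: (f["confidence"], f["faces"]))

candidates = [
    {"id": 1, "confidence": 0.62, "faces": 0},
    {"id": 2, "confidence": 0.81, "faces": 1},
    {"id": 3, "confidence": 0.81, "faces": 2},  # same confidence as 2, more faces
]
print(pick_best(candidates)["id"])  # 3
```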
The “to confirm” and “to cancel” boxes may now contain labels ending with the * wildcard.
This is handy if you have multiple faces for a single person—you might use chris* for
example to match defined faces for chris_1, chris_2, chris_side, etc.
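The trailing * behaves like ordinary filename-style wildcard matching; Python's fnmatch illustrates the idea (Blue Iris's internal matching may differ in detail, this is just the concept):

```python
from fnmatch import fnmatch

# Defined face labels, e.g. multiple trained faces for one person
labels = ["chris_1", "chris_2", "chris_side", "dave_1"]

# A "to confirm" entry of chris* matches every chris_ variant
matched = [label for label in labels if fnmatch(label, "chris*")]
print(matched)  # ['chris_1', 'chris_2', 'chris_side']
```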
A new right-click option in the Alerts list allows you to manually cancel and confirm alerts.
 
Auto upgrade failed for me on this one. Luckily I had taken a fresh "export settings" right before as I do for all the major updates. Had to blow out the usual 2 spots in the registry, get it fired up, re-install the service and then re-import settings. All good now.

EDIT: Including the 2 keys because someone asked me privately:

HKEY_LOCAL_MACHINE\SOFTWARE\Perspective Software and
HKEY_CURRENT_USER\SOFTWARE\Perspective Software
 
Has anyone else had slower loading times loading the recent clips lately with the last few updates when using UI3?

As soon as I opened UI3 the clips used to load nearly instantly and now it takes awhile for the clips to load.
 
I really like the Test and Tune function. It really pushes my i5 CPU though. Just might be looking for an upgraded PC in the near future.
 
I really like the Test and Tune function. It really pushes my i5 CPU though. Just might be looking for an upgraded PC in the near future.

I noticed that when I set DeepStack to medium or high it's still using the same amount of CPU. If I have it test on the sub stream it uses maybe 10% less CPU at most. I'm running the new i3-10100 and it's using 70% CPU to test and tune, set to 3 images a second.
 
Greetings back to you! Welcome aboard. You should introduce yourself in New Member Introductions

Great bunch of people here and TONS of information!
 
Feedback from today using the camera in post #4 as follows:-
1. I applied the setting changes indicated by fenderman
2. I reviewed my motion settings (attached), temporarily inverting zone A (which is the full image) so that all the other zones can be seen in one snapshot. If my settings are misguided, please advise accordingly.
[Attached: Screenshot 2021-05-19 214231.png, Screenshot 2021-05-19 214413.png, Screenshot 2021-05-19 214811.png]

3. For this proving exercise, I am only using the camera's main stream. It's running at 15 fps, key frames at 1s.
4. I updated BI to 5.4.6.1

Daytime results:-
59 confirmed alerts, all valid - Excellent!
40 alert triggers correctly cancelled (trees swaying wildly on a sunny but very windy day) - Excellent!
15 alert triggers cancelled for cars, rabbits and birds - perhaps all can be explained:

Rabbits - not in DeepStack's inventory. When using "Analyze with DeepStack", DeepStack changed its mind from moment to moment - dog, cat, zebra. I'm guessing that if the type of animal changes in successive real-time one-second images, the alert is cancelled. Also, a rabbit's motion is stop/start, so the motion rectangle alternates between red and yellow.

Birds - as with rabbits, birds have a very quick, jerky, stop/start motion, and when analysing with DeepStack it also thinks they are a cat some of the time.

Cars - 19 were confirmed but another 10 were cancelled despite being triggered and in clear view of Blue Iris's motion detector. But they are mostly only in view for roughly one second. Is the timing of the one-second real-time images captured by/for DeepStack such that it's too hit and miss?

One more very significant comment is that it appears DeepStack does not confirm alerts for confidence levels below 40%. So my "Min Confidence %" setting of 5% is not valid. In an attempt not to miss valid alerts, I think that many of you guys are also using low percentage numbers. Unfortunately, DeepStack defaults this setting back to 40% minimum.
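If DeepStack really does enforce a 40% floor, the effective threshold would behave like the sketch below (my reading of the observed behaviour, not confirmed internals):

```python
DEEPSTACK_FLOOR = 0.40  # observed minimum, per the post above

def effective_min_confidence(user_setting):
    """A 'Min Confidence %' set below the floor is silently raised to it."""
    return max(user_setting, DEEPSTACK_FLOOR)

print(effective_min_confidence(0.05))  # 0.4  (the 5% setting has no effect)
print(effective_min_confidence(0.60))  # 0.6  (settings above the floor apply)
```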
 
I'd written support last week with some suggestions for the AI, and I don't know if Ken got the inspiration himself, from someone else, or from me, but this is exactly what I had suggested. Theoretically this fixes problems like "a person approaching the camera, still too far away to see his face, but DeepStack stops detecting as soon as it detects a person and ignores his face as he gets closer to the camera." It should be able to keep detecting (I put "zebra" in my "to cancel" box so it'll keep going) until it maximizes the confidences involved.

At the same time, makes me fear for the future. This is a huge step towards detecting anything and anyone at all times, now in the hands of all of us xD

@Dave Looking good! I like your zoning. I have similar zones for one of my cameras. But I have a background zone that covers the entire frame so that Blue Iris will keep tracking objects in between zones and/or have an easier time understanding what constitutes an object when they're in-between zones. Did you have any missed motion that you know of? If not, then maybe that background all-encompassing zone isn't that important to have.
 
Running clips through "tuning" shows me some evergreens that are "broccoli" and a coach light that's a "person". Entertaining at least :)