5.4.6 - May 17, 2021

fenderman

Staff member
Joined
Mar 9, 2014
Messages
36,897
Reaction score
21,250
5.4.6 - May 17, 2021
As requested by several customers, a continuous + triggered recording mode is now offered
separately from the continuous + alerts mode. This mode ensures that the main stream is
recorded for cancelled alerts as well as confirmed ones.
Replacing the Analyze image with DeepStack option on the viewer’s right-click menu is a
Testing & tuning menu with an Analyze with DeepStack option. This option pushes
BVR video frames through DeepStack as quickly as possible (the display may not always update in real time unless you pause and step).
With dual-streaming and direct to disc enabled, main stream frames that are sent through
DeepStack when the camera is triggered are flagged in the BVR file. As these are played in
the viewer with the Analyze with DeepStack option enabled, the image border will be
shown in blue, allowing you to identify precisely which frames were used for the alert
confirmation. A catch-22 arises, however, if you are using the continuous + alerts recording
mode, as these frames are never actually recorded; use continuous or continuous + triggered if
you want to take advantage of this tuning feature.
When analyzing multiple frames with DeepStack against both “to confirm” and “to cancel”
object lists, an effort is made to choose the “best” confirmed image according to higher
confidences and the presence of (more) faces. By default, the alert will continue with a
single found “to confirm” object, but you can force continued analysis by placing any object
label in the “to cancel” box (even one that will never be found).
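The "best image" selection described above can be sketched roughly. This is a guess at the scoring rule (prefer more faces, then higher confidence), not Blue Iris's actual code; the frame structure here is invented for illustration:

```python
# Hypothetical sketch of choosing the "best" confirmed frame:
# prefer frames with more detected faces, then higher top confidence.
# Not Blue Iris code - a toy model of the behavior described above.

def best_frame(frames):
    """frames: list of dicts like
    {"image": ..., "faces": int, "confidences": [float, ...]}"""
    confirmed = [f for f in frames if f["confidences"]]
    if not confirmed:
        return None  # nothing on the "to confirm" list was found
    return max(confirmed, key=lambda f: (f["faces"], max(f["confidences"])))

frames = [
    {"image": "f1", "faces": 0, "confidences": [0.62]},
    {"image": "f2", "faces": 1, "confidences": [0.55]},
    {"image": "f3", "faces": 0, "confidences": []},  # nothing found
]
print(best_frame(frames)["image"])  # f2: a detected face outranks a higher confidence
```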
The “to confirm” and “to cancel” boxes may now contain labels ending with the * wildcard.
This is handy if you have multiple faces for a single person—you might use chris* for
example to match defined faces for chris_1, chris_2, chris_side, etc.
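The trailing `*` wildcard behaves like simple prefix matching. A quick sketch using Python's `fnmatch` for illustration (Blue Iris's own matcher may differ in details):

```python
from fnmatch import fnmatch

# Defined face labels, as in the chris_1 / chris_2 / chris_side example above
labels = ["chris_1", "chris_2", "chris_side", "dave_1"]

pattern = "chris*"  # entry placed in the "to confirm" box
matched = [label for label in labels if fnmatch(label, pattern)]
print(matched)  # ['chris_1', 'chris_2', 'chris_side']
```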
A new right-click option in the Alerts list allows you to manually cancel and confirm alerts.
 

cscoppa

Getting the hang of it
Joined
Dec 14, 2019
Messages
50
Reaction score
26
Auto upgrade failed for me on this one. Luckily I had taken a fresh "export settings" right before as I do for all the major updates. Had to blow out the usual 2 spots in the registry, get it fired up, re-install the service and then re-import settings. All good now.

EDIT: Including the 2 keys because someone asked me privately:

HKEY_LOCAL_MACHINE\SOFTWARE\Perspective Software and
HKEY_CURRENT_USER\SOFTWARE\Perspective Software
 

Dave Lonsdale

Pulling my weight
Joined
Dec 3, 2015
Messages
456
Reaction score
195
Location
Congleton Edge, UK
OK, thank you. I hope it works but will mention that some cars (moving faster) are confirmed and some cancelled. I will report back - perhaps it will have to be tomorrow.
 

fenderman

Staff member
Joined
Mar 9, 2014
Messages
36,897
Reaction score
21,250
OK, thank you. I hope it works but will mention that some cars (moving faster) are confirmed and some cancelled. I will report back - perhaps it will have to be tomorrow.
Also, remove the spaces after the commas in your list of objects. And remember that areas not covered by any zone are not analyzed by AI.
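One plausible reason spaces after commas break matching (an assumption on my part; how Blue Iris actually parses the list isn't documented here): if the list is split on `,` without trimming whitespace, an entry like `car` becomes `" car"` and no longer matches DeepStack's label exactly:

```python
# Hypothetical illustration of why "person, car" may fail while "person,car" works.
raw = "person, car, truck"        # spaces after commas
labels = raw.split(",")
print(labels)                     # ['person', ' car', ' truck']
print("car" in labels)            # False: the entry is ' car', not 'car'

clean = "person,car,truck"        # no spaces
print("car" in clean.split(","))  # True
```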
 

Dan111

n3wb
Joined
Mar 15, 2021
Messages
27
Reaction score
9
Location
19382
Has anyone else noticed slower load times for recent clips over the last few updates when using UI3?

As soon as I opened UI3, the clips used to load nearly instantly; now it takes a while for them to load.
 

105437

BIT Beta Team
Joined
Jun 8, 2015
Messages
1,995
Reaction score
881
I really like the Test and Tune function. It really pushes my i5 CPU though. Just might be looking for an upgraded PC in the near future.
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,436
Reaction score
47,561
Location
USA
The test and tune also picks up all the AI labels you don't have keyed in, which for tuning is kinda cool: it spots stop signs, toothbrush, etc. Yeah, I played around lol.
 

BORIStheBLADE

Getting comfortable
Joined
Feb 14, 2016
Messages
739
Reaction score
2,066
Location
North Texas
I really like the Test and Tune function. It really pushes my i5 CPU though. Just might be looking for an upgraded PC in the near future.
I noticed that when I set DeepStack to medium or high, it's still using the same amount of CPU. If I have it test on the substream, it uses maybe 10% less CPU at most while testing. I'm running the new i3-10100 and it's using 70% CPU to test and tune, set to 3 images a second.
 

Dave Lonsdale

Pulling my weight
Joined
Dec 3, 2015
Messages
456
Reaction score
195
Location
Congleton Edge, UK
Feedback from today using the camera in post #4 as follows:
1. I applied the setting changes indicated by fenderman.
2. I reviewed my motion settings (attached), temporarily inverting zone A (which is the full image) so that all the other zones can be seen in one snapshot. If my settings are misguided, please advise accordingly.
[Screenshots attached: Screenshot 2021-05-19 214231.png, Screenshot 2021-05-19 214413.png, Screenshot 2021-05-19 214811.png]

3. For this proving exercise, I am only using the camera's main stream. It's running at 15 fps, key frames at 1s.
4. I updated BI to 5.4.6.1

Daytime results:
59 confirmed alerts, all valid - Excellent!
40 alert triggers correctly cancelled (trees swaying wildly on a sunny but very windy day) - Excellent!
15 alert triggers cancelled for cars, rabbits and birds - perhaps all can be explained:

Rabbits - not in DeepStack's inventory. When using "Analyse with DeepStack", DeepStack changed its mind from moment to moment - dog, cat, zebra. I'm guessing that if the type of animal changes in successive real-time one-second images, the alert is cancelled. Also, a rabbit's motion is stop/start, so the motion rectangle alternates between red and yellow.

Birds - as with rabbits, birds have a very quick, jerky, start/stop motion, and when analysing with DeepStack they too are identified as a cat some of the time.

Cars - 19 were confirmed but another 10 were cancelled despite being triggered and in clear view of Blue Iris's motion detector. But they are mostly only in view for roughly one second. Is the timing of the one-second real-time images captured for DeepStack such that it's too hit and miss?

One more very significant comment: it appears DeepStack does not confirm alerts for confidence levels below 40%, so my "Min Confidence %" setting of 5% is not valid. In an attempt not to miss valid alerts, I think many of you are also using low percentage numbers. Unfortunately, DeepStack defaults this setting back to a 40% minimum.
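The clamping described above can be shown with a toy model. This is only an illustration of the reported behavior (an internal 40% floor is an observation from this thread, not a documented DeepStack setting):

```python
# Toy model of the observed behavior: a user-set minimum confidence
# below DeepStack's apparent internal floor has no effect.
DEEPSTACK_FLOOR = 0.40  # observed minimum, per the post above

def effective_threshold(user_min):
    # A setting below the floor is effectively raised to the floor.
    return max(user_min, DEEPSTACK_FLOOR)

print(effective_threshold(0.05))  # 0.4 - the 5% setting is not honored
print(effective_threshold(0.60))  # 0.6 - settings above the floor work as expected
```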
 

m_listed

Getting the hang of it
Joined
Jun 11, 2016
Messages
176
Reaction score
57
I'd written support last week with some suggestions for the AI, and I don't know if Ken got the inspiration himself, from someone else, or from me, but this is exactly what I had suggested. Theoretically it fixes problems like: "Person approaching camera, still too far to see his face, but DeepStack stops detecting as soon as it detects a person and ignores his face as he gets closer to the camera." It should be able to keep detecting (I put "zebra" in my "to cancel" box so it'll keep going) until it maximizes the confidences involved.

At the same time, makes me fear for the future. This is a huge step towards detecting anything and anyone at all times, now in the hands of all of us xD

@Dave Looking good! I like your zoning. I have similar zones for one of my cameras. But I have a background zone that covers the entire frame so that Blue Iris will keep tracking objects in between zones and/or have an easier time understanding what constitutes an object when they're in-between zones. Did you have any missed motion that you know of? If not, then maybe that background all-encompassing zone isn't that important to have.
 