Thanks for tagging me in @wittaj, and for the kind words. Always happy to help and (I hope it goes without saying at this point) if anyone here needs it, just DM me and I'll set up a remote connection to help you dial in and calibrate your cams for your specific FOV and target requirements.
Yes, as I mentioned when I worked with you initially, leaving it at 0,0 is the best way to ensure that the AI algorithms make the target decisions for a correctly configured / dialed-in FOV. In most cases 0,0 will yield the best results. There are some circumstances where I recommend, or will dial in, specific target sizes for improved target captures. Examples of this are crowded FOVs (think highly populated foreground objects with distant target requirements) or FOVs where you want to hyper-focus a target capture based on a requirement. Other examples are PTZ rollouts (based on install location, FOV, desired target, time on target etc.) where, as I always show, the key is consistent tracking throughout a given time or location window. This is important to ensure that even in snow or low-contrast conditions on site you can track a target, as I've shown in my videos.

With further improvements to the AI algorithms in the security profiles used by Dahua/Hik etc., plus improvements in standard Motion Detection such as SMD (SMD 4.0 is on the horizon for release this year), these newer cams are attempting to take the guesswork out of target config. Does this mean the cams don't need proper dialing in? Absolutely not. To be honest I find they need it more now than before, BUT the goal for successful future SecTech devices is really a combination of AI + ML (on cam) that truly learns and is trained from the environment the cams are deployed in, vs just ML training completed in a lab, embedded onto SoCs and shipped as statically programmed devices.
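To illustrate the idea (purely a conceptual sketch in Python, not Dahua's actual firmware code, and every name in it is made up), the min target size is effectively a gate sitting on top of whatever the AI classifies. At 0,0 everything the AI calls a target passes through; a deliberate non-zero minimum just knocks out the small false triggers:

```python
# Hypothetical sketch (not Dahua's real firmware logic) of how a minimum
# target size acts as a gate on top of the AI classifier's detections.
# A detection is kept only if its bounding box meets the configured minimum;
# with min_w = min_h = 0 the AI's own decision is passed through untouched.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # e.g. "human", "vehicle"
    width_px: int   # bounding-box width in pixels
    height_px: int  # bounding-box height in pixels

def passes_min_size(det: Detection, min_w: int = 0, min_h: int = 0) -> bool:
    """Return True if the detection survives the minimum-size filter."""
    return det.width_px >= min_w and det.height_px >= min_h

detections = [
    Detection("human", 38, 110),   # real person at mid-distance
    Detection("human", 14, 22),    # small blob misclassified as human (e.g. a dog)
]

# min size 0,0: the AI's own classification decides everything
print([d.label for d in detections if passes_min_size(d)])          # both kept

# a deliberate minimum knocks out the small false trigger
print([d.label for d in detections if passes_min_size(d, 20, 40)])  # only the real person
```

That filtering is all a non-zero minimum buys you, which is why it's generally only worth dialing in for the crowded-FOV or hyper-focused cases above.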
Specific to this current FW, I have seen improvements in the backend code in the latest release (been testing since late Dec) which have assisted in reducing environmental false triggers (snow, wind, rain etc.), as @looney2ns and @wittaj also commented, AND I've also seen improved speed of target acquisition. There are, however, still some tweaks that could improve 'real' target identification, as well as address some edge-case installs/FOVs and cam releases (as I mentioned a few pages back) that still need integration here. Don't get me wrong, it's improving, but a little more work and this could be as close to perfect as we'll see (it will never be 100%).

Let's not forget, though, that SmartIR is not part of the release being discussed here (21-11-05), and of course it also plays a role in target acquisition (in darker and/or night-time scenarios) and correct AI identification against the algorithm. I believe we'll see that around the end of Feb / March timeframe (due to the coming new year in the JAPAC region), and it will impact the performance of this FW (in a positive way) too.

I continue to feed back to Dah and other engineering / dev teams almost weekly on what I am seeing across different FW, code and device releases to get these improvements integrated ASAP. This is also to try to ensure that we all (along with the manufacturers) benefit from consistent quality in releases going forward and, in the case of FW, that the right base code is used where cams share chipsets on a release (i.e. VOLT etc.). An example of this is SmartIR / SMD: rather than re-inventing the wheel on each release (FW or cam), work already completed (e.g. the SmartIR re-work that I proposed and worked on with Dahua last year) becomes the default, which reduces frustration for us users, installers and integrators, and ideally reduces day-0 issues. It's also important that manufacturers have a 'pause for thought' before trying to use the same code when cams are very different (think 5442 vs Color4K-X, which originally shared the same base code and AGC algorithms), which otherwise leads to issues when the cams are used in the real world.
Glad we're seeing these improvements, and there will be more to come. We'll also see the benefit of more 1/1.2" sensors this year, which, by improving clarity (without boosting to ridiculous MP counts), also improve the hit ratio of AI/ML. For those that still see issues, I definitely recommend throwing an SD card into your cam and looking at what's recorded locally/directly by the cam rather than through a standalone NVR or NVR platform. This will help you identify issues both in config and in FOV, as well as isolate anything introduced by the recording platform itself.
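If it helps with that kind of troubleshooting, here's a rough Python/OpenCV sketch for grabbing a short clip straight from the cam over RTSP, so you can compare it against what the NVR stores. The URL pattern is the commonly documented Dahua format; the IP, credentials and filename are placeholders for your own setup, so treat this as a sketch rather than anything official:

```python
# Hedged helper sketch: record a short clip straight from the camera's own
# RTSP stream, bypassing the NVR, to compare against what the NVR stores.
# The URL pattern below is the commonly documented Dahua format; the IP,
# credentials and output path are placeholders you'd replace for your setup.

import cv2

RTSP_URL = "rtsp://admin:password@192.168.1.108:554/cam/realmonitor?channel=1&subtype=0"

def record_direct(url: str, out_path: str = "direct_from_cam.mp4", seconds: int = 30) -> None:
    cap = cv2.VideoCapture(url)
    if not cap.isOpened():
        raise RuntimeError("Could not open RTSP stream - check URL/credentials")

    fps = cap.get(cv2.CAP_PROP_FPS) or 15.0   # some streams report 0; fall back to a guess
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

    for _ in range(int(fps * seconds)):
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(frame)

    cap.release()
    writer.release()

if __name__ == "__main__":
    record_direct(RTSP_URL)
```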
HTH, and I look forward to the next code release. My thanks as always to @EMPIRETECANDY too, as he also helps put pressure on and 'motivate' the manufacturers to stand up and pay attention.
As you know, the manuals for these cams are basically non-existent, so much of this is learned through trial and error.
Back when I was a NOOB and was having trouble with false triggers, @Wildcat_1 offered to TeamViewer in and help me out. If you are not aware, he knows the ins and outs of these cameras like nobody else and is always providing input to Dahua engineers on their code. The first thing he did was change my min object size to 0,0 and say let the AI do its thing, and only put in min sizes to knock out false triggers like a dog being mistaken for a human. The min object size of 0,0 has worked with every camera I have added ever since.
I have posted before about how IVS did not trigger all night during a snowstorm, except when there really was a person.
View attachment 116276
And here is a clip from another camera from the snow we had last night, with some similar floaters to the ones that set yours off. BI motion picked it up, but not a single IVS trigger except when a person triggered it:
View attachment 116277