False positive tuning

heyho · n3wb · Jun 10, 2024 · England
I know it's been asked to death, and I have done a lot of searching without finding a real answer, but has anyone managed to tweak Blue Iris (and/or CodeProject) and can say they have noticed a real improvement and reduction in the false positives that are caught?

The usual suspects - rain/snow, clouds, etc. Oh, and these damned spiders and their webs lol
 
Yes LOL.

Many of us eliminated most of the false triggers prior to any AI being available.

Mine got to the point prior to AI that if triggered and sent me a push, I knew a person or vehicle was on my property.

AI just took it the next step further and eliminated the few additional false triggers, AND made it less reliant on taking the time to dial in the motion settings.

The biggest thing is increasing make time - for most fields of view, only Superman is getting thru in under 4 seconds LOL, and yet many have it set to 0.1 seconds LOL

Pay some neighborhood kids to walk thru the field of view and run thru the field of view and time it. That can give you a baseline time for motion detection.
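The walk/run timing test above amounts to a simple crossing-time calculation. Here is a minimal sketch of that arithmetic; the field-of-view width and speeds are assumed example numbers, not measurements from any real camera:

```python
# Rough baseline for "make time": how long a subject takes to cross the
# field of view. All numbers here are assumptions -- time your own scene.
def crossing_time(fov_width_m: float, speed_m_per_s: float) -> float:
    """Seconds a subject needs to cross the visible field of view."""
    return fov_width_m / speed_m_per_s

walk = crossing_time(12.0, 1.4)   # ~1.4 m/s is a typical walking pace
run = crossing_time(12.0, 5.0)    # ~5 m/s for a sprinting kid

# Even the runner needs about 2.4 s to cross a 12 m wide view, so a
# 0.1 s make time is far shorter than it has to be.
print(f"walk: {walk:.1f} s, run: {run:.1f} s")
```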

Min object size is next. Don't get greedy with a field of view, trying to identify people way out in the distance.

Zone crossings are the next thing to work on.

For some, checking the box that requires an object to move X number of pixels helps.

Edge Vector instead of simple.

There are ways, it just isn't done in 2 seconds. It takes some time to dial each camera in, but if you take the time, you can make it very reliable.

Or just update cameras to ones with AI and be done with it LOL.
 
For "make time" = the amount of time motion has to take place before it is considered motion - so a 1.0 sec make time means it will not trigger if motion stops at 0.9 sec, right?

While it is in make time, are the video frames queued to be sent to AI? If your camera is running at 25fps, does that mean that there'd be 25 frames sent to AI once BI goes into triggered state, or does it send frames starting at the end of make time?
 
Correct on the motion of 0.9 seconds.

Incorrect on what is sent to AI. On a trigger, it sends the most recent frame that triggered to AI and then is done, unless you tell it to send more at a specified time interval; it does not send every frame to AI.
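The behavior described above can be sketched as a toy simulation. This is my reading of the thread, not Blue Iris's actual implementation: motion must persist for the full make time before a trigger fires, and only the frame that completes the make time goes to AI:

```python
# Toy sketch of how "make time" gates a trigger (an assumption based on
# this thread, not BI's real code). Motion must be continuous for the
# whole make time; the frame that completes it is the one sent to AI.
def first_ai_frame(motion_frames, fps, make_time_s):
    """Return the index of the frame sent to AI, or None if never triggered.

    motion_frames: list of bools, True where motion was detected.
    """
    needed = int(make_time_s * fps)   # consecutive motion frames required
    run = 0
    for i, moving in enumerate(motion_frames):
        run = run + 1 if moving else 0
        if run >= needed:
            return i                  # only this single frame goes to AI
    return None                       # motion stopped before make time elapsed

fps = 25
# Motion lasting 0.9 s against a 1.0 s make time: never triggers.
short = [True] * int(0.9 * fps)
# Motion lasting 1.2 s: triggers, and the AI frame lands ~1.0 s after
# motion began (not 25 frames -- just the one).
long = [True] * int(1.2 * fps)
print(first_ai_frame(short, fps, 1.0))   # None
print(first_ai_frame(long, fps, 1.0))    # 24 -> the frame at the 1 s mark
```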
 
Most recent frame would mean after make time, so if your make time was 1.0 sec, then the first frame it would send would be 1 sec after motion started, correct?

I was wondering why, for my LPR, I kept missing the plate: make time and the zone trigger were capturing the vehicle leaving the frame.
 
That is correct unless you tell it to analyze pre-trigger.

So under the AI settings is the min confidence you want before it says yes or no to whether it found what you want.

+ pre-trigger images is how many images you want it to analyze prior to the trigger.

+ post-trigger images is how many images you want it to analyze after it is triggered.

analyze one image each X ms is how long you want between the images sent to AI for processing.
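Put together, those three settings imply a schedule of image timestamps around the trigger. A minimal sketch of that schedule, assuming the interval applies evenly on both sides of the trigger (the function name and layout are illustrative, not BI internals):

```python
# Hypothetical schedule of images handed to AI around a trigger, given
# N pre-trigger images, M post-trigger images, and one image each X ms.
# This mirrors my reading of the settings above, not BI's actual code.
def ai_image_times_ms(pre, post, interval_ms):
    """Timestamps (ms, relative to the trigger at t=0) of images sent to AI."""
    before = [-interval_ms * i for i in range(pre, 0, -1)]
    after = [interval_ms * i for i in range(post + 1)]  # includes trigger frame
    return before + after

# 2 pre-trigger + 2 post-trigger images, one image each 500 ms:
print(ai_image_times_ms(2, 2, 500))   # [-1000, -500, 0, 500, 1000]
```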

1737512736892.png


For LPR, there is a little more nuance to make sure that the plate is getting captured. That is very field of view dependent.
 
I do tell it to pre-trigger:

Screenshot from 2025-01-21 19-07-46.png

However, the first 3 images are exactly the same (T-685, T-358, and T0 are identical images; only the yellow box changes):

Screenshot from 2025-01-21 19-06-43.png