AI Capture Fails!

Philip Gonzales

Getting comfortable
Joined
Sep 20, 2017
Messages
697
Reaction score
551
I found this alert a bit comical, and I've heard of other people getting some wonky AI alerts, so I thought it would be cool to start a thread where we can share our AI capture fails!

Don't get me wrong, I love AI! I just recently learned how to install DeepStack and then CodeProject's SenseAI, and before that I was paying for Sentry. I'm very thankful to be able to filter out the noise and only get alerted on things I find important. I'm no AI expert, but it seems to work pretty darn well! It can be a bit funny when something is misclassified, though.

I don't know about you, but I've never seen a person with wings before!

1664839877049.jpeg

Stray cat! lol, this one is more understandable, but still a good one nonetheless!

1664840045068.jpeg

Another common one, but one that makes me chuckle a bit... When using the default model, I would get a lot of cars and trucks classified as boats! It would be even funnier if it were an old Cadillac or something like that.
1664840198082.jpeg

Anyway, hope to see your AI Capture Fails next! Keep it going! :)

Regards,

Philip
 

Philip Gonzales

Excuse the mess...

A few from my backyard. I get alerts from my backyard sent to an email inbox dedicated to my Blue Iris alerts, which automatically marks the emails as read. I usually just review the alerts after the fact, especially since it keeps thinking my doggos are people.

Just for fun I ran one of the images through the animal model... Thinks my mop bucket is a bird lol.

BD.20221004_191824_99478827.jpgBD.20221004_185754_318997350.jpgBD.20221004_160308_311741724.jpgBD.20221004_144446_234622355.jpg
Screenshot_20221004-201606.png
 

Keyboard

Getting comfortable
Joined
Oct 25, 2016
Messages
278
Reaction score
531
Location
Owings Mills, MD
Think of it this way ...
It's close to Halloween, and the trick-or-treaters are trying out their costumes. And thus, the AI software wasn't fooled by their costumes.

My favorite costumes are the person disguised as a kid's tricycle and the deer disguised as a wheelbarrow.
 

looney2ns

IPCT Contributor
Joined
Sep 25, 2016
Messages
15,521
Reaction score
22,657
Location
Evansville, In. USA
Lol wrong company. The AI even put a square around the hidden arrow in the logo. I feel like the AI is trolling me for making this post! :wow:

View attachment 142318
Try setting your brightness to 40, leaving contrast at 50, on that camera. Or try lowering the gamma a little.
 

Zz44332211

n3wb
Joined
Jul 24, 2020
Messages
26
Reaction score
8
Location
Us
I know that this thread is mostly focused on the amusement of classification failures, but some of these results are probably expected, assuming that the percentage BI reports is the model's confidence score for the match.

The example percentages (which I have to assume are confidence scores) are all well below 90%. People who do AI (i.e., machine learning) for a living characterize the effectiveness of a detection algorithm with a ROC curve: you sweep the decision threshold and plot your correctly categorized matches (the true positive rate) against your incorrect matches (the false positive rate). In a ROC graph, the closer the curve is to the top and left axes, the better your classifier is (and it usually needs >95% true positives at a low false positive rate to be useful).

A detector that is at 80% confidence or less is basically useless because you have too many incorrect matches (false positives).
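For anyone curious, the ROC idea above is easy to sketch in a few lines of Python. This is just an illustration: the scores and ground-truth labels below are made-up numbers, not output from BI or any real detector.

```python
# Sketch: build ROC points from (score, label) pairs by sweeping a threshold
# from high to low. Each point is (false positive rate, true positive rate).

def roc_points(scores, labels):
    """labels: 1 = the detection really was the reported object, 0 = it wasn't.
    Returns one (FPR, TPR) point per threshold step, best-score first."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)            # total real matches
    neg = len(labels) - pos      # total non-matches
    tp = fp = 0
    points = [(0.0, 0.0)]        # threshold above every score: nothing accepted
    for score, is_match in pairs:
        if is_match:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

# Invented example: six detections with their confidence scores
scores = [0.95, 0.90, 0.72, 0.65, 0.55, 0.40]
labels = [1,    1,    0,    1,    0,    0]
for fpr, tpr in roc_points(scores, labels):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

A good classifier accepts the two 0.9+ detections before any false positive shows up, which is exactly the "hug the top-left corner" shape described above.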

I don't use DeepStack, SenseAI, or BI, but I wonder if BI has a minimum confidence threshold below which matches are ignored or given less weight. I also wonder whether these objects had other candidate matches at lower confidence scores that simply aren't shown. If there were a way to train on the false positives, that could also help make the classifier more effective.

For example, the real bear matches the features of a deer at 72% but might also match the features of a bear at 71% or less. BI probably chooses to report the match with the greatest confidence score, or never receives the other candidate matches as an option to report.
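That "report only the best match" behavior is just a sort over per-class scores. A minimal sketch, assuming (hypothetically) that the detector hands back a score for each candidate class on one bounding box; the class names and numbers are invented for the bear/deer example, not real DeepStack or BI output:

```python
# Sketch: rank candidate labels for a single detection by confidence score,
# optionally dropping anything below a minimum threshold. Scores are invented.

def top_matches(class_scores, min_confidence=0.0):
    """Return (label, score) pairs sorted best-first, filtered by threshold."""
    ranked = sorted(class_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(label, score) for label, score in ranked if score >= min_confidence]

# Hypothetical scores for the "real bear reported as a deer" case above
scores = {"deer": 0.72, "bear": 0.71, "horse": 0.30}
print(top_matches(scores))          # every candidate, best first
print(top_matches(scores, 0.5))     # only the reasonably confident ones
print(top_matches(scores)[0][0])    # what a "best match only" UI would show
```

With scores this close (72% vs. 71%), showing only the winner hides how ambiguous the detection really was, which is the point being made above.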
 

robpur

Getting comfortable
Joined
Jul 31, 2014
Messages
274
Reaction score
1,322
Location
Washington State
A detector that is at 80% confidence or less is basically useless because you have too many incorrect matches (false positives).
This depends on the use case and expectation. I find DeepStack and CodeProject.AI Server quite useful even though they are not highly accurate. I don't care if AI calls a Coyote a sheep, a dog or whatever. As long as AI can differentiate between things that I'm interested in, such as people, vehicles and critters, and things that I'm not interested in such as blowing trees and moving shadows, then I'm good.

A common false positive that I see is that bees and wasps on the lens of a camera are often called bears. They do look a bit like the profile of a bear, but a human would not make that mistake based on size. I've seen numerous false alerts that would have been avoided if size, and perhaps position, were taken into consideration. For example, I sometimes receive alerts of a train in one of my trees, and I've received alerts about boats in my driveway where a boat would not fit.
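That kind of size sanity check is simple to express in code. A rough sketch, assuming the detector reports a label plus a pixel bounding box; the labels and the "plausible area" table are made-up illustration values, not from Blue Iris or any real system:

```python
# Sketch: reject detections whose bounding box is implausibly small or large
# for the reported label. All size ranges here are invented examples.

# (min_frac, max_frac): plausible box area as a fraction of the whole frame
PLAUSIBLE_AREA = {
    "bear":  (0.02, 0.60),   # a real bear fills a fair chunk of the frame
    "train": (0.10, 1.00),   # a train should never be a tiny speck in a tree
    "boat":  (0.05, 0.90),
}

def plausible(label, box, frame_w, frame_h):
    """box = (x, y, w, h) in pixels. True if the box area falls inside the
    plausible range for this label; labels without an entry always pass."""
    lo, hi = PLAUSIBLE_AREA.get(label, (0.0, 1.0))
    frac = (box[2] * box[3]) / (frame_w * frame_h)
    return lo <= frac <= hi

# A wasp on the lens: a 40x30 px "bear" in a 1920x1080 frame -> rejected
print(plausible("bear", (500, 400, 40, 30), 1920, 1080))
# A bear-sized box in the same frame -> kept
print(plausible("bear", (200, 100, 900, 700), 1920, 1080))
```

Position could be handled the same way, e.g. rejecting "boat" boxes that land inside a driveway zone, but the per-label area check alone would already catch the wasp-as-bear and train-in-a-tree alerts described above.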

AI built into my Dahua cams works quite well and it's extremely rare that I get a false alert, but the cams only detect people and vehicles which is less challenging than trying to differentiate between a long list of objects. I believe that the cams can be calibrated for size to increase accuracy, but I've not felt the need so I haven't messed with it.

It would be great to have high AI detection accuracy, but the current state of AI with BI is much better than plain motion detection, at least for me.
 

Zz44332211

n3wb
Joined
Jul 24, 2020
Messages
26
Reaction score
8
Location
Us
This depends on the use case and expectation. I find DeepStack and CodeProject.AI Server quite useful even though they are not highly accurate. I don't care if AI calls a Coyote a sheep, a dog or whatever. As long as AI can differentiate between things that I'm interested in, such as people, vehicles and critters, and things that I'm not interested in such as blowing trees and moving shadows, then I'm good.
That is an excellent point; often the category of interesting activity matters more than a perfect detection.


AI built into my Dahua cams works quite well and it's extremely rare that I get a false alert, but the cams only detect people and vehicles which is less challenging than trying to differentiate between a long list of objects. I believe that the cams can be calibrated for size to increase accuracy, but I've not felt the need so I haven't messed with it.
For security cameras, I really only have experience using the Dahua (well, Andy’s) camera with a Dahua NVR. The Dahua IVS implementation (once adjusted using the amazing guidance on this forum and DahuaWiki) performs well for people and cars in my deployment. BI and the various “AI” backends are trying to report more detailed matches than Dahua. Even if their machine learning models were similar, it would be hard to compare, since BI is exposing so much more info to the user.

And, to go back to your original point, alerts for human consumption need not be perfect detections; they just need to "raise the alarm," so to speak, for potential review.
 