This is my biggest complaint about CPAI: usually it's a cat or dog marked as a bear, or rain marked as a person. But it does significantly cut down on false alerts when NOTHING is there. You get what you pay for, and since CPAI is free, it is what it is.
Here’s my take on this. To clear things up, I think this is the biggest misconception about these AI tools like CPAI: people think that because it’s marking something wrong, it’s the tool’s fault, when in reality it’s just an engine. It uses YOLO models by Ultralytics. If you’re using MikeLud’s modified ones, I believe those were originally based on DeepStack (I could be wrong) and were modified to only include the tags relevant to surveillance systems.
So when things get marked wrong, like rain being detected as a person or a dog as a bear, it’s due to the models being used, which are mainly YOLO. Those models generally aren’t trained on images from surveillance systems; they’re trained on more general images, like the ones in the COCO dataset. So unfortunately they will mark things wrong, because based on the model, something like a shadow or a reflection can look like one of the classes it knows.
The only way to solve that is for someone to take the time to build a dataset of thousands of pictures using only images from surveillance cameras, and train a model on it. The angles and fields of view of those cameras affect this a good deal. Creating such a dataset would take a good bit of time for sure. First you have to curate images from various systems to get a broad spectrum of them. Then make sure you have hundreds if not thousands of them for each class (like person, car, truck, etc.). Then obviously mark/tag each one to show, for example, what is a person, and also include examples of what is not a person to help the accuracy.
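To give a rough idea of what the mark/tag step looks like in practice, here’s a minimal sketch of the plain-text label format YOLO-style tools typically use: one `.txt` file per image, one line per object, with normalized coordinates. The class list and numbers here are just assumptions for illustration, not anything specific to CPAI.

```python
# A sketch of reading one YOLO-format label line:
#   <class_id> <x_center> <y_center> <width> <height>
# All four coordinates are normalized to the 0..1 range relative to image size.

# Hypothetical class list for a surveillance-focused dataset.
CLASSES = ["person", "car", "truck"]

def parse_label_line(line: str, img_w: int, img_h: int):
    """Turn one YOLO label line into a class name and a pixel bounding box."""
    fields = line.split()
    cls_id = int(fields[0])
    xc, yc, w, h = (float(v) for v in fields[1:])
    # Convert normalized center/size back to pixel corner coordinates.
    x1 = round((xc - w / 2) * img_w)
    y1 = round((yc - h / 2) * img_h)
    x2 = round((xc + w / 2) * img_w)
    y2 = round((yc + h / 2) * img_h)
    return CLASSES[cls_id], (x1, y1, x2, y2)

# Example: a "person" box centered in a 1920x1080 frame.
print(parse_label_line("0 0.5 0.5 0.1 0.3", 1920, 1080))
# -> ('person', (864, 378, 1056, 702))
```

Labeling thousands of images in this format (plus negative examples with no objects at all) is the bulk of the work in building a dataset like this.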
Right now the majority of AI engines are using models like YOLO or DETR.
I’m no expert in this but this is what I’ve gathered. Feel free to correct me where I’m wrong.