Cameraguy
I am getting a number of false positives with animals: it mostly recognizes shadows and sun spots as various animals, or keeps saying the bunny that runs into our yard is a squirrel. Yesterday it said a deer was in our yard, but it was a bobcat! I have upped the confidence percentage from 50% to 60%, and I will up it again since I am still getting some at 61% lol. The misidentifications are not a big deal since it still alerts me on the important detections; the shadow and sun-spot triggers are a little annoying.
I am wondering if BI could add a feature so that when users review and find a false recognition, BI provides a simple UI to label the object in question or mark it as false, and upload the capture plus the label anonymously. All of the contributed picture samples could then be collected to train better custom models. It would have to be opt-in, for sure. I think BI is in a unique position, with its AI integration and its IP cam enthusiast user base, to do this and make BI even better.
And of course we have @MikeLud1, who is doing the very thing you suggested.
And of course the problem with too much data in a system like this is that it could start producing even more false positives. If these platforms get junk field-of-view pictures from people who are trying to do too much with one field of view, it will significantly increase the number of false alerts.
I do not believe more labeled pictures would make more false positives. More properly labeled pictures, specifically pictures taken from the high-mounted, wide-angle IP cameras we all have, will make for better custom models for IP cameras. The pictures used to train the current custom models are mostly from cameras at eye level, shooting straight on at the subjects.
If you just consume everything that is submitted indiscriminately, of course it would be garbage in, garbage out. Just like pull requests on GitHub, we wouldn't take them as they come. If we had a pool of labeled pictures contributed by users, we would have trusted volunteers like yourself and other long-timers here review them before they go into the training pool.
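To make "review before training" concrete, here is a rough sketch of such a gate. Everything here is hypothetical (the folder names and the approved-IDs file are not anything BI or DeepStack provides); it just assumes YOLO-style labels, i.e. one .txt per image containing `class x_center y_center width height` normalized to 0-1:

```python
# Hypothetical "review before training" gate: only reviewer-approved,
# sanity-checked image/label pairs get promoted into the training pool.
import csv
import shutil
from pathlib import Path

SUBMISSIONS = Path("submissions")      # user-contributed images + YOLO labels
TRAIN_POOL = Path("training_pool")     # only approved pairs land here
APPROVED_LIST = Path("approved.csv")   # one submission ID per row, written by reviewers

def load_approved_ids(path: Path) -> set[str]:
    with path.open(newline="") as f:
        return {row[0].strip() for row in csv.reader(f) if row}

def label_is_sane(label_file: Path) -> bool:
    """Reject obviously malformed label files before they reach the pool."""
    for line in label_file.read_text().splitlines():
        parts = line.split()
        if len(parts) != 5:
            return False
        cls, *coords = parts
        if not cls.isdigit():
            return False
        if not all(0.0 <= float(c) <= 1.0 for c in coords):
            return False
    return True

def promote_approved() -> None:
    approved = load_approved_ids(APPROVED_LIST)
    TRAIN_POOL.mkdir(exist_ok=True)
    for image in SUBMISSIONS.glob("*.jpg"):
        label = image.with_suffix(".txt")
        if image.stem in approved and label.exists() and label_is_sane(label):
            shutil.copy2(image, TRAIN_POOL / image.name)
            shutil.copy2(label, TRAIN_POOL / label.name)

if __name__ == "__main__":
    promote_approved()
```

The point is just that nothing reaches the training pool until a human has signed off on it and the label file at least parses.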
I am sure there are not enough West Highland terriers in the combined model, but I'm happy with catdog LOL
Hey, is that directed towards my turkeys??? Hahaha

You have been around here long enough to know that people would submit total junk field-of-view images, trying to IDENTIFY a freaking turkey from 120 feet away with a 2.8mm camera LOL
There is a thing in coding called GARBAGE IN = GARBAGE OUT, and it would certainly apply in this instance. If unrealistic images are sent to the model for training, it will result in more false positives.
Maybe if the images were within the DORI Recognize and Identify distances, but we both know people would submit stuff far beyond the capabilities of the camera.
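For reference, DORI (IEC 62676-4) is basically a pixel-density budget: roughly 25 px/m to Detect, 63 to Observe, 125 to Recognize, and 250 to Identify. A quick back-of-the-envelope check, using assumed (not measured) numbers for a 4MP 2.8mm cam, shows why the 120 ft turkey is hopeless:

```python
# Rough DORI check (a sketch, not a spec): pixel density on target at a given
# distance, compared against the usual IEC 62676-4 thresholds. The 2.8mm
# figures below (2688 px wide, ~105 degree horizontal FOV) are assumptions
# typical of a 4MP/2.8mm cam, not measurements of any particular model.
import math

DORI_PX_PER_M = {"Detect": 25, "Observe": 63, "Recognize": 125, "Identify": 250}

def pixels_per_meter(h_resolution_px: float, hfov_deg: float, distance_m: float) -> float:
    """Horizontal pixels landing on each meter of the scene at that distance."""
    scene_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return h_resolution_px / scene_width_m

if __name__ == "__main__":
    distance_m = 120 * 0.3048           # the 120 ft turkey
    ppm = pixels_per_meter(2688, 105, distance_m)
    print(f"{ppm:.0f} px/m at {distance_m:.1f} m")
    for level, needed in DORI_PX_PER_M.items():
        print(f"  {level:<9} needs {needed} px/m -> {'OK' if ppm >= needed else 'no'}")
```

With those assumptions it works out to roughly 28 px/m at 120 ft: barely into Detect territory, an order of magnitude short of the ~250 px/m that Identify calls for.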
Those of us with ideal fields of view, with cameras at basically eye level, could certainly see a compromised model and more false positives from people submitting 2.8mm field-of-view images shot from 2nd-story locations.
Many of us here often comment on posts where someone complains that the camera isn't triggering, etc. and it is pointed out they are trying to do too much with one camera and/or they have mounted it too high.
Here is one such example, an image from another post where someone was upset because they thought the 5442 was a better camera and that they should be able to identify that this animal is a chipmunk:
[Attached image from that post: a distant, tiny animal in a wide 2.8mm field of view]
Now if all of a sudden this were labeled as a chipmunk and put into the model to train, then almost any animal on four legs, cat size or smaller, would likely get labeled as a chipmunk....

Using images from a camera mounted too high up will result in way more false positives for other users...
heh. I’ve got a Boston terrier and it picks it up as a “person” lol
My poor mailman has been called a "DOG" by DS Custom forever. Shhhhhh, I won't tell if you don't.

I wouldn't be surprised if the dog actually believes they are. I'm sure there would be someone out there willing to train a custom model that recognises their pets as "the fur babies". Deepstack nearly started a fight when it recognised my wife as a "horse".
Back to serious business now. Does anyone have a tutorial they could recommend for a beginner looking to train a custom AI model?
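Not a full tutorial, but the usual route for DeepStack custom models (like the ones discussed in this thread) is YOLOv5: you train with the YOLOv5 repo and drop the resulting .pt file into DeepStack's custom model folder. A minimal sketch of the dataset side, with placeholder paths and class names:

```python
# A minimal sketch of the dataset setup for training a DeepStack-compatible
# custom model with YOLOv5. Paths and class names are placeholders; see the
# YOLOv5 and DeepStack custom-model docs for the authoritative walkthrough.
from pathlib import Path

import yaml  # pip install pyyaml

# Expected layout (YOLO format):
#   my_dataset/images/train/*.jpg   my_dataset/labels/train/*.txt
#   my_dataset/images/val/*.jpg     my_dataset/labels/val/*.txt
DATASET_ROOT = Path("my_dataset")

dataset_config = {
    "path": str(DATASET_ROOT.resolve()),
    "train": "images/train",
    "val": "images/val",
    "nc": 4,                                        # number of classes
    "names": ["dog", "cat", "squirrel", "rabbit"],  # your classes, in order
}

Path("dataset.yaml").write_text(yaml.safe_dump(dataset_config))

# Then, from a clone of https://github.com/ultralytics/yolov5 :
#   python train.py --img 640 --batch 16 --epochs 100 \
#       --data dataset.yaml --weights yolov5s.pt --name my_custom_model
# The resulting runs/train/my_custom_model/weights/best.pt is the file you
# rename and place in DeepStack's custom model folder.
```

Getting the labels right (and representative of your actual camera views, per the discussion above) matters far more than the training flags.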
I'm running last fall's version of Deepstack (CPU) with the last stable version of Blue Iris on a 2nd gen i7 with 8GB RAM with no issues, and have verified that the default Deepstack model is not running. What versions are you using?

So I have been using an Nvidia GT 1030 for Deepstack with the "general" model for a few weeks and it has been great. Fast and accurate, but very memory hungry, using ~95% of the card's 2GB.
Yesterday, I swapped the 1030 for a new Nvidia T400 - also 2GB memory - and now deepstack refuses to start and throws an "out of memory" exception. I can get deepstack to work but only if I remove the "general" model and go back to using just the default one. With just the default model, deepstack uses 75% of the T400 memory. I suspect that deepstack always loads the default model regardless of whether it is used and loading an additional custom model puts the T400 just over the memory limit (even though the 1030 could handle it). I have tried the MODE options of High/Medium/Low to see if that would help but without success.
Anyone had memory issues when using these custom models? Or know if you can get DeepStack to use a custom model without also loading the default one?
If you are not using DeepStack's default model you can uncheck Default object detection and the default model will not load. You need to stop then restart DeepStack for the change to take effect.
He's running DS in a Docker container, probably on another machine. I think he still needs to shut down the default objects in that instance; BI may not look for default objects, but the Docker instance won't know that.
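For the Docker case, as I understand the DeepStack docs (so treat this as a sketch, not gospel), the way to run custom-model-only is to mount the model folder at /modelstore/detection and simply not pass VISION-DETECTION=True, so the default object model never loads; the custom model is then served at /v1/vision/custom/<name>. A small client-side example with placeholder host, port, and model name:

```python
# A hedged sketch of hitting only a DeepStack custom model endpoint, assuming
# the container was started with the custom model folder mounted at
# /modelstore/detection and WITHOUT VISION-DETECTION=True (so the default
# object model never loads). Host, port, and "general" are placeholders.
import requests  # pip install requests

DEEPSTACK_URL = "http://192.168.1.50:5000"   # your Docker host and mapped port
CUSTOM_MODEL = "general"                     # filename of the .pt, minus extension

def detect_custom(image_path: str, min_confidence: float = 0.6) -> list[dict]:
    """Send one image to the custom model endpoint and return its predictions."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{DEEPSTACK_URL}/v1/vision/custom/{CUSTOM_MODEL}",
            files={"image": f},
            data={"min_confidence": str(min_confidence)},
        )
    resp.raise_for_status()
    return resp.json().get("predictions", [])

if __name__ == "__main__":
    for p in detect_custom("driveway_snapshot.jpg"):
        print(p["label"], round(p["confidence"], 2))
```

That at least separates the two questions: what Blue Iris asks for, and what the container actually loads into the 2GB of VRAM.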