IP Cam Talk Custom Community DeepStack Model

Futaba

Pulling my weight
Joined
Nov 13, 2015
Messages
199
Reaction score
105
I am getting a number of false positives with animals: it mostly recognizes shadows and sun spots as various animals, or keeps saying the bunny that runs into our yard is a squirrel. Yesterday it said a deer was in our yard, but it was a bobcat! I have upped the confidence percentage from 50% to 60%, and I will up it again as I am getting some at 61% lol. The latter is not a big deal as it does alert me to important recognitions. The former are a little annoying :D
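The thresholding itself is just a filter over the returned detections. A minimal sketch: the dict layout below loosely follows the shape of DeepStack's JSON predictions, but the field names and sample values here are illustrative, not taken from any actual response.

```python
# Keep only detections at or above a minimum confidence.
# Dict layout loosely mimics DeepStack-style predictions; values are made up.

def filter_predictions(predictions, min_confidence=0.60):
    """Drop any detection below the confidence threshold."""
    return [p for p in predictions if p["confidence"] >= min_confidence]

detections = [
    {"label": "squirrel", "confidence": 0.61},
    {"label": "deer", "confidence": 0.55},
    {"label": "person", "confidence": 0.92},
]

kept = filter_predictions(detections, min_confidence=0.60)
print([d["label"] for d in kept])  # -> ['squirrel', 'person']
```

Raising `min_confidence` from 0.50 to 0.60 is exactly this: the 55% "deer" disappears, but a 61% "squirrel" still squeaks through.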

I am wondering if BI could add a feature where, when users review and find a false positive recognition, BI provides some UI for users to easily label the object in question or mark it as a false positive, and to upload the capture plus the labeling anonymously. Then we could collect all the contributed picture samples to train better custom models. It would have to be opt-in for sure. I think BI is in a unique position, with its AI integration and IP cam enthusiasts, to do this and make BI even better.
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
13,450
Reaction score
25,083
Location
USA
I am getting a number of false positives with animals: it mostly recognizes shadows and sun spots as various animals, or keeps saying the bunny that runs into our yard is a squirrel. Yesterday it said a deer was in our yard, but it was a bobcat! I have upped the confidence percentage from 50% to 60%, and I will up it again as I am getting some at 61% lol. The latter is not a big deal as it does alert me to important recognitions. The former are a little annoying :D

I am wondering if BI could add a feature where, when users review and find a false positive recognition, BI provides some UI for users to easily label the object in question or mark it as a false positive, and to upload the capture plus the labeling anonymously. Then we could collect all the contributed picture samples to train better custom models. It would have to be opt-in for sure. I think BI is in a unique position, with its AI integration and IP cam enthusiasts, to do this and make BI even better.
It is a great idea, but not something I think BI would undertake. That would be more in DeepStack's or SenseAI's court, as they are the ones developing the model and AI; BI simply plugs their program in.

And of course we have @MikeLud1, who is doing the very thing you suggested.

And of course the problem with too much data in a system like this is that it could then start producing even more false positives. If these platforms get junk field-of-view pictures from people who are trying to do too much with a field of view, it will significantly increase the number of false alerts.

It is always best to take your own field of view and create your own model if the available or custom models are not working for a particular field of view.
 

Futaba

Pulling my weight
Joined
Nov 13, 2015
Messages
199
Reaction score
105
And of course we have @MikeLud1, who is doing the very thing you suggested.
The pictures and labeling collected would be specifically for Mike to train better models.

And of course the problem with too much data in a system like this is that it could then start producing even more false positives. If these platforms get junk field-of-view pictures from people who are trying to do too much with a field of view, it will significantly increase the number of false alerts.
I do not believe more labeled pictures would make more false positives. More properly labeled pictures, specifically pictures taken from the high-mounted, wide-angle IP cameras we all have, will make for better custom models for IP cameras. The pictures used to train the current custom models are mostly from cameras at eye level, straight on to the subjects.
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
13,450
Reaction score
25,083
Location
USA
I do not believe more labeled pictures would make more false positives. More properly labeled pictures, specifically pictures taken from the high-mounted, wide-angle IP cameras we all have, will make for better custom models for IP cameras. The pictures used to train the current custom models are mostly from cameras at eye level, straight on to the subjects.
You have been around here long enough to know that people would submit total junk field-of-view images, trying to IDENTIFY a freaking turkey from 120 feet away with a 2.8mm camera LOL

There is a thing in coding called GARBAGE IN = GARBAGE OUT, and it would certainly apply in this instance. If unrealistic images are sent to the model to be trained, it will result in more false positives.

Maybe if the images are within the DORI Recognize and Identify distances, but we both know people would submit stuff that is far beyond the capabilities of the camera.

Those of us with ideal fields of view, with cameras at basically eye level, could certainly see a compromise and more false positives from people submitting 2.8mm field-of-view images from 2nd-story locations.

Many of us here often comment on posts where someone complains that the camera isn't triggering, etc., and it is pointed out that they are trying to do too much with one camera and/or have mounted it too high.


Here is one such example, an image from another post where someone was upset because they thought the 5442 was a better camera and that they should be able to identify that this animal is a chipmunk:


1656217033341.png

Now if all of a sudden this was labeled as a chipmunk and put into the model to train, then almost any four-legged animal cat-sized or smaller would likely be labeled as a chipmunk....

Or this one, a 2.8mm mounted too high, where the poster wants AI to pick up the person walking in the red shirt... if this were included in the training model, it would certainly give a lot of false positives...

1656349131929.png

Using images from a camera mounted too high will result in way more false positives for other users...
 

Futaba

Pulling my weight
Joined
Nov 13, 2015
Messages
199
Reaction score
105
If you just consume everything that is submitted indiscriminately, of course it would be garbage in, garbage out. Just like pull requests on GitHub, we wouldn't take them as they come. If we do have a pool of labeled pictures contributed by users, we would have trusted volunteers like yourself and other long-timers here review them before we put them into the training pool.
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
13,450
Reaction score
25,083
Location
USA
If you just consume everything that is submitted indiscriminately, of course it would be garbage in, garbage out. Just like pull requests on GitHub, we wouldn't take them as they come. If we do have a pool of labeled pictures contributed by users, we would have trusted volunteers like yourself and other long-timers here review them before we put them into the training pool.
And that is the key: someone vetting the images properly.

But while AI is good, there are so many variables that come into play, and field of view and depth of that field of view are key components. I think that is why animal identification is so difficult for these models: a deer in your field of view may look a lot different from a deer in my field of view, depending on the height of the camera and the distance to the deer.

Personally, if it were mission critical or I wanted a higher percentage to be correct, I would train a model on just my field of view, so I am not depending on other fields of view that are not similar to mine and could cause false triggers or missed triggers.

But I do agree that it would be nice to have a mechanism to submit photos, which someone could then use to decide whether they are worthy of being part of the training model.
 

Cameraguy

Known around here
Joined
Feb 15, 2017
Messages
1,405
Reaction score
1,019
You have been around here long enough to know that people would submit total junk field-of-view images, trying to IDENTIFY a freaking turkey from 120 feet away with a 2.8mm camera LOL

There is a thing in coding called GARBAGE IN = GARBAGE OUT, and it would certainly apply in this instance. If unrealistic images are sent to the model to be trained, it will result in more false positives.

Maybe if the images are within the DORI Recognize and Identify distances, but we both know people would submit stuff that is far beyond the capabilities of the camera.

Those of us with ideal fields of view, with cameras at basically eye level, could certainly see a compromise and more false positives from people submitting 2.8mm field-of-view images from 2nd-story locations.

Many of us here often comment on posts where someone complains that the camera isn't triggering, etc., and it is pointed out that they are trying to do too much with one camera and/or have mounted it too high.


Here is one such example, an image from another post where someone was upset because they thought the 5442 was a better camera and that they should be able to identify that this animal is a chipmunk:


View attachment 131806

Now if all of a sudden this was labeled as a chipmunk and put into the model to train, then almost any four-legged animal cat-sized or smaller would likely be labeled as a chipmunk....

Using images from a camera mounted too high will result in way more false positives for other users...
Hey is that directed towards my turkeys??? Hahaha
 

SyconsciousAu

Getting comfortable
Joined
Sep 13, 2015
Messages
872
Reaction score
821
heh. I’ve got a Boston terrier and it picks it up as a “person” lol
I wouldn't be surprised if the dog actually believes it is one. I'm sure there would be someone out there willing to train a custom model that recognises their pets as "the fur babies". Deepstack nearly started a fight when it recognised my wife as a "horse".

Back to serious business now. Does anyone have a tutorial they could recommend for a beginner looking to train a custom AI model?
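For context while you hunt for a tutorial: DeepStack's custom-model trainer is based on YOLOv5, so each training image is paired with a plain-text label file containing one line per object, `<class-id> <x-center> <y-center> <width> <height>`, with coordinates normalized to the image size. A small sketch of that conversion (the function name and sample numbers are illustrative, not part of any DeepStack tooling):

```python
# Convert a pixel-space bounding box (x_min, y_min, x_max, y_max) into a
# YOLO-format label line, as used by YOLOv5-style trainers such as
# DeepStack's custom-model trainer. All values are normalized to [0, 1].

def to_yolo_label(class_id, box, img_w, img_h):
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2 / img_w   # box center, as fraction of width
    y_c = (y_min + y_max) / 2 / img_h   # box center, as fraction of height
    w = (x_max - x_min) / img_w         # box width, normalized
    h = (y_max - y_min) / img_h         # box height, normalized
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# A 100x50 box centered in a 1920x1080 frame:
print(to_yolo_label(0, (910, 515, 1010, 565), 1920, 1080))
# -> 0 0.500000 0.500000 0.052083 0.046296
```

One such `.txt` file sits next to each image, and a `classes` list maps the integer IDs back to label names.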
 

kc8tmv

Getting the hang of it
Joined
May 27, 2017
Messages
163
Reaction score
82
Location
Cincinnati, Ohio
I wouldn't be surprised if the dog actually believes it is one. I'm sure there would be someone out there willing to train a custom model that recognises their pets as "the fur babies". Deepstack nearly started a fight when it recognised my wife as a "horse".

Back to serious business now. Does anyone have a tutorial they could recommend for a beginner looking to train a custom AI model?
My poor mailman has been called a "DOG" by DS Custom forever. Shhhhhh, I won't tell if you don't.
 

PeteB

n3wb
Joined
Sep 11, 2016
Messages
3
Reaction score
4
So I have been using an Nvidia GT 1030 for DeepStack with the "general" model for a few weeks and it has been great: fast and accurate, but very memory hungry, using ~95% of the card's 2GB.

Yesterday I swapped the 1030 for a new Nvidia T400 (also 2GB of memory) and now DeepStack refuses to start, throwing an "out of memory" exception. I can get DeepStack to work, but only if I remove the "general" model and go back to using just the default one. With just the default model, DeepStack uses 75% of the T400's memory. I suspect that DeepStack always loads the default model regardless of whether it is used, and loading an additional custom model puts the T400 just over the memory limit (even though the 1030 could handle it). I have tried the MODE options of High/Medium/Low to see if that would help, but without success.

Has anyone had memory issues when using these custom models? Or does anyone know how to get DeepStack to use a custom model without also loading the default one?
 

Swampledge

Pulling my weight
Joined
Apr 9, 2021
Messages
101
Reaction score
171
Location
Connecticut
So I have been using an Nvidia GT 1030 for DeepStack with the "general" model for a few weeks and it has been great: fast and accurate, but very memory hungry, using ~95% of the card's 2GB.

Yesterday I swapped the 1030 for a new Nvidia T400 (also 2GB of memory) and now DeepStack refuses to start, throwing an "out of memory" exception. I can get DeepStack to work, but only if I remove the "general" model and go back to using just the default one. With just the default model, DeepStack uses 75% of the T400's memory. I suspect that DeepStack always loads the default model regardless of whether it is used, and loading an additional custom model puts the T400 just over the memory limit (even though the 1030 could handle it). I have tried the MODE options of High/Medium/Low to see if that would help, but without success.

Has anyone had memory issues when using these custom models? Or does anyone know how to get DeepStack to use a custom model without also loading the default one?
I'm running last fall's version of DeepStack (CPU) with the last stable version of Blue Iris on a 2nd-gen i7 with 8GB of RAM with no issues, and have verified that the default DeepStack model is not running. What versions are you using?
 

PeteB

n3wb
Joined
Sep 11, 2016
Messages
3
Reaction score
4
The latest docker version of DeepStack GPU, as well as the latest BI (5.5.9.3).

OK, so I think I have worked out what is going on -> User error!

I was running the default docker run command (i.e. docker run -e VISION-DETECTION=True -v /mnt/user/appdata/deepstack-gpu:/datastore -v /mnt/user/appdata/deepstack-gpu/detection/:/modelstore/detection -p 80:5000 deepquestai/deepstack-gpu). This results in the default model and all of your custom models being loaded.

However, if you set "VISION-DETECTION=False", then the default model is not loaded. To be honest, I had completely misunderstood the VISION-DETECTION variable: I had assumed it was some sort of global enable flag rather than specific to the default model. But thankfully, setting it to False does indeed skip loading the default model and allows the custom "general" model from here to be used on a T400.

Hopefully this is helpful for anyone else who runs up against this problem.
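For reference, the adjusted command from the post above would look like this (the volume paths are specific to that unRAID setup and will differ per host):

```shell
# Same as the original docker run, but with VISION-DETECTION=False so the
# default model is skipped and only the custom models mounted into
# /modelstore/detection are loaded.
docker run \
  -e VISION-DETECTION=False \
  -v /mnt/user/appdata/deepstack-gpu:/datastore \
  -v /mnt/user/appdata/deepstack-gpu/detection/:/modelstore/detection \
  -p 80:5000 \
  deepquestai/deepstack-gpu
```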
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
1,057
Reaction score
2,253
Location
Brooklyn, NY
So I have been using an Nvidia GT 1030 for Deepstack with the "general" model for a few weeks and it has been great. Fast and accurate but very memory hungry using ~95% of the card's 2GB.

Yesterday, I swapped the 1030 for a new Nvidia T400 - also 2GB memory - and now deepstack refuses to start and throws an "out of memory" exception. I can get deepstack to work but only if I remove the "general" model and go back to using just the default one. With just the default model, deepstack uses 75% of the T400 memory. I suspect that deepstack always loads the default model regardless of whether it is used and loading an additional custom model puts the T400 just over the memory limit (even though the 1030 could handle it). I have tried the MODE options of High/Medium/Low to see if that would help but without success.

Anyone had memory issues when using these custom models ? Or know if you can get deepstack to use a custom model without also loading the default one?
If you are not using DeepStack's default model, you can uncheck Default object detection and the default model will not load. You need to stop and then restart DeepStack for the change to take effect.

1656692416974.png
 

PeteB

n3wb
Joined
Sep 11, 2016
Messages
3
Reaction score
4
He's running DS in a docker container, probably on another machine. I think he still needs to shut down the default objects in that instance. BI may not look for default objects, but the docker instance won't know that.
Yep, that is what caught me out. It took a while to work out the interaction between the BI settings and the docker flags (i.e. what the VISION-DETECTION flag was really all about). Thanks to all who responded.
 

Futaba

Pulling my weight
Joined
Nov 13, 2015
Messages
199
Reaction score
105
Seriously, DeepStack's barrier to entry is so high that it makes sense for BI to switch over to the easier-to-install CodeProject AI.
 