IP Cam Talk Custom Community DeepStack Model

Should I run DS in medium or high mode? I assume the accuracy is better in high mode? The time listed on DS goes from 500ms to 1000ms, but I'm not sure that matters functionally?
 
I have found mine is just as accurate in low mode and is faster. YMMV
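For anyone running the Docker build, the mode is selected at container start with the MODE environment variable (High / Medium / Low), the same switch used in the docker run command later in this thread. A minimal sketch, assuming the standard CPU image:

```shell
# Detection mode is chosen via the MODE env var: High, Medium, or Low.
# Lower modes trade some accuracy for faster inference.
docker run -d \
  -e VISION-DETECTION=True \
  -e MODE=Low \
  -p 80:5000 \
  deepquestai/deepstack
```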

OK, I added the objects:0 etc., but it's still not detecting anything except people.
The Blue Iris log shows that DeepStack cancelled the alert in about 17ms (even though the DeepStack log shows it taking 500ms).
Here is a sample .DAT file.
 

@MikeLud1 - Great work on this! However, the latest "combined" doesn't seem to get birds or cats. In the attached images, DeepStack got a Bird at 63% and a Cat at 58%, but "combined" doesn't see anything.
I confirmed it does get cars and people at nearly the same detection percentage as DeepStack, though maybe a little lower.
I tested on medium and high; same result.
I'm using the latest DeepStack Windows GPU version.
 

Attachments

  • LOREXBACK.20211226_112629187.jpg (2.5 MB)
  • sunroom.20211226_132655646_AITOOLTEST_53732.605424.jpg (386.2 KB)

Strange, it thinks my dog is usually a bird, and occasionally a cat, with dog in third place. To be fair, she does kind of look like a bird or a cat in the burned screenshots. LOL. I am pretty sure I am running the latest combined model, but I will go back and make sure. I've been meaning to send in a few pics of her to be added to the next version of animal/combined.

Attachments

  • Backyard_1.20211226_120000.239115.4099-65.16564.18696.jpg
  • Backyard_1.20211226_120000.1331260.4099-65.10642.12692.jpg
 
I am hoping to issue a new combined, general, and animal by Friday. Hopefully the next version will have better results.
 
Send me some images and I will add them to the next version.
 
Will do. I will have to go back and reread the label-creation part of this thread; I think my ADHD was really bad the day I originally read your instructions. I will try to get them labeled and ready to send to you by Monday or Tuesday. How many images should I shoot for? How many would you need to make a difference in training the models?
 
How should one configure BlueIris to call the "general" custom model when deepstack is run from Docker/Jetson Nano?

I tried (in the camera's "Trigger/Artificial Intelligence" form):
  • "objects:0,general" or "objects:0,custom/general": deepstack does not get called (confirmed with wireshark)
  • "general" : default detection gets called (/v1/vision/detection)
The general model runs fine when I call it directly (using /v1/vision/custom/general instead of /v1/vision/detection), so there is no problem on the deepstack side.

This is how I load DeepStack:
I put general.pt in ~/aimodels/
and started deepstack with:
sudo docker run -d --log-driver syslog --runtime nvidia --name deepstack --restart unless-stopped -e VISION-DETECTION=True -e MODE=High -v /home/myuser/aimodels:/modelstore/detection -p 80:5000 deepquestai/deepstack:jetpack-2021.09.1

I cannot run the combined model, as I found that loading that model provokes an out of memory on my 4GB Jetson Nano.

BI: 5.5.3.7 on Win10
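Since the custom endpoint works when called directly, a quick sanity check from any machine on the network can rule out the model side entirely before digging into the Blue Iris config. A sketch, assuming the port 80 mapping from the docker run above and a local test image named test.jpg (hypothetical filename):

```shell
# POST a test image to the custom "general" model endpoint.
# A successful response is JSON containing a "predictions" array.
curl -s -X POST \
  -F "image=@test.jpg" \
  http://localhost:80/v1/vision/custom/general
```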
 
I noticed this training set is built on the COCO image files? I wonder if it would be more accurate if people contributed CCTV images instead, since CCTV cameras often have odd angles, focal lengths, and color (i.e. night vision)?

Response times are good, 120ms or so with an Nvidia T400, though times sometimes spike.
 
I would ask that you keep "mouse". One of the things that I use BI for is to try to find out how mice are getting in my motorhome. I can imagine that a lot of people would use BI for these types of things.
 
The mouse is not that type of mouse; it is a computer mouse.
 
Hey,

are you otherwise satisfied with the Jetson Nano in conjunction with DeepStack? What are your response times?

Thanks and best regards
Jan
 
Response times are between 150 and 300 ms on 2-megapixel images; it is silent and low power, and recognition quality (false positives/negatives) is not great but acceptable for my basic use case: people detection.
My driver for using it: I wanted to free up CPU for other work, and it did that.
So it does the job, but I would like to see something better if possible. I'd like to put my Coral TPUs to work on this; DeepStack + Jetson was just easier to set up.
 
Awesome work!
I did a bit of research and couldn't find much on this, so if one were running DeepStack in Docker, how would you point DS to this community model?
I'm assuming I'd have to copy the .pt to the Docker host, then add an env variable for it?
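Based on the Jetson example earlier in this thread, no extra env variable should be needed for the model itself: custom .pt files are mounted into the container at /modelstore/detection and then served under /v1/vision/custom/<name>. A sketch, assuming the model sits in ~/aimodels on the Docker host (the :latest image tag is a placeholder; use the one matching your hardware):

```shell
# Mount the host folder containing general.pt into the container's
# model store; DeepStack then serves it at /v1/vision/custom/general.
mkdir -p ~/aimodels
cp general.pt ~/aimodels/
docker run -d \
  --name deepstack --restart unless-stopped \
  -e VISION-DETECTION=True \
  -v ~/aimodels:/modelstore/detection \
  -p 80:5000 \
  deepquestai/deepstack:latest
```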