DeepStack Case Study: Performance from CPU to GPU version

That's not a BI thing; it's what DS needs to see. I think it downsizes to 720 or 1080, more than likely the 720 level. Faulty old memory, so I can't say for certain.
It downsizes to 640 if DeepStack is set to High, 416 if set to Medium, and 256 if set to Low.

DeepStack Code
Code:
PROFILE_SETTINGS = {
    "desktop_cpu": Settings(
        DETECTION_HIGH=640,    # input resolution when MODE is High
        DETECTION_MEDIUM=416,  # ...Medium
        DETECTION_LOW=256,     # ...Low
        DETECTION_MODEL="yolov5m.pt",
        FACE_HIGH=416,
        FACE_MEDIUM=320,
        FACE_LOW=256,
        FACE_MODEL="face.pt",
        SUPERRESOLUTION_MODEL="bebygan_x4.pth",
    ),
    # The GPU profile uses the same resolutions and models as the CPU profile.
    "desktop_gpu": Settings(
        DETECTION_HIGH=640,
        DETECTION_MEDIUM=416,
        DETECTION_LOW=256,
        DETECTION_MODEL="yolov5m.pt",
        FACE_HIGH=416,
        FACE_MEDIUM=320,
        FACE_LOW=256,
        FACE_MODEL="face.pt",
        SUPERRESOLUTION_MODEL="bebygan_x4.pth",
    ),
}
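For reference, those MODE tiers are applied server-side; a client just posts a frame to the detection endpoint over HTTP. A minimal sketch of such a call, assuming a DeepStack server already running on localhost port 80 and a local test.jpg (adjust both to your install):

Code:
# Minimal DeepStack object-detection request (endpoint per the DeepStack docs).
import requests

with open("test.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:80/v1/vision/detection",
        files={"image": f},
        data={"min_confidence": 0.4},  # optional threshold
    )

for pred in response.json()["predictions"]:
    print(pred["label"], round(pred["confidence"], 2),
          (pred["x_min"], pred["y_min"], pred["x_max"], pred["y_max"]))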
I have DS set to HIGH right now; maybe I should try Medium. I figure the extra resolution would help identify smaller animals, since my cameras are about 15 feet above ground.

Also side note, thank you guys for being so damn responsive.
 
The DeepStack default model is optimized for the High setting, as are my custom models. If you set it to Medium or Low, detection will not be as good.
 
I have mine on HIGH and get approximately 50ms on the GTX 970.
One custom model as well, which only runs from sunset to sunrise.
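If you want to compare your own numbers, a quick sketch that times the round trip against a running server (URL, port, and sample image are placeholders; the first request is sent untimed, since model warm-up skews it):

Code:
# Rough round-trip timing for DeepStack detection requests.
import time
import requests

URL = "http://localhost:80/v1/vision/detection"  # adjust to your server

with open("test.jpg", "rb") as f:
    image_bytes = f.read()

requests.post(URL, files={"image": image_bytes})  # warm-up, not timed

times = []
for _ in range(10):
    start = time.perf_counter()
    requests.post(URL, files={"image": image_bytes})
    times.append((time.perf_counter() - start) * 1000)

print(f"average {sum(times) / len(times):.0f} ms over {len(times)} requests")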
 
That's what I've observed. Oh well, it's not like my current performance is bad, but knowing it could be way better is grinding my gears.
 
The DeepStack default model is optimized for High setting, also my custom models. If you set it to Medium or Low the detection will not be that good.

Definitely one of those YMMV statements!

I run mine on Low and have been spot on. I ran a typical week with it on High and another week on Low, and compared comparable instances (people going to/from work, walking their dog, etc.); the only difference was that High had a longer response time.

Field of view, the objects you want detected, and object size within the field of view are certainly variables that can influence whether High, Medium, or Low needs to be run.
 
I'm using two models at night, dark and combined; detection times are <300ms. During the day I run combined only, and detection times are <75ms, frequently <50ms. Keep in mind I'm running DS on about a dozen cameras.
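Blue Iris can schedule which model runs when, but if you're scripting against DeepStack directly, the same day/night split is easy to sketch. A minimal example following the setup described above (combined always, dark added at night); the fixed hour boundaries are placeholders for real sunrise/sunset times:

Code:
# Choose which custom-model endpoints to query based on time of day:
# "combined" always, plus "dark" outside daylight hours.
from datetime import datetime

def models_for(now: datetime, sunrise_hour: int = 7, sunset_hour: int = 18) -> list[str]:
    models = ["combined"]
    if not (sunrise_hour <= now.hour < sunset_hour):
        models.append("dark")
    return models

for name in models_for(datetime.now()):
    print(f"http://localhost:80/v1/vision/custom/{name}")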
 
sebastiantombs also has the GTX 970. Just an FYI.

Darn, my main interest in the T400 was HEVC processing. I wanted to use it for converting my media library, but I ran into problems, so I figured why not use it for DS.
I'll definitely keep an eye out for a 970 or something similar that's low profile and not insanely expensive right now.


I'm using Amcrest IP5M-T1179EW-28MM cameras and they are about 15 feet above the ground. I've got a lot more testing and tuning to do it seems.
I'm also really curious about running custom models, are there any guides y'all recommend for that?
 
I also ditched the IR mode on my cameras, fixed them on full colour, and installed a few 10W mini floodlights (cold light, as this seems to work much better with the cameras) illuminating my gardens and driveway.

Detection at night is now spot on (probably better than day, as the light is constant without changes in sun angles and shade), with no false alerts, and it isn't affected by heavy rain or fog thanks to the 6000K+ lights.
 

Yeah, a 2.8mm camera 15 feet above the ground is too high and can definitely contribute to bad DeepStack detection.

To identify someone with the wide-angle 2.8mm lens that most people opt for (and that is popular in the box kits), they would have to be within 13 feet of the camera, realistically within 10 feet after you dial in your settings. Mounted 15 feet up, you have lost most of that distance in the vertical direction alone. These distances assume a camera at roughly 7 feet high. The higher the camera, the more of a "dot" a person appears, because the camera is looking down on the person instead of at the person.


[Attached image: 1642128622427.png]


People think DeepStack/AI is a cure-all, but garbage in = garbage out. A problematic field of view, or trying to do too much with one camera's field of view, will result in false DeepStack detections. Heck, sometimes even a perfect field of view gives funny DeepStack returns.
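To put rough numbers on those distances, the usual rule of thumb is pixels-per-foot (PPF) on the target, with something like 80+ PPF needed to identify a face. A back-of-the-envelope sketch; the ~105° horizontal field of view for a 2.8mm lens, the 2592-pixel width of a typical 5MP sensor, and the 80 PPF threshold are common approximations, not measured values:

Code:
# Back-of-the-envelope pixels-per-foot (PPF) at a given distance.
# PPF = horizontal resolution / scene width at that distance.
import math

def ppf(distance_ft: float, h_res_px: int = 2592, hfov_deg: float = 105.0) -> float:
    scene_width_ft = 2 * distance_ft * math.tan(math.radians(hfov_deg) / 2)
    return h_res_px / scene_width_ft

for d in (10, 13, 20, 30):
    print(f"{d:>2} ft: {ppf(d):5.1f} PPF")

# Output drops below ~80 PPF somewhere past 12-13 ft, which lines up with
# the identify distances quoted above.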

Custom models are simple - it is just a matter of finding one that meets your needs, putting it in the Deepstack\MyModels folder, and then going into each camera and telling it which model to run.

Like @Pentagano I have found that DeepStack works better with color images at night.
 

I'm definitely new to all this, even with a few years of having this setup. The 'n3wb' tag on my profile is apt lol
That's fantastic information though, and I appreciate it. It's going to help a lot in deciding my camera upgrade path.

I definitely don't expect DS to resolve those issues, but it does a pretty good job on detection during the day. Night time is a crap shoot, but I'm only worried about people on my driveway, which it seems to get pretty well. Having a blindingly white driveway seems to help.
It's the small animals that are an issue now, but I had a feeling that was due to the cameras more than DS.

I'll check out custom models. In terms of finding one that meets my needs, what do you mean exactly? I figured the difference between models would be the amount/accuracy of the training data and the specific objects the model can identify. Is there something else I should be looking for?
Also, what's the possibility of training my own model using data I've captured through BI? I figure giving the DS model data derived from the cameras would be ideal.
 

Custom models are all about a model designed for a specific purpose, to improve accuracy on the task at hand.

For example, there is a dark.pt model that was designed and trained just with night time images. Many of us have found that works better than the Deepstack default.

@MikeLud1 has created several custom models for this community - one to read plates, one that took out all the stuff in the default DeepStack model we didn't need (like toothbrush and toilet), one that added animals people were interested in, etc.

And yes, training your own model with your own images from your field of view is the best way to improve the accuracy.
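For what it's worth, once a custom model like dark.pt is in the model folder, DeepStack serves it at an endpoint named after the file. A minimal sketch (localhost server and sample image are placeholders for your own setup):

Code:
# Query a DeepStack custom model; the endpoint name matches the .pt filename
# (dark.pt -> /v1/vision/custom/dark), per the DeepStack custom-model docs.
import requests

with open("night_frame.jpg", "rb") as f:
    r = requests.post("http://localhost:80/v1/vision/custom/dark", files={"image": f})

for pred in r.json().get("predictions", []):
    print(pred["label"], round(pred["confidence"], 2))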
 
I created my own custom model from photos that had been taken by the cameras. It was for a specific cat that I had issues detecting in some areas of my garden. The actual model took ages to build. It was a bit complicated, and I didn't quite understand the steps initially from what was on the DeepStack site.
 

I need to find the time to do this, and the quality of the DeepStack wiki does not help. It's not the worst, but it could use some cleanup and clarification. Just getting the GPU version working on Docker was way more difficult than it needed to be.
 
My first two attempts were wrong, and each time I ran the build for my custom model it took about 12 hours for a small number of photos, using almost 100% of the CPU to train the model. It did in my case, anyway.
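For anyone attempting the same: DeepStack's trainer expects a YOLO-format dataset split into train and test folders, with each image paired to a same-named .txt label file. Since a bad layout may only surface hours into a CPU build, a small sanity check beforehand can save a wasted run. A sketch, with the folder layout per the DeepStack training docs and the check itself just my own illustration:

Code:
# Sanity-check a YOLO-format dataset before a long DeepStack training run.
# Assumed layout: my-dataset/train and my-dataset/test, each holding images
# plus a matching .txt annotation file per image.
from pathlib import Path

def check_split(split_dir: Path) -> None:
    images = sorted(split_dir.glob("*.jpg"))  # add other extensions if used
    missing = [img.name for img in images if not img.with_suffix(".txt").exists()]
    print(f"{split_dir.name}: {len(images)} images, {len(missing)} missing labels")
    for name in missing:
        print(f"  no label file for {name}")

for split in ("train", "test"):
    check_split(Path("my-dataset") / split)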
 
I must be doing something wrong; no idea where to start.
27 cameras x 4MP
430 MP/s

Dell R3930
i9-9900K CPU
with Quadro P2200

Still getting 100% CPU
and 30% GPU

Any ideas?

I did much better with cameras on Intel rather than Nvidia.




[Attached screenshot: task manager.JPG]

Attachments:
  • total mp.JPG
Are you using sub streams in Blue Iris? DeepStack doesn't much care whether you're using the main or sub stream, and the easiest way to drop CPU utilization is by using sub streams in Blue Iris.
 