Hell Yeah! Direct Deepstack Integration - 5.4.0 - March 31, 2021

What's the trick to get it to not recognize static objects each time? It keeps labeling my parked car, neighbors' cars, etc. any time there is movement.

Ideally, of course, it would just highlight the new object in the scene. I realize this feature exists; I'm just missing it.

Thank you!

I wasn't aware you could exclude static objects. I have the same issue, since my cameras face a city street (always cars lined up and parked), so I get a car classification 100% of the time.

Since DS analyzes static images, not movement, I'm not sure it's possible to exclude the parked cars.
 
I wasn't aware you could exclude static objects. I have the same issue, since my cameras face a city street (always cars lined up and parked), so I get a car classification 100% of the time.

Since DS analyzes static images, not movement, I'm not sure it's possible to exclude the parked cars.
BI automatically excludes static objects after the first detection if it sees they have not moved. This is stated in the help file.
 
I am using AiTool with DeepStack and I have a few suggestions that worked for me to improve object identification in DeepStack. First, some information about my hardware: the BI PC is an i7-6700 with 32GB RAM. My cameras are four IPC-T2431T-AS 3.6mm from EmpireTech. Below are my camera settings.

View attachment 86300

I am running 5 instances of the CPU version of DeepStack in Docker containers. Windows is configured with WSL2 for Docker. DeepStack runs with MODE=High, with processing times below.

View attachment 86304
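The multi-instance Docker setup described above might be sketched as a compose file like the one below. The image name, host ports, and service names are assumptions to adapt to your own setup (AiTool or BI would then be pointed at each port in turn):

```yaml
# docker-compose.yml — sketch of five CPU DeepStack instances
# (assumes the deepquestai/deepstack:cpu image; the container serves on port 5000)
version: "3"
services:
  deepstack1:
    image: deepquestai/deepstack:cpu
    environment:
      - VISION-DETECTION=True
      - MODE=High
    ports:
      - "8081:5000"
    restart: unless-stopped
  deepstack2:
    image: deepquestai/deepstack:cpu
    environment:
      - VISION-DETECTION=True
      - MODE=High
    ports:
      - "8082:5000"
    restart: unless-stopped
  # ...deepstack3 through deepstack5 follow the same pattern on ports 8083-8085
```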

Below is typical resource usage without any motion. Docker is the VMMEM process, and it is using most of the resources. The jpegs are sent to a 3GB RAM disk, which is part of the 37% memory usage below. With all 4 cameras triggered, CPU usage gets up to 70%.

View attachment 86301
I have BI configured using both the main and sub stream. A cloned camera detects motion and generates the jpeg; I trigger the clone master with my home automation via URL for event recording. The clone uses the sub stream to detect motion but generates the jpeg from the main stream. You accomplish jpeg generation from the main stream by enabling "Pre-trigger video buffer". I set the buffer to 2 seconds, but I could lower that closer to 1 second.

The jpegs are generated every 3 seconds with a break time of 6 seconds under the trigger settings, so 3 images are taken and sent to DeepStack for each event. During daylight I have the jpeg image quality set to 50%. My minimum confidence is set at 42% for a person; in the daytime, persons are identified with 70%-80% confidence. Occasionally my dog will be identified as a person in the daytime with a confidence as high as 40%, which is why the confidence level for a person is at 42%.

View attachment 86302
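The jpeg-to-DeepStack round trip described above can be sketched in a few lines. The port and image path below are placeholders for your own setup, and the 0.42 threshold mirrors the 42% minimum confidence mentioned; the `/v1/vision/detection` endpoint is DeepStack's standard object detection API:

```python
# Sketch: POST a jpeg to a DeepStack instance, then keep only predictions
# at or above a minimum confidence (42% for "person" in the setup above).
DEEPSTACK_URL = "http://localhost:8081/v1/vision/detection"  # placeholder port

def filter_detections(predictions, min_confidence=0.42):
    """Keep predictions whose confidence meets the threshold (0.0-1.0 scale)."""
    return [p for p in predictions if p["confidence"] >= min_confidence]

def detect(image_path, min_confidence=0.42):
    """Send one jpeg to DeepStack and return the filtered predictions."""
    import requests  # third-party; imported locally so the filter alone needs nothing
    with open(image_path, "rb") as f:
        r = requests.post(DEEPSTACK_URL, files={"image": f}, timeout=30)
    r.raise_for_status()
    return filter_detections(r.json().get("predictions", []), min_confidence)

# Usage (with a DeepStack instance running):
#   for p in detect("driveway.jpg"):
#       print(f'{p["label"]}: {p["confidence"]:.0%}')
```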

After dark the BI profile changes and I boost the jpeg quality to 100%. The images at night are black and white, so the jpeg size is actually smaller and processing takes about the same time at night. The home automation system is set differently at night: a push notification and text will not be sent out unless a person has been identified twice, by any combination of cameras, within 5 minutes. This is because DeepStack will identify the dog as a person with a high confidence level at night, but very rarely twice in the 5-minute interval. Using the main stream for jpeg generation, increasing the image quality at night, and running DeepStack in MODE=High improved my results.
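The "two person detections within 5 minutes, by any camera" night rule above is a simple debounce. A minimal sketch (class and method names are mine, not from any BI or DeepStack API):

```python
# Sketch of the night-time rule: only notify once a person has been
# detected at least twice within a sliding 5-minute window.
import time
from collections import deque

class NightPersonGate:
    def __init__(self, window_seconds=300, required_hits=2):
        self.window = window_seconds
        self.required = required_hits
        self.hits = deque()  # timestamps of recent person detections

    def person_detected(self, now=None):
        """Record a detection; return True when a notification should fire."""
        now = time.time() if now is None else now
        self.hits.append(now)
        # Drop detections that have aged out of the window
        while self.hits and now - self.hits[0] > self.window:
            self.hits.popleft()
        return len(self.hits) >= self.required
```

A lone false positive (the dog at 2 a.m.) never fires; two detections inside the window, from any combination of cameras, do.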
What kind of cameras do you use? How many MP? Any PTZ?
 
Can I make a new folder called faces?
Yes, create a folder called Faces in your BI location (where New, Stored, etc. are located). Then go to General Settings -> Clips and Archiving and, on the left side, click one of the Aux folders. Point the folder location to your Faces folder, then click the Aux name and rename it.
 
For those of you who think in an organized, alphabetical manner, here's the current list of objects detected, in alphabetical order. At least I think it's current.

airplane
apple
backpack
banana
baseball bat
baseball glove
bear
bed
bench
bicycle
bird
boat
book
bottle
bowl
broccoli
bus
cake
car
carrot
cat
cell phone
chair
clock
couch
cow
cup
dining table
dog
donut
elephant
fire hydrant
fork
frisbee
giraffe
hair dryer
handbag
horse
hot dog
keyboard
kite
knife
laptop
microwave
motorcycle
mouse
orange
oven
parking meter
person
pizza
potted plant
refrigerator
remote
sandwich
scissors
sheep
sink
skateboard
skis
snowboard
spoon
sports ball
stop sign
suitcase
surfboard
teddy bear
tennis racket
tie
toaster
toilet
toothbrush
train
truck
tv
umbrella
vase
wine glass
zebra
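In practice most people only care about a handful of these labels. A hypothetical post-processing helper (the allowlist and function name are mine) that keeps only the labels you want to alert on:

```python
# Sketch: drop DeepStack predictions whose label isn't in a per-camera
# allowlist, so a detected "sandwich" never generates an alert.
RELEVANT = {"person", "car", "truck", "dog", "bicycle", "motorcycle"}

def relevant_only(predictions, allowlist=RELEVANT):
    """Keep only predictions whose label appears in the allowlist."""
    return [p for p in predictions if p["label"] in allowlist]
```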
 
For those of you who think in an organized, alphabetical manner, here's the current list of objects detected, in alphabetical order. At least I think it's current.

(full list of objects quoted above)

I've always wondered why these objects were chosen. I can't understand why it's important that a sandwich is identified...
 
BI automatically excludes static objects after the first detection if it sees they have not moved. This is stated in the help file.

I also saw this in the help file, but my experience has been the opposite. Is there anything I can do? My BI server reboots every Sunday on schedule, and I have manually restarted it several times this week.

It sees motion, analyzes the image, and tags the exact same parked cars each time.

Here are the images in the alert list. It's easy to see the red truck in my driveway continually getting tagged.

Capture.PNG
 
I tried an experiment last night and shut off sub streams on the four cameras that are clones for my AI efforts. CPU utilization was pretty heavily impacted, going from under 20% to 45-50%. Two of them are 4MP, 20FPS, 8192CBR, and the other two are 2MP, 20FPS, 4096CBR. It made little noticeable difference in detection. The camera that gets broadside shots of the street did pick up roughly six out of 75 vehicles. They were all still missing obvious targets during full-light conditions as well; I noticed no change in that at all. Everything is back to normal with sub streams enabled again, since there was no noticeable difference at full resolution. The sub streams are all D1, 1024CBR, 20FPS.
 
I also saw this in the help file, but my experience has been the opposite. Is there anything I can do? My BI server reboots every Sunday on schedule, and I have manually restarted it several times this week.

It sees motion, analyzes the image, and tags the exact same parked cars each time.

Here are the images in the alert list. It's easy to see the red truck in my driveway continually getting tagged.

View attachment 86356
Same here. It has never worked.
 
I do have one camera that looks at our parked cars behind the house in our driveway, and DS would ID them if the camera triggered for a raccoon or whatever passing by; that would happen multiple times during the night. I took that camera off of the DS list for now. There's no street parking here, so that hasn't been a problem at all.
 
This is great stuff! I really appreciate it.

I have one question though... When an alert occurs, I have BI set up to publish an MQTT message. I would like to use the DeepStack "detection text" (or whatever it's called), e.g. "person:82%, cat:46%", as the MQTT message payload. It would be easy if there were a macro (variable text substituted based on context) holding that specific value, like the &PLATE macro containing the license plate captured with ALPR if configured, but I haven't found an entry for that in the documentation yet. Any ideas?

Ken told me it's already implemented. Maybe it's an undocumented feature, I can't tell. Anyway, use &MEMO. :)
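Once &MEMO lands in the MQTT payload, a home automation rule has to turn text like "person:82%, cat:46%" back into labels and confidences. A small sketch (the exact separator and spacing BI emits may vary, so the parser is deliberately forgiving):

```python
# Sketch: parse an &MEMO-style detection string from an MQTT payload
# (e.g. "person:82%, cat:46%") into {label: confidence_percent} for use
# in openHAB or Home Assistant rules.
def parse_memo(memo):
    detections = {}
    for part in memo.split(","):
        part = part.strip()
        if ":" not in part:
            continue  # skip empty or malformed fragments
        label, conf = part.split(":", 1)
        detections[label.strip()] = int(conf.strip().rstrip("%"))
    return detections
```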
 
I'm being patient, I swear! Super nice feature, glad it's being implemented natively.

I did finally get the standalone original version working.
 
I also saw this in the help file, but my experience has been the opposite. Is there anything I can do? My BI server reboots every Sunday on schedule, and I have manually restarted it several times this week.

It sees motion, analyzes the image, and tags the exact same parked cars each time.

Here are the images in the alert list. It's easy to see the red truck in my driveway continually getting tagged.

View attachment 86356
It will keep getting tagged because it's part of the image, but it won't trigger an alert.
 
The plate ID in an MQTT message would be nice. Let me know what you get back on this; I would create an automation to arm and disarm with this feature. Do you use Home Assistant by chance?
I use openHAB at the moment. I depend heavily on MQTT as a central message bus. That's a good idea (arming and disarming); face recognition would also work, I guess. :)
 
I use openHAB at the moment. I depend heavily on MQTT as a central message bus. That's a good idea (arming and disarming); face recognition would also work, I guess. :)
I do as well. I use Home Assistant, and most of my communication is over MQTT. I was thinking that if BI/DS could recognize a license plate, such as my wife's or mine, it could then be used in an automation to disarm my alarm.
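The disarm-on-known-plate idea above boils down to comparing the recognized plate against a household allowlist. A sketch with hypothetical plate values; normalizing case and punctuation helps, since OCR output formatting varies:

```python
# Sketch: decide whether a plate recognized from an MQTT payload belongs
# to the household before disarming. KNOWN_PLATES values are placeholders.
KNOWN_PLATES = {"ABC1234", "XYZ9876"}

def normalize(plate):
    """Uppercase and strip spaces/dashes so OCR variants compare equal."""
    return "".join(ch for ch in plate.upper() if ch.isalnum())

def should_disarm(recognized_plate):
    return normalize(recognized_plate) in KNOWN_PLATES
```

An automation would then subscribe to the MQTT topic BI publishes to and call `should_disarm` on the payload before touching the alarm.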
 
I do as well. I use Home Assistant, and most of my communication is over MQTT. I was thinking that if BI/DS could recognize a license plate, such as my wife's or mine, it could then be used in an automation to disarm my alarm.
I'm not sure how the face recognition engine works with DeepStack, but could you register a license plate as a face?