Deepstack picture resolution

Cameraguy

Known around here
Joined
Feb 15, 2017
Messages
1,486
Reaction score
1,132
Is there a way to adjust the resolution of the images that are sent to DeepStack?
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,972
Reaction score
48,690
Location
USA
Up the bitrate and resolution of the substream or use mainstream only IF you have a powerful enough machine.
 

Cameraguy

Known around here
Joined
Feb 15, 2017
Messages
1,486
Reaction score
1,132
wittaj said: "Up the bitrate and resolution of the substream or use mainstream only IF you have a powerful enough machine."
So it's definitely pulling images from the sub stream if sub streams are enabled?
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,972
Reaction score
48,690
Location
USA
Yes, which is problematic for some fields of view when it comes to faces. For almost anything else it is OK within reason.
 

Cameraguy

Known around here
Joined
Feb 15, 2017
Messages
1,486
Reaction score
1,132
wittaj said: "Yes, which is problematic for some fields of view when it comes to faces. For almost anything else it is OK within reason."
I might have to pick and choose which cameras use the sub stream from now on. For the ones that detect a lot of people and faces, maybe I'll go to main stream.
 

nickh66

n3wb
Joined
Mar 21, 2021
Messages
17
Reaction score
12
Location
Australia, NSW, Parramatta
Perhaps this will help.
Buried in Camera settings > Trigger tab > Motion sensor, Configure > Advanced, there is a "High definition" option.
Quote from the help PDF: "By default, to save CPU and smooth out noise, the image is reduced by considering it in blocks. The High definition option actually increases the number of motion detection blocks that are used, by typically 4x."
 

Spuds

n3wb
Joined
Nov 12, 2018
Messages
20
Reaction score
12
Location
TN, USA
I use Trigger > when triggered, high-res JPEG files ... that is what gets sent to DS to analyze. You should not have to mess with your sub or main streams.
 

cb8

Getting comfortable
Joined
Jan 16, 2017
Messages
111
Reaction score
64
One thing to keep in mind is that DeepStack will scale the image to the required input size of the model: 256x256, 416x416, or 640x640, depending on whether you have selected low, medium, or high settings in DeepStack. Using the sub stream as input to DeepStack therefore makes sense, as it already matches the required input size fairly closely, while using the main stream wastes significant resources to first decode the main stream and then downsize the image. After an object has been identified, though, saving hi-res alerts is certainly appealing.
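The wasted decode work can be sketched with a bit of arithmetic. This is a rough illustration only: the 640x640 figure is DeepStack's "high" model input as described above, and the 1280x720 sub stream and 2560x1440 (4MP) main stream resolutions are assumed example values, not anyone's actual settings.

```python
# Rough cost comparison: pixels decoded per frame vs. pixels the
# detection model actually keeps after DeepStack scales the image.
# Assumes the "high" setting (640x640 model input), per the post above.
MODEL_INPUT = 640 * 640  # pixels analyzed by the model

def decode_overhead(width, height):
    """Ratio of pixels decoded to pixels the model input retains."""
    decoded = width * height
    return decoded / MODEL_INPUT

# Example sub stream, 1280x720: decode ~2.25x the model input.
print(decode_overhead(1280, 720))   # 2.25

# Example 4MP main stream, 2560x1440: decode 9x the model input,
# all of which is thrown away again when the image is downsized.
print(decode_overhead(2560, 1440))  # 9.0
```

The point being that the main stream makes the CPU decode several times more pixels per frame, only for DeepStack to immediately shrink the result back down.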
 