What you see in a snapshot capture and what DeepStack sees in real time can be two different things. Contrast, as wittaj mentioned, is another consideration, and it matters even more when the object is at the edge of the image. DeepStack is not human, and your computer isn't a brain. The "objects" model was trained on a limited set of examples, so if the capture isn't close to something it has seen, it can't detect it.
Another hint: don't use "high resolution" for the analysis captures. Blue Iris downsizes the images to 1080p before sending them anyway, so high res isn't really much help IMHO; it just adds a little extra CPU load to downsize a 2K or 4K image.
I'd also suggest trying the "combined" model rather than the default "objects" model. It has far fewer classes (no giraffe, elephant, zebra, and so on), so it runs much more quickly.
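If you want to poke at a custom model outside of Blue Iris, DeepStack serves custom models on its `/v1/vision/custom/<model>` endpoint. A minimal sketch, assuming DeepStack is listening on localhost port 80 and a custom model named "combined" has been loaded from the custom-models folder (adjust host, port, and model name to your setup):

```python
# Sketch: query a DeepStack custom model and filter its predictions.
# The host/port and model name are assumptions; change them to match
# your DeepStack install.
import json
import urllib.request

DEEPSTACK_URL = "http://localhost:80/v1/vision/custom/combined"

def detect(image_path: str, url: str = DEEPSTACK_URL) -> dict:
    """POST a JPEG to the custom-model endpoint and return the JSON reply."""
    boundary = "deepstackform"
    with open(image_path, "rb") as f:
        image = f.read()
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="image"; filename="snap.jpg"\r\n'
        "Content-Type: image/jpeg\r\n\r\n"
    ).encode() + image + f"\r\n--{boundary}--\r\n".encode()
    req = urllib.request.Request(
        url, data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def confident_labels(reply: dict, min_conf: float = 0.6) -> list:
    """Keep only the labels whose confidence clears the floor."""
    return [p["label"] for p in reply.get("predictions", [])
            if p["confidence"] >= min_conf]
```

The reply contains a `predictions` list of label/confidence/bounding-box entries, so filtering on a confidence floor (like Blue Iris's own "confirm" percentage) is a one-liner.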
I figured out how to download the images that DeepStack used to train the object model. With these images I can start to build the community custom DeepStack model; below are the steps to create it. The first step would be to create a new DeepStack custom model using the same image...
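For anyone curious what the training data for a custom model looks like: DeepStack's custom-model trainer takes YOLO-style annotations, i.e. one `.txt` file per image with one line per object, giving a class id plus a box centre and size normalised to the image dimensions. A small sketch of that conversion (the class id and frame size here are just illustrative, not from the linked thread):

```python
# Sketch: convert a pixel bounding box into the YOLO-style label line
# used by DeepStack's custom-model training data. Class ids and the
# 1920x1080 frame below are made-up examples.

def yolo_label(class_id, box, img_w, img_h):
    """box is (x_min, y_min, x_max, y_max) in pixels."""
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2 / img_w   # normalised box centre
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w         # normalised box size
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# e.g. an object filling the left half of a 1920x1080 frame:
print(yolo_label(0, (0, 0, 960, 1080), 1920, 1080))
```

Every value except the class id is a fraction of the frame, which is why the same labels work whether DeepStack later trains or runs at a different resolution.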
One more comment: this is the real world, not Hollywood, and it takes some tuning and tweaking to get things working well.