Theoretically, the motion detection algorithm should give better results from a higher quality stream even if it uses the same number of pixels from the lower and the higher quality streams.
Yes, but only if the lower quality stream was really bad. Like, so bad that the compression artifacts still had a significant impact on the image after it was downscaled to the ridiculously low resolution Blue Iris uses for motion detection.
Blue Iris's help file hints at this.
By default, to save CPU and smooth-out noise, the image is reduced by considering it in
blocks. The High definition option actually increases the number of motion detection blocks
that are used by typically 4x.
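Blue Iris's internals aren't public, so this is just to make the "blocks" idea concrete: a rough NumPy sketch of averaging a frame down to a coarse grid and counting how many blocks changed. The grid sizes, the threshold, and the function names are all made up for illustration; the only part grounded in the help file is that "High definition" multiplies the block count by roughly 4x (here, doubling the grid in each dimension).

```python
# Hypothetical sketch of block-based motion detection. Grid sizes and the
# change threshold are invented; only the 4x block-count idea comes from
# the Blue Iris help file.
import numpy as np

def reduce_to_blocks(gray_frame: np.ndarray, blocks_x: int, blocks_y: int) -> np.ndarray:
    """Average a grayscale frame down to a small grid (one value per block)."""
    h, w = gray_frame.shape
    # Crop so the frame divides evenly into blocks, then average each block.
    h_crop = (h // blocks_y) * blocks_y
    w_crop = (w // blocks_x) * blocks_x
    g = gray_frame[:h_crop, :w_crop].astype(np.float32)
    return g.reshape(blocks_y, h_crop // blocks_y,
                     blocks_x, w_crop // blocks_x).mean(axis=(1, 3))

def changed_blocks(prev: np.ndarray, curr: np.ndarray, threshold: float = 12.0) -> int:
    """Count blocks whose average brightness changed by more than the threshold."""
    return int(np.count_nonzero(np.abs(curr - prev) > threshold))

# Demo: brighten one region of a random frame to simulate motion.
frame_a = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
frame_b = frame_a.copy()
region = frame_b[400:600, 800:1100].astype(np.int16) + 40
frame_b[400:600, 800:1100] = np.clip(region, 0, 255).astype(np.uint8)

# "Standard" grid vs. a grid doubled in each dimension (4x the blocks).
standard = changed_blocks(reduce_to_blocks(frame_a, 32, 18), reduce_to_blocks(frame_b, 32, 18))
high_def = changed_blocks(reduce_to_blocks(frame_a, 64, 36), reduce_to_blocks(frame_b, 64, 36))
print(f"changed blocks -> standard grid: {standard}, high definition grid: {high_def}")
```

The point of the finer grid is simply that the same moving object lands on more, smaller blocks, so small or distant motion is less likely to be averaged away.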
I've noticed that the "High definition" option shrinks the size of the blocks in the motion zone editor. This implies that the grid in the motion zone editor is equal to the resolution of the frames fed into the motion detector. If that is true, then the frames received by the motion detector would look something like these:
Just to prove a point, one of these images was captured from a 4K source and the other from its D1 sub stream. I downscaled both to 120x68 using linear scaling, then upscaled them to 640x363 using nearest neighbor so you can more easily see the details without using browser zoom. Unfortunately it is snowing right now, so the falling snowflakes differ between the two frames, but otherwise they are virtually identical: the downsampling has masked what used to be a very substantial difference in image quality.
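If you want to reproduce the comparison with snapshots from your own camera, a quick Pillow sketch like this does the job. The file names are just placeholders for whatever snapshots you export.

```python
# Rough reproduction of the downscale/upscale step described above.
# File names are placeholders; BILINEAR stands in for "linear scaling".
from PIL import Image

for name in ("snapshot_4k.jpg", "snapshot_d1.jpg"):        # hypothetical snapshot files
    img = Image.open(name)
    small = img.resize((120, 68), Image.Resampling.BILINEAR)   # shrink to roughly motion-detector scale
    big = small.resize((640, 363), Image.Resampling.NEAREST)   # blow it back up so the coarse pixels stay visible
    big.save(name.replace(".jpg", "_mockup.png"))
```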
That said, there is also a difference in motion zone editor grid size between 4K resolution and D1 sub stream resolution, so it IS likely that Blue Iris is feeding smaller frames to the motion detector when you use a sub stream. If that is concerning, you could enable the "High definition" option, and that should more than make up for the loss of detail fed into the motion detector.