CPU choice for Blue Iris server (2000 MP/s)

natureboy

n3wb
Nov 19, 2020
USA
I read through some of the guides and posts here and was wondering if I could get CPU advice for my Blue Iris server build. I plan on having 19 cameras (8MP) recording continuously in H.265. According to my calculations, at 10 fps I will hit 1520 MP/s; at 15 fps I will hit 2280 MP/s; at 20 fps I will top 3040 MP/s. I noticed that bp2008 tested the 3950X. If I understand correctly, at 1900 MP/s, CPU usage was still under 50%. If I bought the 3960X, would it be reasonable to assume that it could handle 2280 MP/s without any hiccups? If I had to choose between a Threadripper, EPYC, or i9 at a similar price range, can I expect the one that has the highest PassMark score to perform the best (lowest CPU usage at highest MP/s)? For example, if the 3960X and EPYC 7402P were similarly priced, can I expect the 3960X to perform better?
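Just to show where those numbers come from, here is a rough sketch of the pixel-rate math (it treats 8MP as exactly 8,000,000 pixels, the way BI's MP/s figure is usually quoted, and ignores any sub stream savings):

```python
# Pixel-rate sketch: MP/s = cameras * megapixels per frame * frames per second.
# Assumes "8MP" is exactly 8,000,000 pixels; real 4K sensors are close to that.
CAMERAS = 19
MEGAPIXELS = 8

for fps in (10, 15, 20):
    mp_per_sec = CAMERAS * MEGAPIXELS * fps
    print(f"{fps} fps -> {mp_per_sec} MP/s")
# 10 fps -> 1520 MP/s
# 15 fps -> 2280 MP/s
# 20 fps -> 3040 MP/s
```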
 
The numbers change significantly if you use sub streams. Just a SWAG, but if you use minimal-resolution sub streams you probably won't get over 30% CPU with a 7th generation or newer Intel.
 
See the Blue Iris Update Helper for systems that are currently running. I would strongly recommend using sub streams for motion detection in BI.
 
See the Blue Iris Update Helper for systems that are currently running. I would strongly recommend using sub streams for motion detection in BI.
Yeah, I recently learned how to apply the sub stream command/script to my 11 cameras. So with a 3rd gen i7 I'm now drifting around 6-9% CPU usage, versus the 32-45% I saw with main stream settings.
 
Yeah, I recently learned how to apply the sub stream command/script to my 11 cameras. So with a 3rd gen i7 I'm now drifting around 6-9% CPU usage, versus the 32-45% I saw with main stream settings.

I'll have to look into sub streams. Are they applicable even if I want to record continuously on all 19 cameras, as opposed to just motion detection?
 
Sure. The main stream, at full resolution, is what gets recorded and what's used for single camera display and clip playback. The sub stream is what's processed for motion detection and used for the multiple camera display on the console. That's where the savings in CPU utilization come from.

I've got a 6700K with 14 cameras. Four are 4MP, nine are 2MP, for a total of about 55 megapixels per second and the CPU rarely hits 10%.
 
What's the updated consensus on server PC setup for H.265 encoding and decoding? Are we still better off with an Intel chip with Quick Sync versus an Nvidia video card or something? Planning a server with 20-25 8MP cams.
 
Nvidia is nice but a total power hog. Figure 100 watts for a "lower end" card like a 1060 and 175 watts for a "higher end" card like a 2070 when they are loaded.
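To put those wattage figures in perspective, here is a rough running-cost sketch for a card loaded around the clock; the $0.13/kWh electricity rate is an assumption, not something from this thread:

```python
# Rough yearly running cost of a GPU loaded 24/7, using the wattage estimates above.
RATE_PER_KWH = 0.13          # assumed average electricity rate; adjust for your utility
HOURS_PER_YEAR = 24 * 365

for name, watts in (("GTX 1060-class", 100), ("RTX 2070-class", 175)):
    kwh_per_year = watts / 1000 * HOURS_PER_YEAR
    cost = kwh_per_year * RATE_PER_KWH
    print(f"{name}: {kwh_per_year:.0f} kWh/yr, about ${cost:.0f}/yr")
# GTX 1060-class: 876 kWh/yr, about $114/yr
# RTX 2070-class: 1533 kWh/yr, about $199/yr
```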
 
Sure. The main stream, at full resolution, is what gets recorded and what's used for single camera display and clip playback. The sub stream is what's processed for motion detection and used for the multiple camera display on the console. That's where the savings in CPU utilization come from.

I've got a 6700K with 14 cameras. Four are 4MP, nine are 2MP, for a total of about 55 megapixels per second and the CPU rarely hits 10%.
Great description, sebastiantombs.
 
I noticed that bp2008 tested the 3950X. If I understand correctly, at 1900 MP/s, CPU usage was still under 50%.

Yes.

If I bought the 3960X, would it be reasonable to assume that it could handle 2280 MP/s without any hiccups?

Assuming you used decent memory and had it configured properly, yeah I think so.

If I had to choose between a Threadripper, EPYC, or i9 at a similar price range, can I expect the one that has the highest PassMark score to perform the best (lowest CPU usage at highest MP/s)? For example, if the 3960X and EPYC 7402P were similarly priced, can I expect the 3960X to perform better?

I haven't seen enough high-end builds to really know for sure, but my gut tells me the winner between those two would be whichever could get the most bandwidth out of the memory. If the EPYC could run 8 memory channels all at the same MHz as a Threadripper build running 4 channels, then my guess is the EPYC would win despite having lower CPU clockspeeds and consequently a lower CPU Mark score.
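As a back-of-the-envelope illustration of that point, theoretical peak bandwidth is just channels times transfer rate times 8 bytes per transfer; the 3200 MT/s speed below is an assumed example, not a spec quoted in this thread:

```python
# Peak DDR4 bandwidth sketch: channels * MT/s * 8 bytes per transfer.
def peak_bandwidth_gbs(channels: int, mts: int) -> float:
    return channels * mts * 8 / 1000  # GB/s

print(f"Threadripper, 4 channels @ 3200 MT/s: {peak_bandwidth_gbs(4, 3200):.1f} GB/s")
print(f"EPYC,         8 channels @ 3200 MT/s: {peak_bandwidth_gbs(8, 3200):.1f} GB/s")
# Threadripper, 4 channels @ 3200 MT/s: 102.4 GB/s
# EPYC,         8 channels @ 3200 MT/s: 204.8 GB/s
```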

That said, I agree with the others recommending you just use sub streams and run a regular desktop system that is cheaper and less power-hungry. Intel Quick Sync is still the best hardware acceleration option for efficiency -- when I tested it, it actually reduced overall power consumption. Nvidia's implementation on the other hand raises power consumption (by a lot!). You shouldn't use Nvidia acceleration unless the CPU is underpowered and can't handle the load otherwise.

As noted in the sub stream guide, one 704x480 resolution sub stream is only about 4% of the load of one 3840x2160 main stream. If you enable sub streams for most of the cameras, you could get away with even a relatively low-end 10th gen Intel CPU.
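That 4% figure falls straight out of the pixel counts, assuming decode load scales roughly with pixels per frame (which is the premise of the sub stream guide):

```python
# Sub stream vs. main stream pixel count per frame.
sub_pixels = 704 * 480      # 337,920
main_pixels = 3840 * 2160   # 8,294,400 (an 8MP / 4K main stream)
print(f"Sub stream is {sub_pixels / main_pixels:.1%} of the main stream's pixels")
# Sub stream is 4.1% of the main stream's pixels
```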
 
I'm thinking of a possible alternative route: a server with a Xeon CPU. According to Intel's site, the newest Xeon CPUs that support QSV are the W-10855M (PassMark score of 17353) and the W-1270 (PassMark score of 19704). They're pretty small sample sizes for those two CPUs, though. I've looked at the Blue Iris Update Helper database and looked at some of the Xeon systems. If I were to go with Xeon (due to a handful of logistical reasons), would it be better to choose a higher-end Xeon without Intel QSV but with a PassMark score similar to the i9-10900K, or would it be better to choose one that has QSV but a PassMark score (limited sample size) about half that of the i9-10900K?
 
The W-10855M is a mobile CPU for notebooks, with only 6 cores. The W-1270 has only 8 cores.

Fastest CPUs with Intel iGPU are:
Intel Xeon W-1290P, 10C/20T, 3.70-5.30GHz
Intel Core i9-10900K, 10C/20T, 3.70-5.30GHz
Both are for socket 1200 and are essentially identical CPUs; the Xeon just has ECC memory support enabled.

Real Xeon CPUs (for multi-socket systems or workstations, all without an iGPU) have more cores but are crazy expensive, and AMD's new CPUs are simply faster but also pricey. I would also go for a desktop system, enable sub streams, and add a low-power Nvidia card later if necessary. In the BI stats you'll find an i9-9900K with an Nvidia GTX 970 that handles 49 cams @ 3572.7 MP/s.
 
FWIW, using sub streams makes a significant difference in CPU usage, at least on my system: 12 cameras, 24x7x365 recording (D2D), with motion detection, on an older Intel i5-4950 @ 3.30GHz. Since moving to BI v5, CPU usage had climbed, hovering in the ~70+% range. I enabled sub streams, and it brought it down to a rather consistent ~35%.

I have a mix of cameras - 1.3MP, 2MP, and 5MP. Three have audio feeds, and one of those is a PTZ.
 