So I decided to do a 1000 MP/s build: 20 cameras in total, a mix of 4MP and 2.1MP, some running at 15 FPS and some at 25 FPS.
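For anyone wondering where the MP/s number comes from, it's just megapixels per frame times frames per second, summed across all cameras. A rough sketch of that arithmetic in Python; the split below is only an illustrative mix, not my exact camera list:

# Rough Blue Iris load estimate: sum of (megapixels x FPS) per camera.
# The camera mix here is a hypothetical example for illustration only.
cameras = (
    [(4.0, 15)] * 12 +   # twelve 4MP cameras at 15 FPS
    [(2.1, 15)] * 4 +    # four 2.1MP cameras at 15 FPS
    [(2.1, 25)] * 4      # four 2.1MP cameras at 25 FPS
)
total_mps = sum(mp * fps for mp, fps in cameras)
print(f"Total load: {total_mps:.0f} MP/s")   # ~1056 MP/s for this mix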
I don't have a lot of evidence to back this up, but I'm suspicious that higher FPS (e.g. 25 FPS) can result in disproportionately worse performance. What I'm saying is that maybe one camera at 30 FPS creates more of a load for
Blue Iris than two otherwise identical cameras at 15 FPS. I'd suggest running everything at 15 FPS or below.
I work in IT Security for a defence contractor, so they have all these new parts around. I decided to build with whatever was lying around and found some interesting results.
Decided to use an i7 8700 (4 cores, 8 threads) with 16 GB RAM, a Samsung 960 NVMe 256GB M.2 for the OS, and WD Purple drives connected directly to the SATA ports for storage. I have gone through many of Fenderman's tips and most of them were of use to me: direct-to-disc only, reduce to 15 FPS, etc. However, even at 15 FPS and only 780 - 800 MP/s, the system was getting choked at 100% utilization, even with Quick Sync hardware acceleration enabled in Windows 10 Pro. Nothing I changed on the system would reduce the utilization and pressure on the CPU.
You've got something configured wrong. Check this guide; it sounds like you've attempted all the important stuff already, but I'm betting something isn't working as you expected.
Optimizing Blue Iris's CPU Usage | IP Cam Talk
i7-8700 is 6 cores, 12 threads, not 4/8. Double-check which model you used. Though I would expect even a 4 core to be able to handle 800 MP/s better than that.
Was it reaching 100% even with BI in service mode?
Because my i7-8700K here is running 820 MP/s at just... well... let me show you...
That is with BI running in service mode, console closed. I don't know why usage always shows so much higher on the Processes tab, but it always does. Drives me nuts, honestly, because you never know where someone else is reading their % numbers from.
Below is with the console open. Still way below your usage levels.
Note the GPU usage statistics too (requires a fairly recent Win10 feature update). GPU 0 in this case is the Intel GPU that is built in to the CPU. Seeing a load on the "Video Decode" engine indicates that Quick Sync is working. GPU 1 here is an Nvidia GT 1030 card I added specifically to offload rendering work while the console is open, and it makes a substantial difference in CPU usage when Blue Iris's console is open at 4K resolution. That is the only benefit of a discrete GPU, and it doesn't appear to matter how fast the GPU is, so I suggest using the lowest power GPU you have available for that purpose. E.g. a GT 710 or 1030. Some low-end AMD card may work the same, but I haven't tested one.
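If you'd rather not eyeball Task Manager, the same per-engine GPU counters appear to be exposed to typeperf on recent Win10 builds, so you can script a quick check. This is just a sketch; the "GPU Engine" counter set is my assumption about what Task Manager uses under the hood, so confirm the names with typeperf -q "GPU Engine" first:

# Print any GPU "Video Decode" engine showing in the perf counters, as a
# sanity check that Quick Sync decoding is actually being used.
# Assumes the "GPU Engine" counter set (Win10 Fall Creators Update or later).
import csv, io, subprocess

out = subprocess.run(
    ["typeperf", r"\GPU Engine(*)\Utilization Percentage", "-sc", "1"],
    capture_output=True, text=True, check=True,
).stdout

rows = [r for r in csv.reader(io.StringIO(out)) if r]
header, sample = rows[0], rows[1]      # counter names, then one sample
for name, value in zip(header, sample):
    if "engtype_videodecode" in name.lower():
        print(f"{name}: {value}%")

If no Video Decode engine shows any load while the cameras are streaming, hardware decode probably isn't actually engaged.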
One other thing, make sure you've done the hack to fix quick sync. On 8th gen, it is a little more complicated than just having the right driver installed.
Memory Leak: Quick Sync (Hardware Acceleration) | IP Cam Talk
Oh and it couldn't hurt to check Windows's power options advanced settings, under the "Processor power management" section, to make sure there isn't somehow an artificial limit on your CPU speed there.
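If you'd rather check that limit from a script than click through the power options UI, powercfg can query the "Maximum processor state" setting directly. A sketch below; SCHEME_CURRENT, SUB_PROCESSOR, and PROCTHROTTLEMAX are the documented aliases for that setting, but run powercfg /aliases to confirm on your build:

# Query the active power plan's "Maximum processor state" setting.
# 0x00000064 (100%) means no artificial cap on CPU speed.
import subprocess

out = subprocess.run(
    ["powercfg", "/query", "SCHEME_CURRENT", "SUB_PROCESSOR", "PROCTHROTTLEMAX"],
    capture_output=True, text=True, check=True,
).stdout
print(out)
# Check the "Current AC Power Setting Index" line: anything below 0x00000064
# means Windows is capping the processor's maximum state.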
I feel GPU CUDA acceleration is the future of Blue Iris. So against all the wisdom provided here, I went to the shelf and built another server using an i7 7820x.
Used an i7 7820x (8 cores, 16 threads) with 16GB RAM, an LSI RAID 5 controller with 7 x 3TB WD Purple drives for storage, and a Samsung 960 256GB NVMe M.2 for the OS. Got slightly past 1000 MP/s at 66% utilization when not minimised, with Blue Iris itself showing 86% CPU usage; it was drawing 77W at that point. When minimised and just running as a service, it dropped down to 38% and 69W. Also decided to run the OS on Windows Server 2016 instead of Windows 10, since the i7 7820x doesn't have Quick Sync support anyway. The power supply is an HX850 Platinum spec unit, currently running at 91% efficiency.
This does kind of make it sound like hardware acceleration wasn't working on your other system. You should be aware that H.265 acceleration doesn't work currently, and you need to use H.264.
However, I'm really eager to see if I can get an i7 8700 running >1000 MP/s with no problems. Going to try different camera manufacturers and see if I can get there.
Different camera manufacturers shouldn't make any difference; they're all producing basically the same thing. Make sure any vendor-specific encoding extensions are disabled, though: things like "Smart codec", "Smart coding", or "H.264+". "H.264H", however, is fine if you see that in the options, since that just means High profile, which is pretty standard.
Also will be testing with i9 7980XE, 18 core, 36 thread CPU for kicks.
That should easily blow away everything else, though it is overkill for sure.