Hardware Requirements for 16-20 3MP Cams

I need to Google / better understand "hardware acceleration". Is that what the "Intel HD with Quicksync" discussions here refer to? Some chips have "Iris" graphics... is that the same thing, or something newer?
If the CPU isn't being fully utilized and there's no graphics card, where's the magic processing power coming from?
I'm planning a setup for approximately 12 4MP cams @ 15-30fps, recording full time (I probably don't need that frame rate, but...) with motion notification. It would be nice to use a NUC if it could handle it, so I could tuck it away, but I assumed it couldn't.

Iris Pro is better than the HD 530 graphics in the latest desktop CPUs. It supports Quicksync and will handle the h.264 streams in Blue Iris perfectly.

To decode video streams, the GPU part of the CPU has hardware designed specifically to process the h.264 stream quickly. The CPU part can only decode h.264 in software, which is much less efficient. It is far more power-efficient to process data with job-specific hardware than with general-purpose hardware executing code to do the same work.

"12 4MP cams @ 15-30 fps" I don't think there are reasonably-priced 4MP surveillance cameras that do 30fps. Anything can be done with enough money, but you're probably looking at 4MP 20fps cameras.

The i7-6770HQ would also be overkill for your setup, but it would work. I'd suggest 8GB of memory. You're stuck with M.2 SSDs for local storage - expect to spend at least $800 on hardware to get started ($650 NUC + $100-200 SSD + $30 for memory).

Do not use h.265 video streams. Quicksync will be unable to help, and video quality may be lower. Stick with h.264, which the Iris Pro graphics can decode in hardware.
 
A person should be able to run at least 30 cameras on that CPU, yes. I'd estimate roughly 50-60% full-core usage (25-30% in Windows Task Manager, which counts hyperthreading) for 20x 3MP @ 15fps, direct-to-disk, utilizing hardware acceleration. I wouldn't be surprised if 40 cameras were possible.
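As a rough sketch of that full-core math (the helper function is made up for illustration, and the 2x factor is only a rule of thumb for a 4-core/8-thread chip):

```python
# Hypothetical helper (not part of Blue Iris) showing the conversion
# described above: Windows Task Manager counts all 8 logical threads,
# so on a 4C/8T CPU the physical-core load is roughly double.
def full_core_usage(task_manager_pct: float) -> float:
    """Approximate physical-core load from the Task Manager figure."""
    return min(task_manager_pct * 2, 100.0)

print(full_core_usage(27.5))  # 55.0 -> the "50-60% full-core" estimate
```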

My older i7-3770 averages 8% usage (16% full-core) with my 13 cameras at 10fps and max bitrate. Hardware acceleration makes the latest and greatest CPUs no longer necessary. Gone are the years of my overclocked Core2Quad running at 35% CPU usage with 3 cameras. People can't complain about BI's CPU usage anymore - that was probably the biggest reason many people stayed away from BI. I want to add dozens more cameras now just because it's possible. Excellent software and support, that's for sure!
There is no way 30 high resolution cameras (2-4mp) will run on that cpu unless the frame rates are set really low. Particularly when they start recording or if the user keeps BI open and displaying.
 
Yes, you need mirrored drives if you don't want issues to appear and lower system uptime. I'm not using an SSD for storage - I've seen many SSDs die at work in workstations, and SLC-based SSDs in servers, in under a year of duty.
You have a serious problem if you are seeing that many SSD failures - either garbage drives or something else going on. My experience is not a one-off. I manage many PCs, 50-100 at any given time.
 
There is no way 30 high resolution cameras (2-4mp) will run on that cpu unless the frame rates are set really low. Particularly when they start recording or if the user keeps BI open and displaying.

OK, good to know. So we're back to it NOT being a good idea to use one of these NUCs?
OP wants 16-20 3MP cams, I'm looking at up to 12 4MP cams (max scenario).

The NUC form factor is not a must-have for me; it just gives me a lot more hiding-place options if it's feasible.
 
You have a serious problem if you are seeing that many SSD failures - either garbage drives or something else going on. My experience is not a one-off. I manage many PCs, 50-100 at any given time.

Personally I don't see high failure rates on my home systems, and I don't mirror SSDs. At work it's another matter: around 10% over 2 years in workstations, 50% in the datacenter over 1 year. The workstation drives aren't at 20% usage - they are upwards of 70-95% filled. On the servers, the web servers do better, but we have had failures. Databases saw high failure rates with SLC SSDs, so we went back to platter drives with huge caches. The SSDs' performance improvement was drastic, but the reliability on the servers just isn't there.

We mostly have Intel and Samsung branded SSDs.

Bigger issue still: the TLC SSDs that people are buying up for write-heavy usage will have high reallocation counts not far into their life. A cheap Samsung EVO or equivalent is fine for a home system, but Blue Iris will not be kind to it, and the warranty is void when they see the number of writes tracked by the SSD. An SSD may work fine for years if someone only has a handful of cameras and no heavy shadow cover from windy vegetation.

Top of my wishlist right now is better shadow detection - we have huge oak trees, and on partly cloudy days all cameras often record for 10 hours a day, while other days I only get one or two clips. Forever tweaking the motion triggering...
 
There is no way 30 high resolution cameras (2-4mp) will run on that cpu unless the frame rates are set really low. Particularly when they start recording or if the user keeps BI open and displaying.

Why not? I can spin up BI on my 6700K if you want, and clock it down to the same speeds if we want some raw data.

My i7-3770 system runs great with my cameras. I'd need to add the system to performance metrics to get averages, but with most of the CPU time showing 6-8%, averaging 7%, on a stock 4-core, 8-thread CPU running 13 cameras, there's no reason the 6770HQ would be unable to perform well if we extrapolate the data.

Let me set all my cams to max frame rate and I'll grab some more applicable data, since 9 of mine are at 10fps (still from back when HA wasn't working). The 6770HQ is a beast that can sustain 3.1GHz with all cores loaded - not far from the 3.7GHz sustained all-cores-loaded on the i7-6700. The 6700 and 6770HQ aren't much different: a little less cache and roughly 20% lower frequency?

Now out of curiosity, I'll give it a try tonight.
 
Just tried three sets of frame rate tests. Went into each camera and changed the frame rate, iframe interval, checked framerate in BI, and restarted BI service. Running on Windows 10 as a service.

i7 3770 stock, all cams set to max bitrate each supports. Not sure if it matters, but all cams are set to high-def motion detection.

22% CPU usage (over 8-threads) 11x 3MP@25fps, 1x2MP@30fps, 1x1MP@30fps (max res+fps settings on each camera)
17% CPU usage (over 8-threads) 11x 3MP@20fps, 1x2MP@20fps, 1x1MP@20fps
12% CPU usage (over 8-threads) 11x 3MP@15fps, 1x2MP@15fps, 1x1MP@15fps

And with just 10-cameras enabled:
11-12% CPU usage (over 8-threads) 10x 3MP@20fps

Double this figure to extrapolate for twice the number of cameras and you get 24% CPU usage with twenty 3-megapixel cameras at 20fps with entire-frame motion detection, on an older i7-3770. Lower the frame rate to 15fps, raise the resolution to 4MP, and you have effectively the same number of pixels to process each second.
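The "same pixels per second" point can be sanity-checked in a few lines (the function name is just for illustration):

```python
def pixel_rate(cams: int, megapixels: float, fps: float) -> float:
    """Total megapixels decoded per second across all cameras."""
    return cams * megapixels * fps

print(pixel_rate(20, 3, 20))  # 1200 MP/s for 20x 3MP @ 20fps
print(pixel_rate(20, 4, 15))  # 1200 MP/s for 20x 4MP @ 15fps - identical
```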

Skylake is faster per clock than my 3-generation-older Ivy Bridge CPU. The 6770HQ is close to the 3770 in raw processing power, maybe behind just a tiny bit based on lower frequency. I do have my 6700K workstation I can test with later, say with 10 cameras set to 3MP@20fps and others disabled.

As fenderman had mentioned though, the UI will take up some CPU time. When I'm viewing cameras via unrestricted Remote Desktop at a 1440P window size, I see about 10% higher CPU usage. Closer to 5% higher CPU if just viewing via web interface. I never view locally - system is tucked away and running headless. This is where some CPU headroom is needed.
 
Personally I don't see high failure rates on my home systems, and I don't mirror SSDs. At work it's another matter: around 10% over 2 years in workstations, 50% in the datacenter over 1 year. The workstation drives aren't at 20% usage - they are upwards of 70-95% filled. On the servers, the web servers do better, but we have had failures. Databases saw high failure rates with SLC SSDs, so we went back to platter drives with huge caches. The SSDs' performance improvement was drastic, but the reliability on the servers just isn't there.

We mostly have Intel and Samsung branded SSDs.

Bigger issue still: the TLC SSDs that people are buying up for write-heavy usage will have high reallocation counts not far into their life. A cheap Samsung EVO or equivalent is fine for a home system, but Blue Iris will not be kind to it, and the warranty is void when they see the number of writes tracked by the SSD. An SSD may work fine for years if someone only has a handful of cameras and no heavy shadow cover from windy vegetation.

Top of my wishlist right now is better shadow detection - we have huge oak trees, and on partly cloudy days all cameras often record for 10 hours a day, while other days I only get one or two clips. Forever tweaking the motion triggering...
What brand of SSD? There is something VERY wrong if you have a 50 percent failure rate. Either with the SSD or the server.
You can use the new motion zones to eliminate most false alerts.
You can get many years' worth of recordings on a cheap SSD. For example, the Crucial MX300 is rated at 220TB written. Even at 2TB per month you are looking at about 9 years. SSDs have also been tested and proven to exceed their rated write capacity.
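Spelling out that endurance math (using the 220TB rating and an assumed 2TB/month recording load):

```python
rated_tbw = 220       # TB written - Crucial MX300 endurance rating
tb_per_month = 2      # assumed Blue Iris write load

years = rated_tbw / tb_per_month / 12
print(round(years, 1))  # 9.2 years before the rating is exhausted
```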
 
Just tried three sets of frame rate tests. Went into each camera and changed the frame rate, iframe interval, checked framerate in BI, and restarted BI service. Running on Windows 10 as a service.

i7 3770 stock, all cams set to max bitrate each supports. Not sure if it matters, but all cams are set to high-def motion detection.

22% CPU usage (over 8-threads) 11x 3MP@25fps, 1x2MP@30fps, 1x1MP@30fps (max res+fps settings on each camera)
17% CPU usage (over 8-threads) 11x 3MP@20fps, 1x2MP@20fps, 1x1MP@20fps
12% CPU usage (over 8-threads) 11x 3MP@15fps, 1x2MP@15fps, 1x1MP@15fps

And with just 10-cameras enabled:
11-12% CPU usage (over 8-threads) 10x 3MP@20fps

Double this figure to extrapolate for twice the number of cameras and you get 24% CPU usage with twenty 3-megapixel cameras at 20fps with entire-frame motion detection, on an older i7-3770. Lower the frame rate to 15fps, raise the resolution to 4MP, and you have effectively the same number of pixels to process each second.

Skylake is faster per clock than my 3-generation-older Ivy Bridge CPU. The 6770HQ is close to the 3770 in raw processing power, maybe behind just a tiny bit based on lower frequency. I do have my 6700K workstation I can test with later, say with 10 cameras set to 3MP@20fps and others disabled.

As fenderman had mentioned though, the UI will take up some CPU time. When I'm viewing cameras via unrestricted Remote Desktop at a 1440P window size, I see about 10% higher CPU usage. Closer to 5% higher CPU if just viewing via web interface. I never view locally - system is tucked away and running headless. This is where some CPU headroom is needed.
You cannot simply extrapolate like that. Consider that when you have more cams, you are also likely to have more simultaneously recording cameras, which will raise CPU considerably. This will also raise the CPU when the console is open, and most folks run BI with the console open. If you run as a service, you actually need more headroom because of the added CPU consumption when you remote in. You are also assuming all the cameras will be 2MP; higher-res cameras will have a significant impact.
You will also find that the 2.6ghz clock speed of the hq series processor will have a significant performance impact on BI. If you give it a try, let us know how it turns out.
 
What brand of SSD? There is something VERY wrong if you have a 50 percent failure rate. Either with the SSD or the server.
You can use the new motion zones to eliminate most false alerts.
You can get lots years worth of recordings on a cheap ssd. For example the crucial mx300 is rated at 220TB written. Even at 2tb per month you are looking at 9 years. SSD's have also been tested and proven to exceed their rated write capacity.

I'd have to check, but I believe the last ones were Intel P3600-series PCIe SSDs. Crazy fast, but we still had issues. We went back to traditional drives and just expanded the replication to 12 servers to handle the load.

Don't get me wrong - I love SSDs, but I wouldn't yet trust one SSD more than two (different-batch) platter disks. You can see why, given my experiences, I keep up to 4 copies of data at home - I've seen many times how data integrity can fail. I'm surprised magnetic storage doesn't go bad more often. I expect that when SSD prices come down just a little more, I will be replacing my BI disks with mirrored SSDs, but not at twice the price of a decent disk drive.
 
You cannot simply extrapolate like that. Consider that when you have more cams, you are also likely to have more simultaneously recording cameras, which will raise CPU considerably. This will also raise the CPU when the console is open, and most folks run BI with the console open. If you run as a service, you actually need more headroom because of the added CPU consumption when you remote in. You are also assuming all the cameras will be 2MP; higher-res cameras will have a significant impact.
You will also find that the 2.6ghz clock speed of the hq series processor will have a significant performance impact on BI. If you give it a try, let us know how it turns out.

I'll set it up for a few minutes. I don't know under which circumstances the CPU runs at base clock vs all-core-turbo so I will grab data at both 2.6GHz and 3.1GHz. The 6770HQ should sit at 3.1GHz with all cores loaded.

It's easy enough to move the license between systems. Another thing BI does much better than other small-company software I've used.

If I run 5 cameras and get 5% CPU, then 10 hit 10% CPU, then it scales quite well. Direct-to-disk isn't too CPU intensive for recording. If I trigger a bunch of cameras manually, I don't notice CPU usage increases, but the files are recorded.

As far as I've seen everyone here is talking about using D2D.
 
I'll set it up for a few minutes. I don't know under which circumstances the CPU runs at base clock vs all-core-turbo so I will grab data at both 2.6GHz and 3.1GHz. The 6770HQ should sit at 3.1GHz with all cores loaded.

It's easy enough to move the license between systems. Another thing BI does much better than other small-company software I've used.

If I run 5 cameras and get 5% CPU, then 10 hit 10% CPU, then it scales quite well. Direct-to-disk isn't too CPU intensive for recording. If I trigger a bunch of cameras manually, I don't notice CPU usage increases, but the files are recorded.

As far as I've seen everyone here is talking about using D2D.
There is a significant spike even when using direct to disk.
EDIT: You are correct - I tested recording 8 3MP cameras simultaneously and the usage did not increase significantly (maybe 10% of the current use, 23 to 26). This was not the case even when D2D was introduced. It may be related to the hardware acceleration... will have to test. EDIT 2: it's not related to HA... but there must have been an improvement at some point. I had not tested continuous record in a long time.
Trust me, you will have issues. It's important to never use the number of cameras as a metric... there is a significant difference between 30 2MP cameras and 30 4MP cameras. You also cannot compare different processors by simply under-clocking one of them.
 
...Double this figure to extrapolate for twice the number of cameras and you get 24% CPU usage with twenty 3-megapixel cameras at 20fps with entire-frame motion detection, on an older i7-3770. Lower the frame rate to 15fps, raise the resolution to 4MP, and you have effectively the same number of pixels to process each second...

These are not valid assumptions/extrapolations imo. Physically test twice the number of cameras if you want real data.
 
These are not valid assumptions/extrapolations imo. Physically test twice the number of cameras if you want real data.

If you test 5 cameras and find half the load of 10 cameras, then in general 20 cameras will scale similarly. Yes, there are other variables that may come into play, but on an identical system where the only difference is doubling identical cameras, my past experience has shown it scales. If you guys are pulling values from "high-load servers quickly shoot from 40% CPU usage to 90%+", that is hyperthreading for you - useless.

I just triggered all cameras for 120-seconds - CPU usage with 10 cameras, all at 3MP/20fps, went up 3%. Triggering cameras with D2D has little impact. I've been using D2D since release due to this - we used to see huge CPU spikes when recording a clip. D2D took care of everything.

Earlier I accidentally had 11 cameras running, not 10 - one was hiding in a desktop window, not snapped into the BI interface.

i7 3770 stock, all cams set to max bitrate each supports. Not sure if it matters, but all cams are set to high-def motion detection.
1-2% CPU usage (over 8-threads) 2x 3MP@20fps
4-5% CPU usage (over 8-threads) 5x 3MP@20fps
9% CPU usage (over 8-threads) 10x 3MP@20fps
15% CPU usage (over 8-threads) 10x 3MP@20fps with BI interface open

Here's some data for everyone to chew on:
3% CPU usage (over 8-threads) 10x 1MP@20fps
9% CPU usage (over 8-threads) 10x 3MP@20fps

6% CPU usage (over 8-threads) 10x 1MP@20fps with BI interface open
15% CPU usage (over 8-threads) 10x 3MP@20fps with BI interface open

8% CPU usage (over 8-threads) 10x 1MP@20fps with BI interface open, all cameras triggered and recording
18% CPU usage (over 8-threads) 10x 3MP@20fps with BI interface open, all cameras triggered and recording

Appears to scale well to me. 720P seems to move between 3-4%, so the scaling is damn near perfect.
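For what it's worth, here's a quick per-camera breakdown of those service-only 3MP figures (midpoints used where I gave a range):

```python
# cams -> % CPU, service only, 3MP @ 20fps, from the table above
measured = {2: 1.5, 5: 4.5, 10: 9.0}

per_cam = {cams: pct / cams for cams, pct in measured.items()}
print(per_cam)  # roughly 0.75-0.9% CPU per camera at every count
```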

I am:
- Logging into each camera and updating resolution. Bitrate is set to max in all cases. Only the resolution is modified.
- Enabling or disabling cameras in BI UI
- Restarting the BI service (when I'm enabling or disabling cameras, sometimes the BI CPU usage skyrockets to 100% and stays there until the service is restarted)
- Looking at the BI service CPU usage
- Starting the BI UI
- Looking at the BI service CPU usage
- Manually triggering all cameras with a 120-second post-trigger recording

I'll get it set up soon on my Skylake CPU running at 2.6GHz and 3.1GHz for roughly-equivalent numbers to a 6770HQ.
 
There is a significant spike even when using direct to disk.
there is a significant difference between 30 2mp cameras and 30 4mp cameras. You also cannot compare different processors by simply under-clocking one of them.

Yes, the difference is twice the number of pixels to check. I never mentioned 2MP cameras, other than I have one 2MP cam that I disabled when testing the 10-cam setup.

Yes, you can underclock and compare to a CPU of the same architecture and get near-identical results. The only major difference between the HQ and my 6700K is 6MB vs 8MB cache, and that likely has almost zero effect on BI. If someone wants to buy me a 6770HQ, I'll prove it ;)

The lower-power and higher-power Skylake CPUs are very close. It's not the same as years ago when the mobile chips were a small fraction of what the desktop had.
 
Something is wrong with your numbers. How are you measuring CPU consumption?
Also remember that you cannot simply lower the clock speed on the desktop CPU and extrapolate performance to the HQ...
 
Something is wrong with your numbers. How are you measuring CPU consumption?
Also remember that you cannot simply lower the clock speed on the desktop CPU and extrapolate performance to the HQ...

100% minus the CPU idle % = total CPU usage. This is what BI shows - system total CPU usage - and it's what I'm pulling from Task Manager.

System idle at 96% = 4% total CPU usage. This is also boosted, so for real comparisons to higher loads, one must cap it and disable turbo boost.
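In other words, the measurement is just:

```python
def total_cpu_usage(idle_pct: float) -> float:
    """Task Manager shows System Idle; everything else is load."""
    return 100.0 - idle_pct

print(total_cpu_usage(96))  # 4.0 -> "System idle at 96% = 4% usage"
```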

I forgot that I need to run the IGP to get assisted decoding running. Will be a bit longer on the skylake comparison - integrated doesn't support UHD/60...
 
100% minus the CPU idle % = total CPU usage. This is what BI shows - system total CPU usage - and it's what I'm pulling from Task Manager.

System idle at 96% = 4% total CPU usage. This is also boosted, so for real comparisons to higher loads, one must cap it and disable turbo boost.

I forgot that I need to run the IGP to get assisted decoding running. Will be a bit longer on the skylake comparison - integrated doesn't support UHD/60...
There is something wrong. For example, with the console open, 10 cameras at 3MP and 20fps should put you in the upper 20s at least with that processor.
Your numbers would indicate that you can run 30 3MP cameras at 45 percent with the console open, which I can assure you is not the case.
 
Whew - have some numbers. Heading out so I will review them later.

I also found that I don't have the "cpu-to-100%-when-disabling-or-enabling-cameras" issue on this system. Will try a complete wipe and reinstall on my dedicated camera system to see if it resolves itself.

This 6700K system has a lot of software, so I disabled all startup processes and unnecessary services (msconfig.exe) before pulling these figures. It has a dedicated dual-port PCIe Intel NIC similar to my other system, 64GB of RAM, and nothing but 250GB and 960GB SSDs. Probably a dozen USB devices attached. The video card was removed so the IGP could be used. I sized the window to 1440p using the Remote Desktop window as a guide. The monitor itself is UHD; I did not test the UI at full UHD resolution.

----------------

All cams set to max bitrate each supports. Not sure if it matters, but all cams are set to high-def motion detection.

i7 6700K at 4.8GHz
1% CPU usage (over 8-threads) 2x 3MP@20fps
3% CPU usage (over 8-threads) 5x 3MP@20fps
9% CPU usage (over 8-threads) 10x 3MP@20fps
11% CPU usage (over 8-threads) 10x 3MP@20fps with BI interface open

Here's some data for everyone to chew on:
2% CPU usage (over 8-threads) 10x 1MP@20fps
9% CPU usage (over 8-threads) 10x 3MP@20fps

4% CPU usage (over 8-threads) 10x 1MP@20fps with BI interface open
11% CPU usage (over 8-threads) 10x 3MP@20fps with BI interface open

5% CPU usage (over 8-threads) 10x 1MP@20fps with BI interface open, all cameras triggered and recording
14% CPU usage (over 8-threads) 10x 3MP@20fps with BI interface open, all cameras triggered and recording


i7 6700K at 3.1GHz
1% CPU usage (over 8-threads) 2x 3MP@20fps
3% CPU usage (over 8-threads) 5x 3MP@20fps
12% CPU usage (over 8-threads) 10x 3MP@20fps
15% CPU usage (over 8-threads) 10x 3MP@20fps with BI interface open

Here's some data for everyone to chew on:
4% CPU usage (over 8-threads) 10x 1MP@20fps
12% CPU usage (over 8-threads) 10x 3MP@20fps

6% CPU usage (over 8-threads) 10x 1MP@20fps with BI interface open
15% CPU usage (over 8-threads) 10x 3MP@20fps with BI interface open

7% CPU usage (over 8-threads) 10x 1MP@20fps with BI interface open, all cameras triggered and recording
19% CPU usage (over 8-threads) 10x 3MP@20fps with BI interface open, all cameras triggered and recording


i7 6700K at 2.6GHz
1% CPU usage (over 8-threads) 2x 3MP@20fps
4% CPU usage (over 8-threads) 5x 3MP@20fps
13% CPU usage (over 8-threads) 10x 3MP@20fps
16% CPU usage (over 8-threads) 10x 3MP@20fps with BI interface open

Here's some data for everyone to chew on:
5% CPU usage (over 8-threads) 10x 1MP@20fps
13% CPU usage (over 8-threads) 10x 3MP@20fps

7% CPU usage (over 8-threads) 10x 1MP@20fps with BI interface open
16% CPU usage (over 8-threads) 10x 3MP@20fps with BI interface open

8% CPU usage (over 8-threads) 10x 1MP@20fps with BI interface open, all cameras triggered and recording
18% CPU usage (over 8-threads) 10x 3MP@20fps with BI interface open, all cameras triggered and recording
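Comparing the three 10x 3MP@20fps service-only runs against pure inverse-clock scaling from the 4.8GHz result (a naive model, just for perspective):

```python
# GHz -> % CPU for 10x 3MP@20fps, service only, from the runs above
measured = {4.8: 9, 3.1: 12, 2.6: 13}

base_ghz, base_pct = 4.8, measured[4.8]
for ghz, pct in sorted(measured.items(), reverse=True):
    naive = base_pct * base_ghz / ghz   # perfect inverse-clock prediction
    print(f"{ghz} GHz: measured {pct}%, naive model predicts {naive:.1f}%")

# Measured load rises less than the clock drop alone would predict,
# consistent with Quicksync carrying most of the decode work.
```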
 
I've noticed that with my original settings restored (11x 3MP@10fps, 1x 2MP@15fps, 1x 1MP@15fps), the 3770 often downclocks to 2GHz for a second before shooting back up to 3.7-3.9GHz. The load really is low.

I also remembered that I have memory-compression disabled on Windows 10. Not sure if it matters, but I do this on most systems if there is a lot of RAM. I noticed at work that a VM with 4GB has high CPU usage on memory compression, so it's something I tend to disable when I can.