Another Blue Iris CPU post... opinion

Jaceon

Young grasshopper
Joined
Mar 28, 2015
Messages
77
Reaction score
15
I know this is an over-talked subject, but I wouldn't mind a quick opinion.

I have... well... the system listed in my signature, and I really need to upgrade the CPU. My CPU, although an 8-core/16-thread chip, is some cheap Chinese server version or something and is only clocked at 1.6 GHz.

I'm currently at about 36 cameras, and I've had to seriously cut fps/resolution back to keep the system from being maxed out. Most are also currently running H.265+ (Hiks & Dahuas), which is nice for storage, and this CPU can't use HA anyway. I would like to be able to crank most of them back up to 4MP/8fps. Doing that would destroy this CPU though...
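Just to put a number on what I'd be asking for, here's my back-of-the-envelope math (assuming 4MP works out to roughly 2560x1440 and all 36 cams get turned back up - these are my guesses, not measurements):

Code:
# Rough megapixels-per-second load at the target settings.
# Assumptions (mine): 4MP ~= 2560x1440, 36 cameras, 8 fps each.
width, height = 2560, 1440
fps = 8
cameras = 36

mp_per_frame = width * height / 1_000_000        # ~3.7 MP per frame
mp_per_sec_per_cam = mp_per_frame * fps          # ~29.5 MP/s per camera
total_mp_per_sec = mp_per_sec_per_cam * cameras  # ~1060 MP/s total

print(f"Per camera: {mp_per_sec_per_cam:.1f} MP/s")
print(f"Total:      {total_mp_per_sec:.0f} MP/s")

So north of 1000 MP/s if I cranked everything back up, which is why this CPU can't come close.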

Obviously an i9-7980XE would be a dream, but that's kinda $$$$$$$.

The main setups I was debating for the $ I want to spend are... an i9-9900K, an i7-6950X, or maybe a Ryzen Threadripper 1950X with a CUDA GPU?

Do any of these stand out to anyone as an obvious choice? The Threadripper, although power hungry, seems the most capable. Some of the Ryzen results I've seen in the BI Update Helper stats don't seem very impressive, though. Also, from managing a bunch of PCs I've become a little Intel-biased, since Intel has always seemed to outperform, so I'm dealing with that mental bias lol... The AMD platform is probably also a little more future-proof than the others, I'd think... Idk, it's giving me a headache, so I figured I'd seek an opinion =)
Thanks
 

bp2008

Staff member
Joined
Mar 10, 2014
Messages
12,666
Reaction score
14,005
Location
USA
1.6 GHz you say. That is well below the official spec. Hmm.

i9-9900K should be a strong CPU for Blue Iris. It may come with some challenges though.

First, I don't recall seeing anyone try one yet with BI so it is unknown if there are any hardware acceleration problems like memory leaks or steadily rising CPU usage. I would assume H.264 would work as usual but not H.265.

Second, that CPU is known to run quite hot due to how many cores it runs at such high frequencies. For a continuous load like BI creates, you will want good cooling. I can't say how expensive a cooler would be justified, but I certainly wouldn't use something very cheap.

I'd skip the old HEDT processors like the i7-6950X. If you're going to spend that kind of money on a CPU that can't do hardware acceleration, you might as well make it a Threadripper.

If using a CPU without integrated graphics, or if spending a lot of time rendering the BI console at 4K resolution, I'd go with a very low power GPU like a GT 1030 (or a GT 710 if not rendering at 4K). CUDA acceleration is by far the least efficient way to decode video in Blue Iris so I wouldn't use that except as a last resort to get more performance out of a single box.
 

Jaceon

Young grasshopper
Joined
Mar 28, 2015
Messages
77
Reaction score
15
Yes, you read that right... 1.6 GHz. I def blame some of my issues on that =)

Anyway, the i9-9900K was my first choice. I'd assume my only hope of getting it to work well with so many cameras would be to switch all my cameras back to H.264 and invest a little more in storage, eh? That's the way I'd prefer anyway, plus maybe there's a super small chance BI could technically support H.265 decoding in the future if the devs wanted to?

I currently have a liquid cooler I run outside the server rack to dump heat, so I think I can handle the heat...

I guess I was just tempted by the fact that the Threadripper has 16 cores instead of 8 for a similar price... which, with BI, seemed appealing...
 

bp2008

Staff member
Joined
Mar 10, 2014
Messages
12,666
Reaction score
14,005
Location
USA
I think a 16-core Threadripper will outperform an 8-core i9-9900K, especially for H.265 video. Probably for H.264 too, but likely not by a very wide margin, and I'd be shocked if it was on par with the Intel option in terms of efficiency, given the lack of hardware acceleration for video decoding.

There's always a small chance that either Intel or Blue Iris will find a way to make H.265 decoding acceleration work. I've spoken to the BI dev about it, and he believed everything was being done properly on his end and that it must be Intel's problem. I'd try to speak to Intel about it, but yeah right, they'd just ignore me like they do everyone else. You can reasonably count on H.264 decoding acceleration working, since that has been working for years.
 

Jaceon

Young grasshopper
Joined
Mar 28, 2015
Messages
77
Reaction score
15
CUDA acceleration is by far the least efficient way to decode video in Blue Iris so I wouldn't use that except as a last resort to get more performance out of a single box.
I had asked Ken a while ago for his thoughts on using CUDA or Intel for acceleration and he just replied now... this is part of his answer:

"My personal experience with Nvidia has been excellent ... there are only very limited support instances. Intel decoding on the other hand seems problematic ... you have to find a driver version that is stable for your chipset.
I would recommend a nice Nvidia card and i7/i9 processor. That way, you can use both actually."

So maybe Nvidia is less efficient but more stable? If I combine both your responses lol... Anyway, I guess that gives me an idea: I could just spend $70 on a card and try it out, to buy me some time to shop for a CPU.

Or maybe I should just wait till 2020, when Intel plans on jumping into the video card market, and then I can have a cheap Threadripper with an Intel video card haha maybe
 

bp2008

Staff member
Joined
Mar 10, 2014
Messages
12,666
Reaction score
14,005
Location
USA
Nvidia's acceleration tends to "just work" with both H.264 and H.265 whereas Intel drivers are often buggy and people struggle with finding a good version and getting it installed and keeping Windows from replacing it with another bad version. Ken doesn't need to spend as much time answering emails from frustrated customers when they use Nvidia acceleration instead of Intel, so of course he recommends it.

But an Nvidia GPU for hardware acceleration just isn't a good value. The GPUs are expensive to buy and expensive to run due to high power consumption. Every video stream you decode with an Nvidia GPU costs you significantly more energy than if you decoded it with Intel Quick Sync or with the plain old un-accelerated software decoder.

The only time an Nvidia GPU makes sense is if you can afford the inefficiency and just want your existing system to handle more video without choking. Only you can make that decision. I wouldn't recommend a $70 GT 1030 card if you're going to use it to decode a lot of cameras though. The 1030 isn't very powerful and won't be able to handle more than a few hundred megapixels per second, plus it can't do hardware accelerated encoding (only decoding). The most cost-effective model at this time is probably the GTX 1050 Ti, followed by the GTX 1060.
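To put a rough number on the running cost side: this is only a sketch with assumed figures (the extra wall draw from a discrete card doing 24/7 decoding varies a lot by card and load, so I'm using 20-50W as a plausible range and $0.12/kWh for electricity), but it shows how the inefficiency adds up on an always-on box:

Code:
# Sketch: yearly electricity cost of extra GPU power draw on a 24/7 Blue Iris machine.
# Assumed numbers, not measurements: 20-50 W of additional draw, $0.12 per kWh.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.12  # USD, assumption

for extra_watts in (20, 50):
    kwh_per_year = extra_watts * HOURS_PER_YEAR / 1000
    cost_per_year = kwh_per_year * PRICE_PER_KWH
    print(f"{extra_watts} W extra -> {kwh_per_year:.0f} kWh/yr -> ${cost_per_year:.0f}/yr")

# 20 W extra -> ~175 kWh/yr -> ~$21/yr
# 50 W extra -> ~438 kWh/yr -> ~$53/yr

Not ruinous on its own, but it is a recurring cost on top of the purchase price, for work Quick Sync or the plain software decoder would do more cheaply.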
 

Jaceon

Young grasshopper
Joined
Mar 28, 2015
Messages
77
Reaction score
15
As a follow-up... for now I ended up just doing an in-place upgrade of my 8-core 1.6 GHz CPU to a 10-core 2.4 GHz E5-2658 v2 (finding one on eBay for $120 convinced me), and WOW...

Of course I still have my cameras somewhat on lower settings, but I was surprised that it cut my CPU usage in half: from about 80% down to around 40%, and things are much snappier. Running the GUI on a 1080p display via RDP used to max out my CPU; now it only adds about 3%. The downside is I'm pulling about 180W...

I decided to gamble that, with Intel rumored to be coming out with a new 10-core next year and the market suddenly getting more competitive, I should be able to wait and build an actually efficient machine for much less in a year or two.
 
Joined
Apr 26, 2016
Messages
1,090
Reaction score
852
Location
Colorado
For the first time I had started looking at AMD CPUs, mostly due to the cores/HT/frequency vs. cost proposition. I know that in the BI world the Intel acceleration is a big contributor to performance, but it will be exciting to see if AMD keeps up the march towards high cores/low cost and at least puts some pressure on Intel to bring their rebranded, same-basic-technology i9 processors down to a reasonable price.

Seriously tempted to do my next machine as AMD just to see if there is finally a reason, more as a project than any serious actual need (test with gaming, BI, Folding@home, a Minecraft/ARK Survival server or something). For YEARS AMD has just been a giant heater with crappier performance for about the same or slightly less money; it feels like things might be changing in their favor as Intel continues to struggle with their die shrink.

I know the argument against Nvidia cards for BI was the initial cost plus the watt cost to do the same thing as Quick Sync, but if software decoding keeps improving, at what point are enough cores the solution? I guess Radeon H.264/HEVC decoding (AMD's Quick Sync alternative) might be another scenario, but I read that it is garbage.

However, with some of the stupidly big builds some guys here have, I wonder if they will ever benefit from AMD chasing cores (32 cores/64 threads!) if cost isn't a primary concern.

If the leaks are to be believed, more performance per watt and more cores are on the way from AMD at a lower price than Intel.
 
Last edited:

MrGlasspoole

Young grasshopper
Joined
Mar 10, 2018
Messages
66
Reaction score
17
I really don't understand that BI CPU thing.
If I look at Ubiquiti, people seem to run 20 cams on dual-core Celerons with their video software.
 

fenderman

Staff member
Joined
Mar 9, 2014
Messages
36,897
Reaction score
21,250
I really don't understand that BI CPU thing.
If I look at Ubiquiti, people seem to run 20 cams on dual-core Celerons with their video software.
That is because the Ubiquiti server does not decode or analyze the video - just like NVRs. It uses the cameras' inferior motion detection. This can also be done in Blue Iris using the limit decoding feature. It also does not display the cameras' main streams in a matrix view like Blue Iris does.
Ubiquiti is overpriced, locked-down crap. Most Blue Iris setups can be run on a $100 i5-3570. If you are willing to spend extra, there are WAY better options than overpaying for inferior cameras. Look at the Network Optix / Digital Watchdog VMS, at $70 per camera license.
 

MrGlasspoole

Young grasshopper
Joined
Mar 10, 2018
Messages
66
Reaction score
17
It uses the cameras' inferior motion detection.
Are you sure? Because if you run the cams standalone, there is no motion detection.
There are no settings in the cams.

It also does not display the cameras' main streams in a matrix view like Blue Iris does.
You mean that if you view multiple cams, they use 1024x576 or 640x380?

I would check that, but I'm too lazy to install their software. After one day of looking at it I'd had enough, once I realized there is too much that can't be done. I only keep their cam for playing around.

So let's say somebody does not need motion detection. Then what CPU is enough for BI?
 

fenderman

Staff member
Joined
Mar 9, 2014
Messages
36,897
Reaction score
21,250
Are you sure? Because if you run the cams standalone, there is no motion detection.
There are no settings in the cams.

You mean that if you view multiple cams, they use 1024x576 or 640x380?

I would check that, but I'm too lazy to install their software. After one day of looking at it I'd had enough, once I realized there is too much that can't be done. I only keep their cam for playing around.

So let's say somebody does not need motion detection. Then what CPU is enough for BI?
I'm positive. The detection is done in the camera. If it did have server-side motion detection, you would need way more horsepower.
Yes, the matrix view would be the substream, just like on NVRs.
CPU requirements for BI vary by many different factors - see the wiki. PCs are cheap enough that you should get one that will let you use BI to the fullest. Otherwise, why use Blue Iris at all?
 

Jaceon

Young grasshopper
Joined
Mar 28, 2015
Messages
77
Reaction score
15
I needed a GPU for a build anyway, so I found the cheapest used EVGA 1050 Ti I could and put it in my server to test. I just flipped the settings to use CUDA and left it this weekend. When I got around to checking, I realized Blue Iris had gone back to just using the CPU. I assume the GPU crashed, so I started over and added cameras one at a time. Eventually the GPU crashed again at about 28 cameras (I currently have the fps pretty low on most, probably 6 fps at 1080p). Anyway, I left for a while and then tried adding more cameras. I noticed this time that although they would add without crashing, the CPU/GPU loads weren't changing anymore. So I started taking cameras back off CUDA, and when I got back down to 28 cameras the CPU/GPU loads started changing again.

Long story short, I'm assuming CUDA only works until the GPU's memory is all used up? Not knowing much about GPUs, I assumed that once the on-board memory was used up it would just start using system memory, but then again idk why I assumed this...

Just confirming this is correct, since this GPU really doesn't seem to be working too hard yet, although memory usage is high.

Also, besides watching CPU/GPU usage, is there a way to tell what a camera is being decoded with?

When I was adding cameras again to see when the card would crash, for some odd reason it let me keep switching cameras to CUDA instead of crashing, although it was apparent from watching CPU/GPU usage that nothing was changing.

I attached a pic to show the GPU's levels
 

Attachments

Joined
Apr 26, 2016
Messages
1,090
Reaction score
852
Location
Colorado
By your own admission you grabbed the cheapest CUDA card you could find. :D

In this thread: 4.7.8 - August 28, 2018 - Support for the Nvidia CUDA hardware decoding, @bp2008 got to about 11 cameras before (I believe) he determined he had hit a memory limit (his 1030 card has 2GB). Assuming you have a 1050 Ti, that card should have 4GB of GDDR5 (here: GeForce GTX 1050 Ti | Specifications | GeForce), which wouldn't put 28 cameras out of the realm of reasonably hitting the memory cap. I wonder if you wouldn't have been better off with a 6GB card like the GTX 1060 6GB (or a GTX 1080 8GB, yummy!).
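If you want a rough feel for the per-stream cost, here's the napkin math from your data point - purely an estimate, since actual usage depends on resolution and whatever else is using the GPU, and not all 4GB is really available to Blue Iris:

Code:
# Napkin estimate of GPU memory per decoded stream, from the 28-camera/4GB data point.
# Assumption (rough): most of the 4 GB was available to the decoder.
vram_mb = 4 * 1024
cameras_at_limit = 28

mb_per_stream = vram_mb / cameras_at_limit
print(f"~{mb_per_stream:.0f} MB of VRAM per decoded stream")  # ~146 MB

# Same rough figure applied to a 2 GB GT 1030:
print(f"2 GB card estimate: ~{2 * 1024 / mb_per_stream:.0f} streams")
# ~14 streams, in the same ballpark as the ~11 bp2008 actually hit
# (his cameras were presumably higher resolution, so his real number landed lower)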

*edit* I see you have a Xeon you are working with. You'd probably be money ahead just getting an i5 or i7 refurb; you'd save a LOT of power and you'd add Quick Sync, which would further improve your capacity and efficiency. Not to beat a dead horse, but that $120 you spent on the Xeon might have been better put toward a $500 refurb.
 
Last edited:

Jaceon

Young grasshopper
Joined
Mar 28, 2015
Messages
77
Reaction score
15
I meant the cheapest 1050 Ti I could find =) But I needed a decent 4K GPU anyway, so I decided to throw it in our camera server to play with. I would definitely say the bigger the BI system is, the less impressive CUDA decoding seems compared to processor core power. Like I said earlier, jumping up two more cores and a faster clock dropped my CPU usage by about 40%. This 1050 Ti, which didn't cost much less than that... maybe 10%. So pretty much just confirming everything Brian and y'all have been saying about these cards =)

Still not sure why I was thinking the GPU could steal system memory :facepalm:... Since it apparently can't, that def seems to be a big limiting factor with these powerful, expensive cards for this use. The GPU was only showing 33% usage in Task Manager when it ran out of memory.

Thanks
 
Joined
Apr 26, 2016
Messages
1,090
Reaction score
852
Location
Colorado
I would definitely say the bigger the BI system is, the less impressive CUDA decoding seems compared to processor core power.
Fair enough regarding the "cheap" 1050 Ti, but this definitely appears to support others who have tested and shown that, from an efficiency standpoint (not to mention power consumption), the CUDA implementation isn't the way to go except in rare cases - as I think @bp2008 mentioned in one of his posts, "if you need to add a little capacity to an existing system that is already heavily loaded".

You probably assumed all GPUs could use system memory, because that's how the Intel GPUs typically work.

Any idea what your power consumption from the wall is (with and without the graphics card)? That is one aspect of the Blue Iris database that is missing: how much electricity these different CPUs burn to accomplish what they do. I wonder if your 10-core Xeon is consuming significantly more power than, say, an i7-8700K, which scores about the same on Passmark but also has Quick Sync and fewer (6) cores? And I guess the power consumption with the 1050 Ti is probably higher still, albeit if the GPU is only running 20-30% it isn't maxing out its current draw for sure.
 

Jaceon

Young grasshopper
Joined
Mar 28, 2015
Messages
77
Reaction score
15
I was hovering around 180W with my old Radeon R7.
With the EVGA 1050 Ti, and using about as much of it as I can via CUDA, about 200W.

So I'm kinda surprised it's only showing about 20W more...

This system isn't very efficient, but I plan on cranking my cameras back up.
From what I've been able to tell, I don't think an 8700K, even with Quick Sync, could get close to what this system can handle... albeit this box does it less efficiently.
So since I can get by with this current setup, I'm hoping for some fun new processors to play with in the upcoming years =)
 
Joined
Apr 26, 2016
Messages
1,090
Reaction score
852
Location
Colorado
I guess I don't know what your maximum MP/s rate will be after you "crank up your cameras," but there are plenty of examples in the Blue Iris Helper database (here: Blue Iris Update Helper) of an 8700K running 30 or more cameras at 1400+ MP/s, so you might be surprised. Especially if my math is right and you only got a few hundred MP/s going through the CUDA card (~26 cameras at 1080p at low FPS).
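Here's the rough math I was doing, for reference (assuming 1080p at the ~6 fps you mentioned earlier - swap in your real per-camera settings):

Code:
# The "few hundred MP/s" estimate: ~26 cameras at 1080p, ~6 fps each (assumed mix).
width, height, fps, cameras = 1920, 1080, 6, 26

mp_per_sec_per_cam = width * height * fps / 1_000_000   # ~12.4 MP/s per camera
total_mp_per_sec = mp_per_sec_per_cam * cameras         # ~323 MP/s
print(f"~{total_mp_per_sec:.0f} MP/s total")

That's well under the 1400+ MP/s some of those 8700K examples are pushing.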
 