Anyone running 8th gen Intel without memory leak problems in Blue Iris?

I'll play devil's advocate: are these problems really lying at Intel's doorstep, or is it possible that it's something Ken needs to change so BI plays nice?
Just curious.
 
I'll play devil's advocate: are these problems really lying at Intel's doorstep, or is it possible that it's something Ken needs to change so BI plays nice?
Just curious.

I have no way of knowing that, unfortunately. It could be either way.

Fortunately, I have found a (hacky) solution that appears to work around this problem! It involves manually copying some files from the older .4664 driver release.

I've updated the wiki page with the details and a zip file containing the necessary files: Memory Leak: Quick Sync (Hardware Acceleration) | IP Cam Talk
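If you'd rather script the file copy than do it by hand, here's a rough sketch of the idea. The source folder and DLL names below are placeholders, and dropping the files into the Blue Iris program folder is just my reading of the workaround; the wiki page above has the actual file list and steps.

```python
# Rough sketch of the workaround: copy DLLs from the older .4664 driver
# package so Blue Iris picks them up instead of the leaky newer ones.
# The paths below are placeholders, and "copy into the BI program folder"
# is my reading of the fix; see the wiki page for the real file list/steps.
import shutil
from pathlib import Path

OLD_DRIVER_FILES = Path(r"C:\Temp\intel_4664_extracted")   # placeholder: extracted .4664 files
BLUE_IRIS_DIR = Path(r"C:\Program Files\Blue Iris 4")      # adjust to your install

def copy_old_driver_files() -> None:
    for dll in sorted(OLD_DRIVER_FILES.glob("*.dll")):
        dest = BLUE_IRIS_DIR / dll.name
        print(f"Copying {dll.name} -> {dest}")
        shutil.copy2(dll, dest)   # preserves timestamps; run elevated with BI stopped

if __name__ == "__main__":
    copy_old_driver_files()
```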
 
I'd love to see BI support Nvidia CUDA/HEVC encoding/decoding. I assume Intel will address the problem in their chips, but if we had support for Nvidia we'd have a lot more flexibility. Intel hasn't been great about addressing problems with their video drivers. I bought one of the Skull Canyon NUCs and had nothing but problems getting it to properly run 4K video. Moved to an Nvidia Shield TV and have never looked back. It's my daily driver for media playback.
 
I'll play devil's advocate: are these problems really lying at Intel's doorstep, or is it possible that it's something Ken needs to change so BI plays nice?
Just curious.
There are probably a few ways to look at it. Since the leak appeared after Intel updated their drivers, it's more likely their problem. And since BI hasn't been updated to "fix" the leak (which I assume it would be if a fix were relatively easy), my guess is that it's an Intel issue and there's no way for BI to address it other than to turn off Intel Quick Sync support.

Another possibility is that Intel changed the set of APIs that they publish and BI uses for Quick Sync services. If Intel changed those APIs, we'd expect them to have documented those changes. However, based on the threads on their forums, they can't figure out what is wrong. This leads me to believe they introduced a bug or bugs in their code that is causing the leak, and that it's not BI improperly using the APIs.

Given how great BI is (I'm completely sold on it over Synology btw, so many features!), I have to believe he'd update it to address the issue if he could.
 
I'd love to see BI support Nvidia CUDA/HEVC encoding/decoding. I assume Intel will address the problem in their chips, but if we had support for Nvidia we'd have a lot more flexibility.

I'm all in favor of more hardware acceleration choices, but I'd place more priority on fully utilizing Quick Sync's capabilities. Dedicated graphics cards can add quite a bit to power consumption, and most of the really low-power cards are several generations old.
 
I'm all in favor of more hardware acceleration choices, but I'd place more priority on fully utilizing Quick Sync's capabilities. Dedicated graphics cards can add quite a bit to power consumption, and most of the really low-power cards are several generations old.
For me it's a bit of a selfish reason :) My current server (power hog that it is) is a Xeon platform (dual E5-2650 v3) that doesn't support Intel HD Graphics. I'll likely be setting up a dedicated lower-power system for the NVR though, so it will be moot at that point.
 
Hiya Folks...I thought I would chime in here (again)! :lol:

After a bit of back and forth with bp2008 and some others on another thread (talking about my i9-7980XE build), I decided to take bp's advice and try my luck with an 8th gen Core i7 (i7-8700K) and 32 GB of RAM to power my Blue Iris setup with 33 cameras. After doing a fresh install of Windows 10 (64-bit) and installing the graphics drivers following all of the threads here to solve the memory leak issues, I am sad to say that this Core i7 cannot handle the load. At first I thought it was the graphics driver issue, so I meticulously reviewed all of the relevant posts on the forum and was able to rule that out as a variable. But in the end, it seems pretty clear that Quick Sync may be perfectly fine for a decent number of cameras, but once you go above a certain number it just falls short. Now...to be clear, I am forced to use the "limit decoding" feature on 20 cameras on both the i9 and the i7 setups to run all 33 cameras. But the difference in CPU usage is DRAMATIC. On the Core i9, CPU usage is 25%. On the Core i7, it's around 78% (with hardware acceleration/Quick Sync).

So, my conclusion remains the same: I believe I've hit the practical limit on the number of cameras I can have (actually exceeded it, if not for limit decoding), and the number of cores really does seem to make a difference! I'm no expert, just going by what I see and can readily compare between the two systems.

Anyway, hopefully this is helpful. What I WILL do with this new i7 build is relocate it to my place in Arizona, where I'm only running about a dozen cameras. I'm sure it'll handle that just fine!

Cheers!
 

I'm sure the i7 would handle the load if you cut the frame rates down to the 10-15 FPS area (particularly for those 8 MP cameras!). My experimentation shows Quick Sync doubling or tripling the amount of video data you can process at any given CPU usage %, but I also have evidence that my ~800 MP/s load might be almost 50% of the capabilities of Quick Sync on this CPU.

The yellow line (in the mid-40s) is, as far as I can tell, a % usage of the Intel GPU's decoding capabilities. I don't know why it steps down and up on this graph, though. The significantly reduced CPU usage parts are when I had the GUI closed, and the spikes occurred whenever I added or removed a performance counter.

[graph: CPU usage and Intel GPU video decode usage over time]

Since your load was ~3400 MP/s, that is probably about double what Quick Sync on the i7-8700K is capable of.
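In case the MP/s figure isn't obvious: it's just each camera's resolution in megapixels multiplied by its frame rate, summed over all cameras. A quick back-of-the-envelope sketch (the camera list and the Quick Sync ceiling below are made-up/assumed numbers, not anyone's real setup):

```python
# Back-of-the-envelope: total decode load in megapixels per second.
# The camera list is made up for illustration only.
cameras = [
    # (megapixels, frames per second)
    (8.3, 15),   # 4K / 8 MP camera at 15 FPS
    (4.0, 15),
    (2.1, 30),   # 1080p camera still at 30 FPS
]

total_mps = sum(mp * fps for mp, fps in cameras)
print(f"Total load: {total_mps:.0f} MP/s")   # ~248 MP/s for this example

# Rough capacity check against my guess that Quick Sync on this CPU tops out
# somewhere around 1600-1700 MP/s (my ~800 MP/s load looked like about half).
QUICK_SYNC_ESTIMATE = 1650  # assumed ceiling, not a measured spec
print(f"Estimated share of Quick Sync capacity: {100 * total_mps / QUICK_SYNC_ESTIMATE:.0f}%")
```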

By the way, I notice in your BiUpdateHelper data you didn't have direct to disc enabled on any of the cameras. I assume that has been remedied since 10 days ago then?
 
Hey there! Thanks as always for responding so quickly. I have the (Global) FPS set to 15, but are you saying I need to do a similar setting on each camera?

Regarding direct to disc, I should say that for this test (and the i9 setup) I’m not doing any recording just yet! The system is merely displaying the cameras as I run the tests.
 
What global FPS are you talking about? Blue Iris cannot control the frame rate of video that it receives from cameras. That can only be controlled by the cameras themselves.

Lower frame rates give an immediate and very significant reduction in CPU usage. Of course, higher frame rates are more pleasing to the eye, so usually around here we recommend a maximum of 15 FPS, since that is generally considered smooth enough and it is still at least double what is really necessary for decent video surveillance.
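To put rough, purely illustrative numbers on it: decode load scales more or less linearly with frame rate, so halving the frame rate roughly halves the decode work for that camera.

```python
# Purely illustrative: per-camera decode load scales roughly linearly with frame rate.
mp = 8.0  # nominal 8 MP (4K) camera
for fps in (30, 15, 10):
    print(f"{fps:>2} FPS -> ~{mp * fps:.0f} MP/s for this one camera")
# 30 FPS -> ~240 MP/s, 15 FPS -> ~120 MP/s, 10 FPS -> ~80 MP/s
```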

I believe direct to disc has a meaningful impact on CPU usage even when you are not recording, though that impression is from a long time ago and I admit I have not tested it recently.
 
Sorry bp, I never seemed to put two and two together (after all this time), kind of embarrassed to admit that!

I’ll do that as soon as I get home and report back!
 
Hey bp...well, I ran the tests and have some results to share. And before I go any further, I don't want you to think I'm trying to get you to "IT support" my issues! I'm really not, but it seems like you are "somewhat" interested, so it's in that spirit that I share this stuff with you.

As a reminder, on both my i9 and i7 systems I have 20 cameras set to limit decode...just want to establish that as a baseline (for now).

So...the first thing I did was cherry-pick 11 of the 33 cameras in my system to modify. I went into each camera's web interface and lowered them all down to 15 FPS. Please note that some of them were set at 20 FPS, but about 4 of them were set to 30 FPS.

The resulting change in usage on my i9 system was negligible! It went from 26% to 24%. However, on the i7 it went from 80% down to 35%! A very big change and certainly a huge improvement! So thanks for that!!!!

Part 2 of this test was to start unchecking "limit decode" (on the Core i7) to see how far I could go with this. As I started unchecking the first 3-4 cameras, I noticed a "measurable" bump, but it's not like it spiked way up. But around the 5th camera it was back up into the 70s, and by the time I unchecked the 7th camera, the system was back in the 99% range and holding there! So I had to go back and start re-enabling the feature to bring it back down to a reasonable load.

So that's where I'm at for now! The frame rate change was a big help, and something I will manage carefully going forward. The next thing I will try is to take all the cameras down to 15 FPS and measure that. Will see how I fare from there!
 
I'm very interested in better understanding all the performance implications of Blue Iris, so by all means your information is welcome.

Some months ago I was trying to help someone with an i7-6700 whose system was inexplicably getting bogged down with just 7 cameras, 6 of them at 4 MP and 1 at 2 MP I think. With only 4 of those cameras enabled, CPU usage would be around 20%. But it seemed like if it went much further it would just shoot up to 100% and the system would become barely responsive. It didn't matter which cameras were enabled and which were disabled; somewhere around the 400 MP/s mark it just went crazy. In the end, setting all the cameras to 15 FPS was an effective solution. To this day I don't know what was going on with that system, and I don't know what would happen if more 15 FPS cameras were added.
 
Many thanks, bp! So...I just published my performance stats using BiUpdateHelper. I'd be curious to hear what you think!

Edit: Just noticed stats from another i7 that seems to be doing much better than mine! I am super puzzled now as to why they are at 62% and I'm at 97%, and they have 35 cameras! WTH?????
 
Well, you still haven't turned on direct to disc.

Also, if any cameras are duplicated in Blue Iris, the duplicates will use minimal resources, but BiUpdateHelper's report won't know they are duplicates, so that could make the numbers misleading.
 
It also looks like all the cams you set to 15 FPS are using Limit Decode, so I'm surprised you saw an effect at all.
 
Direct to disc is of the utmost importance because of its effect on CPU usage, and also because without it your recording quality suffers as Blue Iris recompresses the video stream. Do note, however, that direct to disc does not work if you haven't activated BI yet.
 
Well...that's true, but I had no choice because the system became completely unusable if I didn't use limit decode.

But you're right...I need to try the direct to disc setting. But to be clear, there are no duplicate cameras, so we can eliminate that.
 
Well...that's true, but I had no choice because the system became completely unusable if I didn't use limit decode.

The current implementation of limit decode can't really make your system capable of handling more cameras. The moment you open a mobile app or web interface to the all cameras view, limit decode is effectively turned off. For this reason you should tune your system to be stable even without limit decode.

While limit decode is turned on and active for a camera, changing its frame rate won't have much of an impact on CPU usage.
 
Hey bp, so I enabled direct to disc on every camera and refreshed the performance stats online. No change in the CPU usage though.

Is BI activated on that machine? As I said earlier, direct to disc doesn't work when BI is in evaluation mode because Ken didn't want you to be able to record without the evaluation notice being embedded into the clips.
 