The last I knew Blue Iris only supports Nvidia and Intel Quick Sync GPU acceleration, NOT AMD

cdoublejj · Apr 19, 2019
The last I knew, Blue Iris only supported Nvidia and Intel Quick Sync GPU acceleration, NOT AMD GPU acceleration. Has this changed?
 
That is right. AMD's acceleration never got implemented as far as I know. There are some generic methods in the dropdown list, "DirectX VA2" and "Direct3D11 VA", but it is not clear to me that those ever did anything.
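Those two dropdown entries refer to Windows' generic video decode APIs (DXVA2 and D3D11 Video Acceleration). As a rough sanity check outside of BI, you can see which of these decode paths your local ffmpeg build exposes; this assumes ffmpeg is installed and on PATH, and an entry in the list only means the API is compiled in, not that BI will use it or that it will help:

```shell
# List the hardware decode APIs this ffmpeg build was compiled with.
# A typical Windows build reports entries such as dxva2, d3d11va, and qsv.
ffmpeg -hwaccels
```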

For a few years now, hardware acceleration has not really been worth the trouble and the extra bugs it brings. Or, in Nvidia's case, the extra power consumption. At least that is my opinion and the opinion of many others here. As long as you are running substreams properly, CPU usage should be quite low even without hardware-accelerated decoding.
 
Yet open source software like Jellyfin has reliable HW transcoding on all 3 GPU brands. Perhaps the BI team is cooking up a whole new BI in the background. I know Ubiquiti lagged behind to the point of sucking arse for a few years before they dropped some bombshell game-changer updates.
 
The BI team is just one guy and I do not think hardware acceleration is a priority anymore. Unlike something like Jellyfin where a bunch of passionate people work on it and want it to transcode as efficiently as possible.
 
As mentioned, hardware acceleration isn't really needed anymore and it can be an energy hog.

Around the time AI was introduced in BI, many here had their systems become unstable with hardware acceleration (hardware decode via Quick Sync) turned on, even if they were not using DeepStack or CodeProject. Some have also been fine. I started to see errors with hardware acceleration several updates after AI was added.

This hits everyone at a different point. Some had their system go wonky immediately, some after a specific update, and some still don't have a problem, but the trend shows that running hardware acceleration will result in a problem at some point.

However, with substreams, the CPU% needed to hand video off to a GPU (integrated or discrete) is more than the CPU% saved by offloading the decode. Especially past about 12 cameras, CPU usage actually goes up with hardware acceleration. The wiki points this out as well.
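The effect above can be sketched with a purely illustrative toy model. Every name and number here (`GPU_SESSION_LIMIT`, the per-camera costs, the saturation behavior) is a made-up assumption for illustration, not a measurement from BI: the idea is that GPU decode is cheap per camera until the GPU's decode capacity saturates, after which the driver-side hand-off starts burning more CPU than plain software decode of substreams would:

```python
# Toy model: ALL constants are made-up assumptions, not measurements.
GPU_SESSION_LIMIT = 12   # assumed cameras the GPU decodes without strain
SW_COST = 1.0            # assumed CPU % to software-decode one substream
HW_COST = 0.4            # assumed CPU % per camera to hand frames to the GPU
OVERFLOW_COST = 3.0      # assumed extra CPU % per camera past the limit

def cpu_software(cameras: int) -> float:
    """CPU % with plain software decoding of substreams."""
    return cameras * SW_COST

def cpu_hardware(cameras: int) -> float:
    """CPU % with hardware decode: cheap until the GPU saturates."""
    within = min(cameras, GPU_SESSION_LIMIT)
    over = max(0, cameras - GPU_SESSION_LIMIT)
    return within * HW_COST + over * (HW_COST + OVERFLOW_COST)

for n in (4, 12, 20):
    print(f"{n:2d} cams  sw={cpu_software(n):5.1f}%  hw={cpu_hardware(n):5.1f}%")
```

Under these assumed numbers, hardware decode wins with a handful of cameras but loses once the camera count passes the saturation point, which matches the rough "after about 12 cameras" observation above.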

Plus, substreams open up the possibility for older machines to be just fine, along with non-Intel computers.

My CPU % went down by not using hardware acceleration.

Here is a recent thread where someone turned off hardware acceleration based on my post; their CPU dropped 10-15% and BI became stable.

But if you use HA, use plain Intel and not the variants.

Some still don't have a problem, but it may eventually cause one.

Here is a sampling of recent threads where turning off HA fixed the issues people were having:

No hardware acceleration with subs?

Hardware decoding just increases GPU usage?

Can't enable HA on one camera + high Bitrate
And as always, YMMV.