I appreciate it varies by resolution, frame rate and CPU, but I wonder what sort of drop in CPU use people see when enabling hardware decoding where it's supported?

The reason I ask is that I am planning the next generation of servers for my home infrastructure. Currently I have two PCs in a server role running VMs for core services such as file server, media streaming, version control, network controller/firewall and BI. The use of VMs has prevented hardware decoding in the tests I've tried to date, hence I am wondering whether to add a third, dedicated PC for BI. In practice, though, the drop in CPU use would need to be significant to justify the overhead of yet another device. I am not CPU limited with BI currently, so that's not a factor either.
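
(As an aside, for anyone wanting to repeat the VM test: one quick way to confirm whether the hypervisor is exposing any decode hardware to a guest at all is to ask ffmpeg inside the VM which acceleration methods it can see, along the lines of the rough sketch below. This assumes ffmpeg is installed in the guest and on the PATH; the qsv/cuda/d3d11va names are just the common accelerator types, not anything specific to my setup.)

```python
# Rough check, run inside the VM: list the hardware decode methods ffmpeg can see.
# Assumes ffmpeg is installed and on the PATH; accelerator names below are illustrative.
import subprocess

def list_hwaccels():
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", "-hwaccels"],
        capture_output=True, text=True, check=True
    ).stdout
    # Output is a header line followed by one accelerator name per line.
    return [line.strip() for line in out.splitlines()[1:] if line.strip()]

if __name__ == "__main__":
    accels = list_hwaccels()
    print("Accelerators visible to ffmpeg:", ", ".join(accels) or "none")
    for wanted in ("qsv", "cuda", "d3d11va"):
        print(f"{wanted}: {'available' if wanted in accels else 'not exposed'}")
```

If nothing beyond the software paths shows up, the guest simply isn't being given a GPU/iGPU to decode with, regardless of what the host has.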