Hello guys,
I have spent some time looking around the forums at posts about why virtualization is not recommended for Blue Iris, or any other CCTV system. But I want to ask about my point of view on this.
From what I can see, virtualization is simply not recommended because you lose the ability to use hardware acceleration. But that doesn't seem to be entirely true; I will give you three virtualization setups I have been thinking about:
1. On various hypervisors you can pass the Intel iGPU through to a virtual machine, so the VM can use Intel Quick Sync for hardware acceleration. This does create one issue: it can take away your ability to manage the hypervisor through VGA console access. But that seems solvable by adding another cheap internal GPU for the console, which doesn't look like a big deal. (See the sketch after this list for what this setup can look like in practice.)
2. Another setup I have seen around is buying a Quadro P2000, which supports multiple simultaneous transcoding streams, passing it through to a VM to handle the video work, and leaving the Intel iGPU to handle the console.
3. The last setup I have seen around is unlocking consumer GTX GPUs with patched drivers, to remove the limit on concurrent encode streams, and then passing them through to the VMs as in the previous example.
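
To make option 1 concrete, here is roughly what it looks like on Proxmox. This is just a sketch, not something I have tested; the VM ID 100 and the PCI address 00:02.0 are placeholders, so check yours with lspci:

    # /etc/default/grub: enable the IOMMU, then run update-grub and reboot
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    # /etc/modules: load the vfio modules at boot
    vfio
    vfio_iommu_type1
    vfio_pci

    # pass the iGPU (usually at 00:02.0) through to VM 100
    qm set 100 -hostpci0 0000:00:02.0

After that, inside the VM you would install the Intel graphics driver and turn on Intel hardware-accelerated decoding in Blue Iris.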
Talking about electricity costs, the first example seems to be the cheapest one. So I just wanted to ask here: why should we run dedicated hardware, which can end up using more electricity, rather than use virtualization, if we already have a server running 24/7 for that?
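
To put rough numbers on that (these are ballpark assumptions, not measurements): if a discrete card like the P2000 draws, say, 50 W more on average than the iGPU, that's 50 W × 8760 h ≈ 438 kWh per year, or roughly $66 a year at $0.15/kWh, for a box that never switches off.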
Kind regards