BlueIris on VMware ESXi

Because BlueIris runs on Windows, it is VERY important to make sure that in your VM you have selected 1 CPU socket and multiple cores per socket (e.g. 1 socket, 8 cores).
By default, ESXi creates the VM with 8 sockets and 1 core per socket.
My CPU utilisation went from 100% to 25% after that change (4 cores on 1 socket instead of 4 sockets with 1 core each).
Hope this helps.
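For reference, the same change can be made directly in the VM's .vmx file. A minimal sketch, assuming the standard VMX keys (`numvcpus` and `cpuid.coresPerSocket` are the real setting names; the values here are illustrative, not taken from the post):

```
# Illustrative .vmx fragment: 4 vCPUs presented as 1 socket x 4 cores.
numvcpus = "4"
cpuid.coresPerSocket = "4"
# With cpuid.coresPerSocket = "1" (the default), the same 4 vCPUs
# would appear to Windows as 4 single-core sockets.
```

The VM must be powered off before editing topology settings, and the GUI (vSphere Client or Host Client) is the safer place to make the change where it's available.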

Yep, silly Windows licensing restrictions. I typically set up all VMs in ESXi as 1 socket with many cores, even if they are going to be Linux VMs. As I understand it, there is no performance difference either way.
 

Thanks, this made a big difference for me as well, using ESXi 6.
 
I've just come across this thread and was curious about this comment. I currently run BI v5 (10 cams) in a Windows Server 2022 VM on ESXi 6.7.0. The CPU is a Xeon E3-1265L, with 8GB of the available 16GB and 4 vCPUs allocated to this VM. I also run two other VMs: Lubuntu 20.04 LTS and Windows 10.

BI has been running fine - it chugs along at <40% CPU usage, periodically jumping to 70-80% for relatively short periods. However, since the Windows 10 VM has always struggled with high CPU and disk usage, I thought I'd try this tip, so I reduced the BI VM to a single vCPU. This basically rendered it unusable, with the CPU locked at 100% and BI's share of that around 80%.

I've reverted to the original 4 x vCPU allocation and everything is fine again, but I was wondering if anyone had any theories for why my experience of allocating a single vCPU was so different to that of others here?
 

I think you misread the settings. In VMware you have two options: the number of sockets and the number of cores per socket. johanpm said that instead of 4 sockets/1 core, he found 1 socket/4 cores was much faster.

It sounds like you set your VM to 1 core/1 socket, and that would be pretty bad. My settings are attached; this is a Mac mini with only one physical CPU, so the settings make sense for my environment.
 

[Attachment: Screenshot 2024-08-21 at 8.50.38 AM.png]
Hmmm... I don't seem to have the same options:
[Attachment: esxi cpu.png]
 
My image was from the main screen, not the individual VM configuration. It's the overall settings.
It seems that you have more control over cores and sockets. I only have the option to change the number of vCPUs in the ESXi interface, not the number of sockets. So if I allocate 1 vCPU I get 1 core and 1 socket; if I allocate 2 vCPUs I get 2 cores and 2 sockets. The number of cores per socket is always 1 - I don't see a way of changing that.
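That behaviour matches how the two underlying VMX keys interact: when `cpuid.coresPerSocket` is left at its default of 1, every vCPU you allocate becomes its own single-core socket. A small sketch of that topology math (the key names are the real VMX settings; the parsing and values are illustrative, not from the thread):

```python
def topology(vmx_text: str):
    """Return (sockets, cores_per_socket) implied by a .vmx-style fragment."""
    cfg = {}
    for line in vmx_text.splitlines():
        if "=" in line:
            key, value = line.split("=", 1)
            cfg[key.strip()] = value.strip().strip('"')
    vcpus = int(cfg.get("numvcpus", 1))
    cores_per_socket = int(cfg.get("cpuid.coresPerSocket", 1))  # default: 1
    return vcpus // cores_per_socket, cores_per_socket

# 4 vCPUs, cores-per-socket left at 1 -> Windows sees 4 single-core sockets
print(topology('numvcpus = "4"\ncpuid.coresPerSocket = "1"'))  # (4, 1)
# 4 vCPUs packed into one socket -> Windows sees 1 socket x 4 cores
print(topology('numvcpus = "4"\ncpuid.coresPerSocket = "4"'))  # (1, 4)
```

This is why a UI that only exposes a vCPU count, with cores-per-socket fixed at 1, always produces the multi-socket layout the earlier posts warn about.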