Blue Iris in a VM? Yup, no problem!

biggen · Known around here · May 6, 2018
I have 13 cameras (12 Axis and 1 Dahua) at my business. The Axis cameras were purchased years ago, before I found this site (and Andy); the one Dahua I bought from Andy just a couple of weeks ago. I have been running the standard, free Axis Camera Companion software for years, letting the cameras record to a NAS VM I set up on my virtualization server. While this works, the ACC software is pretty terrible and very limited: it takes ages to load the cameras, and looking up clips, fast-forwarding, and rewinding through recordings is painfully slow. It was time to find another way...

So I found this site and started doing some reading. Blue Iris seems universally recommended. I quickly spun up a test Win 10 VM, installed the demo version of BI, and attached a couple of cameras. However, I was a bit disappointed in the performance: with two cameras attached, the VM was using about 50% of the two vCPUs I had allotted to it. The virtualization host was running a very old Ivy Bridge i3 with only 2C/4T, a machine I had whiteboxed together many years ago. Doing some further digging, I found that the high CPU utilization was because the VM guest has no access to the CPU's Quick Sync, which Blue Iris can use for hardware-accelerated H.264 decoding. I knew my host CPU would not be able to handle 13 cameras on top of the other VMs I run on the host for my other uses.

I had two options: dedicate a physical machine to just BI, or rebuild the virtualization host into a beefier setup and keep one host for everything. Building a dedicated machine just for camera duty was not a path I wanted to go down, so I opted for the latter. The i3 host was getting a bit long in the tooth for some of the other services I host anyway, so I figured this was the perfect time to do a "forklift" upgrade on the whole system.

New virtualization host:
  • xcp-ng (hypervisor)
  • Intel i9-9900 (8C/16T)
  • 32GB RAM
  • 2× 500GB SSDs in RAID 1 (they hold the VM storage repos plus the xcp-ng OS)
  • 12TB Seagate Exos X 7200 RPM drive (re-used from the old i3 host)
  • 6TB WD Red 5400 RPM drive (re-used from the old i3 host)
I created a new Win 10 VM, assigned it 4 vCPUs, 6GB of RAM, and 40GB of disk space, and then passed through both the 12TB and 6TB disks. Passing through the large spinning disks gives that specific VM full access to those drives, so I can set BI up to record directly to them and not to my SSDs.

I began adding cameras to the demo version of BI and was extremely surprised and happy with how the new host handled the VM load. Here is a screenshot of the VM load with all 13 cameras:

[Screenshot: VM CPU load with all 13 cameras recording]


Those four vCPUs hover around 30% usage with all 13 cameras recording. Very impressed with this setup! For some reference:
  • Six of the cameras are set up for continuous recording @ 720p 25fps, 24/7/365.
  • The remaining cameras are set up for motion-only recording @ 720p 25fps.
  • I have BI set to "Direct-to-disc" recording for all cams.
  • I have BI set for "No overlays" on all cams.
  • I have BI set to record to the 12TB drive ("New") first, until full.
  • I have BI set to move clips to the 6TB drive ("Stored") once New gets full. That gives 18TB of clip storage (rough retention math below this list).
  • I have BI set to run as a service.
  • I have "Limit decoding unless required" unchecked for all cams.
  • The host is headless. I use RDP to configure BI and Win 10, and I view the cameras and clips only through the web/mobile interfaces.
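And the promised retention math, as a rough Python sketch. The 2 Mbps per-camera average and the 25% motion duty cycle are assumptions for illustration, not measured numbers from my cameras:

# Back-of-the-envelope retention estimate for the 18TB New+Stored pool.
# ASSUMPTIONS: ~2 Mbps average per 720p/25fps camera, and motion-only
# cams recording ~25% of the time. Real bitrates vary with codec,
# scene activity, and camera settings.
continuous_cams = 6
motion_cams = 7
mbps_per_cam = 2.0
motion_duty = 0.25

total_mbps = (continuous_cams + motion_cams * motion_duty) * mbps_per_cam
tb_per_day = total_mbps / 8 * 86400 / 1e6   # Mbit/s -> MB/s -> MB/day -> TB/day
print(f"~{tb_per_day:.2f} TB/day -> ~{18 / tb_per_day:.0f} days in 18TB")

With those assumptions it comes out to roughly 0.17 TB/day, or on the order of 100+ days of clips in the 18TB pool.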
I still don't believe I'm using Quick Sync since, again, I don't think VMs have direct access to the CPU hardware. But I really don't think it matters. The processor has so much headroom here that I could double the cameras and still have room to grow by adding vCPUs. I'm more limited by storage space at this point, as the server case only has so much room inside.

It's amazing how snappy the web/mobile interfaces are. Cameras pull up extremely quickly, and moving through clips is an absolute breeze.

At any rate, I wanted to share that, sure, it's possible to run BI in a VM as long as you have enough horsepower under the hood. I'm not sure I'd recommend the setup to just anyone; it's pricey, of course. I think I threw ~$1K at it, and that's with re-using the large spinning disks I already had. If you need to host other VMs, then it's a no-brainer to consolidate everything onto one host. But if all you are doing is running some cams at home, then a dedicated physical BI machine is probably the way to go.

FYI, I had considered the newer third-gen Ryzen CPUs. The Ryzen 7 3700X, for example, has the same core count as the i9-9900 for about $100 less. However, my hypervisor (xcp-ng) is based on CentOS, and it's been my experience that Intel plays nicer with the various flavors of *nix than AMD does. So that is why I went that route.

If you have any questions, please ask. I just wanted to share a success story about virtualizing BI, since most of the stories you hear about going this route are failures.
 
Hardware acceleration might be possible if you are able to pass through the Intel GPU to the Blue Iris VM, but that often proves difficult, and of course it means your hypervisor won't be able to output any video of its own unless you add another GPU just for that. With your current load, I agree it is not necessary. 13 cameras at 1280 x 720 @ 25 FPS is only about 300 megapixels per second, but it is still pretty amazing that you are managing to process that with barely more than one CPU core and no hardware acceleration.
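If anyone wants to check that figure, it's just pixels per frame x frame rate x camera count. A quick Python sanity check (nothing from BI itself, just arithmetic):

# Pixel throughput for this thread's setup: 13 cameras at 1280x720, 25 FPS.
cams, width, height, fps = 13, 1280, 720, 25
mp_per_sec = cams * (width * height / 1e6) * fps
print(f"~{mp_per_sec:.0f} MP/s")   # prints ~300 MP/s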

I suggest reconfiguring your recording folders.
  • Change the "New" folder to delete when full.
  • Configure some of your cameras to record directly to the Stored folder instead of New.
By doing this, you will reduce the load on both disks and gain a little redundancy. The way you currently have it configured, if a disk fails you will lose ALL video from the timeframe stored on that disk. With my suggested configuration, you would lose all the video from some of the cameras while retaining all video from the others. If you are strategic about which cameras you assign to each drive, you can make sure you still have something usable no matter which disk fails.
 
For what it's worth, it has also been my experience that *nix systems are more stable on Intel. I'm speaking from fairly recent experience with a Ryzen 7 1800X. Back when it was new, I tried to run Ubuntu desktop on it. It crashed every few hours, and the system logs suggested problems with SMT (hyperthreading on AMD). I wanted something usable NOW, so I put Win10 on there, and that worked fine for the last 2+ years. Just two days ago I upgraded the RAM to ECC, swapped the motherboard for an ASRock X470D4U, and tried to run unRAID on it. The latest unRAID. Yesterday the unRAID OS crashed four times. Sadly, unRAID keeps its logs in memory, and even using a local log server on persistent storage, there was no indication of what was wrong. I'm guessing that if some dying-gasp kind of thing was trying to log itself, it never made it to persistent storage before it was too late. So I booted back into Win10, and it has been running fine. Now, this particular motherboard is riddled with issues right now anyway, so I don't know if it's the motherboard's fault, the CPU's fault, or the OS's fault. But I do know that Windows doesn't crash on this machine, while *nix has crashed every time I've tried it.
 
Hardware acceleration might be possible if you are able to pass through the Intel GPU to the Blue Iris VM, but that often proves difficult, and of course it means your hypervisor won't be able to output any video of its own unless you add another GPU just for that. With your current load, I agree it is not necessary. 13 cameras at 1280 x 720 @ 25 FPS is only about 300 megapixels per second, but it is still pretty amazing that you are managing to process that with barely more than one CPU core and no hardware acceleration.

Yeah, I thought about hardware acceleration before I built the system, but I was hopeful the i9-9900 would handle it so I wouldn't have to jump through hoops to figure it out. I think it handled it just fine! Unless Blue Iris is somehow using hardware acceleration without me knowing it, it's just the CPU chewing through the data.

I suggest reconfiguring your recording folders.
  • Change the "New" folder to delete when full.
  • Configure some of your cameras to record directly to the Stored folder instead of New.
By doing this, you will reduce the load on both disks and gain a little redundancy. The way you currently have it configured, if a disk fails you will lose ALL video from the timeframe stored on that disk. With my suggested configuration, you would lose all the video from some of the cameras while retaining all video from the others. If you are strategic about which cameras you assign to each drive, you can make sure you still have something usable no matter which disk fails.

This is a good idea; I'll play around with it later. I'm not too concerned with redundancy, but it can't hurt either.
 
Those four vCPUs hover around 30% usage with all 13 cameras recording.
13 cameras at 1280 x 720 @ 25 FPS is only about 300 megapixels per second, but it is still pretty amazing that you are managing to process that with barely more than one CPU core and no hardware acceleration.
I think one of the bigger reasons many VM projects end in failure is that folks new to cameras read threads like this and focus on the number of cameras, not the MP/s those cameras are pushing.

I can see someone new thinking, "I've only got four cameras! If this guy is running 13, four should be a piece of cake...", not factoring in that their four cameras are 4K (8MP) and, at the default FPS and bitrate, are going to generate significantly more megapixels per second than the 13-camera setup in this thread, and will likely kill their VM.
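To put a number on that example (quick Python; the 15 FPS here is my assumption of a typical 4K default, and real defaults vary by camera):

# Hypothetical: four 4K (3840x2160) cameras at an assumed default of 15 FPS.
cams, width, height, fps = 4, 3840, 2160, 15
print(f"~{cams * width * height * fps / 1e6:.0f} MP/s")   # ~498 MP/s, vs ~300 MP/s for the 13 cameras above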

@biggen, thanks for sharing your success. The detail in your post is great and will definitely help many people.
 
I finally tested a setup in a VM, and it is working way better than I expected. All the doom and gloom about even trying it gave me doubts, and now I'm not sure I'll be going back to a bare-metal install. This is also just a home setup, and I always like to try different things.

The previous setup was an Intel i7-6700 with HD 530 graphics, a 500GB SSD, and 16GB of RAM on Windows Server 2012 R2. The core OS was used as a VM host for two other systems, and BI was installed on the host OS. I wasn't using Quick Sync, as I could never get a working driver that didn't have the memory issues.

BI has five cameras, 1080p @ 10fps with motion sensing. That works out to between 9Mbps and 16Mbps of constant video stream coming in. BI would usually show about 20% load in Task Manager on the host OS.
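For scale, here's what that inbound stream works out to in storage terms (quick Python; video bits only, ignoring container overhead):

# Storage math for a 9-16 Mbps aggregate camera stream.
for mbps in (9, 16):
    gb_per_day = mbps / 8 * 86400 / 1000   # Mbit/s -> MB/s -> MB/day -> GB/day
    print(f"{mbps} Mbps is about {gb_per_day:.0f} GB/day")
# 9 Mbps ~ 97 GB/day, 16 Mbps ~ 173 GB/day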

The new setup is a Ryzen 1700 with 32GB of RAM and a 1TB M.2 drive my kid was gaming on. I added a four-port Intel NIC, fired up Hyper-V Server 2012 R2, then moved over all the VMs from the old machine and a few others that were scattered across other systems, for a total of eight VMs running on the host. One VM is Windows Server 2012 R2 dedicated to BI only.

The BI VM is set up with two cores and dynamic RAM, with actual RAM usage in the 2.4 to 3.5GB range. BI shows about 25% CPU usage inside the VM itself, but the host shows only about 5% CPU usage for the BI VM (25% of the VM's two cores is only about half a core, a small slice of the host's 16 threads). All BI settings are exactly the same: I exported the config and deactivated on the old machine, then imported the config and reactivated on the VM install.

To be honest, it seems more reliable and responsive when checking from the phone app. The last two times I had BI installed on a host OS, it would sometimes hang when trying to log in via the phone app, or make me log out and log back in to restore a stream. It hasn't done that at all running in the VM. Playback of clips from the calendar comes up faster, and scrolling the timeline is quicker to respond.

I'm going to try to upgrade to Hyper-V Server 2016 or 2019, as those versions are supposed to have PCI passthrough (Discrete Device Assignment) and can possibly use the hardware encoding on the host video card. This will make moving to new hardware a lot easier in the future: just move the VM. No more config backups, unregister, re-register, etc. And if the machine it's on ever crashes, just restore the nightly VM backup to another box and go.
 
Yeah, I'm loving running it in a VM. It's super easy to take backups: Xen Orchestra takes a nightly delta backup and moves it to my Synology NAS. If my host ever craps out, I only have to re-install xcp-ng on bare metal and then restore the backup.

I used to work for an ISP back in the '90s, and everything back then was one physical server per service. I remember thinking, "What a PITA this is to maintain, with all these hosts." I refuse to have single servers running single roles in 2019. Those times are over...
 
If you have any questions, please ask. I just wanted to share a success story about virtualizing BI, since most of the stories you hear about going this route are failures.

Hi there! Thanks for sharing @biggen

It's been a year since you started this thread, and I thought it would be interesting to check whether you are still satisfied running BI on your xcp-ng hypervisor ...

I've started to look into this because I'd like to simplify my home setup, which consists of three physical computers.
  1. HP ProDesk i5-600. A dedicated server running BI. I have 9 cameras at 1920x1080, 15 fps, and about half of them are set up for continuous recording. CPU load is a bit above 60%. This machine works well, but it's quite old.
  2. Intel NUC i5-7260U (Ubuntu). Running my home automation software and 17 different Docker containers.
  3. A laptop PC permanently docked in a docking station (Windows 10). I'm using it as if it were a stationary computer, mostly for code editing in VS Code and for browsing the net. No games are going on here, but it drives a 3840x2160 monitor. That is, I don't think I will ever need a graphics card for gaming, but I'd like to be able to watch a YouTube video if needed.
It would be great to be able to replace them with a single machine (preferably in a mini-ITX case). Sometimes I tend to solve problems that don't exist and make easy things complicated, and maybe I'm about to do that again. (Please warn me if I am!) The reason I'd like to replace my three physical machines above is that machines will fail sooner or later. It's just a waiting game, isn't it?
  • Without any recent experience with VMs, and especially type 1 hypervisors, I'd like to check with you guys whether I'm on the right track in trying to replace machine 3 as well.
  • xcp-ng seems like a good choice, but I'm concerned that it's quite complicated, with tons of options, features, and concepts I'd need to learn.
  • Any chance you have a 2020 mini-ITX Blue Iris build recipe lying around for my needs? (Preferably not something that sounds like a hair dryer, since it will probably sit on my desk.)
Thanks in advance!

Cheers!

EDIT: attaching a file showing my current camera bitrates and totals
[Attachment: Capture.PNG, camera bitrates and totals]
 
Running a bunch of VMs on one beefy server is one thing. Using one of those VMs with a locally connected keyboard, mouse, and monitor is a little more delicate. At minimum, you need a dedicated graphics card to pass through to that virtual machine, and in a mini-ITX case that will use your only PCIe slot.

Linus Tech Tips has done some videos about running multiple gaming PCs and/or video editing workstations on one physical server; you might want to look those up. They used unRAID as the operating system/hypervisor because it is a pretty decent storage server platform and has good support for virtualization with hardware passthrough. I've used unRAID myself, and aside from a few quirks when running Windows guests, it does a pretty decent job of virtualization. Unfortunately, I despise Linux and much prefer to run Windows VMs, which VMware ESXi does a better job with, IMHO. I also really like that ESXi doesn't force me to pin specific CPU cores to each VM like unRAID does; I feel like that makes ESXi more scalable.
 
Hi there! Thanks for sharing @biggen

It's been a year since you started this thread, and I thought it would be interesting to check whether you are still satisfied running BI on your xcp-ng hypervisor ...
  1. HP ProDesk i5-600. A dedicated server running BI. [...]
  2. Intel NUC i5-7260U (Ubuntu). Running my home automation software and 17 different Docker containers.
  3. A laptop PC permanently docked in a docking station (Windows 10). [...]
It would be great to be able to replace them with a single machine (preferably in a mini-ITX case). [...]

Yes, I'm still running it. I've actually added two cameras to this setup since I wrote the original post, so I'm up to 15 cameras now on that xcp-ng host, still running BI in a VM. I haven't enabled substreams on that BI installation yet, and I'm still under 50% CPU load with only 4 vCPUs assigned. If I actually took the time to enable substreams, it would be a massive reduction in CPU cycles.

I will tell you that I decided to run Proxmox at home for my BI installation there, and I like it a lot more than xcp-ng. Proxmox has better documentation, better community support/forums, easier container management, and a much better overall host management interface. Not having a built-in web management interface for xcp-ng is a real bummer, since you are forced to either use an unsupported Windows client or spin up a dedicated VM to host Xen Orchestra, which then manages the host. I'll probably convert the xcp-ng host to Proxmox and move the BI installation over sometime this winter, when business usually slows down for us.

I will tell you that Proxmox can easily handle items #1 and #2 for you. Item #3 is tricky: running a virtual desktop is fraught with difficulties, mostly because of having to deal with GPU passthrough. I've never done it, so I have no advice to offer on how to do it; I'm happy to still have laptops and desktops. You will also be fairly limited in case and motherboard choices going with mini-ITX. I'd go micro-ATX at a minimum.
 
It would be great to be able to replace them with a single machine (preferably in a mini-ITX case). Sometimes I tend to solve problems that don't exist and make easy things complicated, and maybe I'm about to do that again. (Please warn me if I am!) The reason I'd like to replace my three physical machines above is that machines will fail sooner or later. It's just a waiting game, isn't it?

You could definitely replace all three with one machine, but if your reasoning is to be proactive and avoid a future failure, I wouldn't do it. A single new machine can still fail.
 
Thanks for your post @biggen.

I had a successful setup of Blue Iris with my six 5MP Reolink IP cameras running Windows 10 on bare metal. My hardware is an i7-4790 (Haswell) with 16GB of RAM, and I was running Windows 10 Professional.

Because the need arose for other always-on computers, I decided to try virtualizing. So I installed Proxmox on the bare metal and created a VM for Windows 10, largely following this guide (with VirtIO drivers, etc.): The Idiot installs Windows 10 on Proxmox – Jon Spraggins

I have Proxmox on its own SSD and a separate SSD for the VM hard drives, and I'm planning to pass through a WD Purple to the Windows VM for BI recordings.

So on my nice, clean installation of W10 Pro, I went ahead and downloaded Blue Iris 5, and it installed fine after I added a C++ prerequisite. However, every time I try to start it, Windows crashes with a blue screen. The message is usually "System Service Exception," although I got a "Kernel Security Check Failure" the first time. Prior to installing BI, I did all the Windows updates and also installed VLC just fine.

I've tried Googling around but can't find anything that helps. It happens even if I start BI in Safe Mode, and the Windows memory check found no errors.

The Proxmox setup is Proxmox 6.2-4. I've allocated 4-8GB of RAM (with ballooning), machine type i440fx, a SPICE (qxl) display, the VirtIO SCSI controller, the default SeaBIOS, and 6 processor cores of "host" type.
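For reference, the VM's config file on the Proxmox side looks roughly like this (typed from memory as a sketch; the VM ID 100 is a placeholder, and I've left out the disk and network lines):

# /etc/pve/qemu-server/100.conf (VM ID is a placeholder; disk/network lines omitted)
balloon: 4096
memory: 8192
cores: 6
cpu: host
bios: seabios
scsihw: virtio-scsi-pci
vga: qxl

With ballooning, "balloon" is the minimum RAM and "memory" the maximum, which gives the 4-8GB range; the machine type defaults to i440fx when there is no explicit "machine:" line.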

Did you encounter anything similar to this? And how does my Proxmox setup compare to yours? Perhaps there's a driver compatibility issue somewhere.

If I can figure this out, this will be the ideal setup; if not, I'll have to revert to Windows 10 on the bare machine and perhaps run some VMs in VirtualBox.
 
Add the following to /etc/modprobe.d/kvm.conf:

options kvm ignore_msrs=1

You will probably need to create that kvm.conf file. After you add the line, reboot the host and enjoy a working BI installation. :)
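If you want to confirm the setting took effect after the reboot, check the live value on the host:

cat /sys/module/kvm/parameters/ignore_msrs

It should print Y.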
 
Add the following to /etc/modprobe.d/kvm.conf:

options kvm ignore_msrs=1

You will probably need to create that kvm.conf file. After you add the line, reboot the host and enjoy a working BI installation. :)

Wow, that was exactly the problem! It works well now, so thank you so much! I was surprised at the confidence in your response, but it's definitely well placed. ;)

I was going to attempt iGPU passthrough to optimize hardware acceleration once I had a stable setup, and my understanding is that this option has something to do with that (VGA passthrough, from what I've found). It appears I have a lot to learn.

Thanks again!
 
I've been running VMs for years now. I know it won't make sense, but they seem to crash less than bare-metal systems. ESXi 6.7 is a wonderful hypervisor to use. You can download it and use it free for 60 days. Spend a whole weekend playing around with it, and you'll be hooked. It's easier to back up and restore on different hardware, too. Once you go VM, you won't go back to bare metal.
 
You can use VMware ESXi for free. There are some limitations, but most home users would be okay with the free version. The enterprise features expire after 60 days.

Does anyone have a home server with more than 2 physical CPUs? I have two dual-socket Xeon servers at home, but no quad-socket ones :).

                                     Free vSphere Hypervisor                     Paid vSphere Hypervisor
Expiration                           No time limits on free version              Not applicable
Evaluation time                      60-day trial of Enterprise Plus features    Not applicable
Community Support                    VMTN Forums                                 VMTN Forums
Maximum physical CPUs                2                                           768 (logical)
Maximum physical memory              16TB                                        16TB
Maximum vCPUs per VM                 8 vCPUs                                     256 vCPUs
Maximum vRAM per VM                  6TB                                         6TB
Official Support                     No                                          Various SLAs available
Central Management (vCenter)         No                                          Supported
High Availability (HA)               No                                          Supported
Storage/Backup API usage (VADP)      No                                          Yes
Live migration of VMs (vMotion)      No                                          Supported
Load balancing of VMs (DRS)          No                                          Supported
 
The problem with ESXi is that you have to pay for at least Essentials to be able to back up your VMs, last I checked. I'm just not a fan of ESXi at all. With KVM, xcp-ng, or Proxmox, you can do for free what would cost $5,500/yr (Essentials Plus) to do with ESXi.

Unless you work for a very large company that is already entrenched in VMware products and you want a lab at home to keep learning it, there are better alternatives.
 
You can back up ESXi VMs for free using OVFTOOL.

You can get a VMUG subscription for home use for $200/year if you want to use some of the enterprise features, but you don't need to.

For Blue Iris, the standalone free ESXi works fine for most people.

For me, ESXi has been rock solid for the last 8 years at home. I had one server running for 4 years on ESXi 5.1; I only stopped using it because I was giving that server to my brother and moved my VMs to ESXi 6.7 on a new server. Now I'm on ESXi 7.0.