Hard crash launching 64-bit version on Win10 KVM VM.

VirtualCam

Young grasshopper
Joined
Sep 25, 2015
Messages
49
Reaction score
11
Are you using the latest 1903 Windows 10 boot ISOs?

I would try that first if you aren't.

You could also try setting the CPU type to Penryn, letting Windows install and patch up to the latest service level, and then changing it to KVM64.
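From memory, the relevant bit of the domain XML (via virsh edit, using whatever you named your VM) looks roughly like this; treat it as a sketch rather than an exact config, since the attribute details can vary with your libvirt version:
Code:
<!-- install and patch Windows with this first -->
<cpu mode='custom' match='exact'>
  <model fallback='allow'>Penryn</model>
</cpu>
<!-- then shut the guest down and swap the model -->
<cpu mode='custom' match='exact'>
  <model fallback='allow'>kvm64</model>
</cpu>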
 

eddyg

n3wb
Joined
May 23, 2019
Messages
12
Reaction score
0
Location
USA
Just curious if anybody has made progress on this.

I recently consolidated multiple old, crusty servers into one "real" server (with a Xeon CPU) that will serve as a VM host for everything. The server is running Ubuntu 18.04, and is mostly Docker containers, but I just configured KVM in order to set up a Windows 10 guest to run Blue Iris.

I'm using the 1909 Win10 ISO, and OS installation went fine. But as soon as I try to launch Blue Iris 5, the Win10 guest BSODs with a SYSTEM SERVICE EXCEPTION.

I configured another VM with the Penryn CPU type as suggested by VirtualCam above. That OS install and update also went fine. But after shutting down the Win10 guest and virsh edit'ing the CPU type to kvm64, it just boot-loops (and this is without Blue Iris even being installed yet!). Setting the CPU back to Penryn allows the guest to boot again, but trying to start Blue Iris results in the same SYSTEM SERVICE EXCEPTION.

Any ideas? After all this, it will be a bummer if I can't virtualize Blue Iris under KVM!
 

VirtualCam

Young grasshopper
Joined
Sep 25, 2015
Messages
49
Reaction score
11
I have been running Win10 + BI 4, and now 5, on Proxmox for several years. It does work.

Did you install Windows 10 with the KVM64 CPU type? Did you install all of the virtual drivers, and then install BI?
 

eddyg

n3wb
Joined
May 23, 2019
Messages
12
Reaction score
0
Location
USA
I made sure to install all the drivers ("Device Manager" was "clean", with no exclamation points) and then applied all the pending Windows updates before I shut down the guest and switched the CPU to kvm64... that's when it started boot-looping.

During the install of Win10, I do have to manually load the Red Hat VirtIO SCSI controller driver so it can see the disk to install the OS on.

The only other thing I can think of that may be "out of the ordinary" is that I am utilizing an SR-IOV VF NIC for networking in the guest, so I have to install the vxn65x64 driver (from Intel) for that to work. But once the driver is installed, the network comes right up and I can start downloading the spice-guest-tools for Windows, applying Windows Updates, etc. without any issue.

For some reason, I expected BI under KVM to "just work", so I'm a bit disappointed! Definitely appreciate you taking the time to reply though.
 

VirtualCam

Young grasshopper
Joined
Sep 25, 2015
Messages
49
Reaction score
11
That network type will give you bad performance; you want to use the VirtIO network interface and install the virtio driver.
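For reference, a VirtIO NIC in the domain XML is just something along these lines (the bridge name here is a placeholder; you also need the virtio network driver from the virtio-win ISO installed in the guest):
Code:
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>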

Did you try installing Windows 10 with the KVM64 CPU type? What happened?
 

eddyg

n3wb
Joined
May 23, 2019
Messages
12
Reaction score
0
Location
USA
Hmmm... unless I'm missing something, the VF (Virtual Function) of the NIC should give excellent performance? It takes a physical NIC and makes it appear as (up to) 64 distinct NICs (each with its own MAC address, PCI ID, etc.), all natively in hardware, courtesy of SR-IOV (single-root input/output virtualization). Each VF interface can then be "passed through" to a VM as a raw PCI device, while physically there is only a single NIC.
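For anyone curious, the VF gets handed to the guest as a plain PCI hostdev in the domain XML, roughly like this (the bus/slot/function values below are made up for illustration; the real ones come from lspci on the host):
Code:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x2'/>
  </source>
</hostdev>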

And no, I could not even get the Windows 1909 installer to start with the CPU type set to kvm64. It starts loading, shows the blue Windows logo on a black screen for a couple of seconds (I never even see the spinning circle of white dots), and then the guest restarts... the blue Windows logo/restart cycle happens over and over until I destroy the VM. And FWIW, after running virsh start blueiris, a few messages get logged, but then nothing shows up in /var/log/syslog or /var/log/libvirt/qemu/blueiris.log while it's in this boot loop.
 

eddyg

n3wb
Joined
May 23, 2019
Messages
12
Reaction score
0
Location
USA
Just to make sure the VF NIC wasn't an issue, I created a test VM via virt-install with the same configuration parameters except for using --network=none, and it boot-loops the same way with the kvm64 CPU type.
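For completeness, the test VM was created with something along these lines (names, sizes, and paths here are illustrative rather than my exact command):
Code:
virt-install --name win10-test --memory 8192 --vcpus 4 \
  --cpu kvm64 \
  --cdrom /path/to/Win10_1909.iso \
  --disk size=60,bus=virtio \
  --os-variant win10 \
  --network none \
  --graphics spice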
 

VirtualCam

Young grasshopper
Joined
Sep 25, 2015
Messages
49
Reaction score
11
Using SR-IOV is not that simple. The NIC, the motherboard, the hypervisor, and the guest OS must all support it. I could not find any evidence that it's supported under Windows 10, only Windows Server. So, unless you're using a server-class system, it's not going to work.


I would jump onto the QEMU mailing list, which the developers follow, and ask for help there.
 

eddyg

n3wb
Joined
May 23, 2019
Messages
12
Reaction score
0
Location
USA
Thanks again for taking the time to reply.
Using SR-IOV is not that simple. The NIC, the motherboard, the hypervisor, and the guest OS must all support it. I could not find any evidence that it's supported under Windows 10, only Windows Server. So, unless you're using a server-class system, it's not going to work.
I understand about needing to meet all the hardware requirements, and my system does. But VF devices are definitely supported (see the Windows docs here for example), and Intel provides drivers for VF instances of their devices for many versions of Windows. But the NIC isn't the issue (even if I specify --network=none it boot-loops), and once I install the Intel VF driver, the VM is "on the net" and I can download all the Windows updates, etc. totally fine.

So I'm fairly confident it's not at all related to the NIC, considering how early in the boot process things go south. It definitely seems like a CPU-related issue. I read through the Level1Techs forum post you linked, but I can install Windows 10 1909 just fine with --cpu=host (which is great and actually what I'd prefer) and Win10 runs fine; as soon as I launch Blue Iris, though, it causes a BSOD. It's only if I try to use the kvm64 CPU type that Windows boot-loops, whether it's from an install initially done with --cpu=Penryn or --cpu=host, or a fresh install done with --cpu=kvm64 (in which case the Windows installer never runs for more than a couple of seconds before rebooting).

If Blue Iris would just run with --cpu=host I'd be all set. :confused:

What I did find interesting in that forum thread was a recent comment that says:
Latest libvirt (4.7.0 as of this moment as I see it on Fedora 29) fixes the problem and the Windows 1803 and 1809 installation works perfectly OK.
I'm running Ubuntu 18.04.3 LTS, which only has libvirt 4.0.0.
 

VirtualCam

Young grasshopper
Joined
Sep 25, 2015
Messages
49
Reaction score
11
I never said SR-IOV was the cause of your problems, just that it was likely not the best NIC type to use. Not many people use it.

Looks like you'll need to upgrade your Ubuntu then if you want things to work.
 

eddyg

n3wb
Joined
May 23, 2019
Messages
12
Reaction score
0
Location
USA
I never said SR-IOV was the cause of your problems, just that it was likely not the best NIC type to use. Not many people use it.
Fair enough.

Looks like you'll need to upgrade your Ubuntu then if you want things to work.
Unfortunately, upgrading the OS isn't an option. I don't want to risk the stability of all the other Docker and LXC containers that are running just fine (plus, only Long-Term Support releases have hardware-related packages that are supported by vendors, and the next Ubuntu LTS release isn't expected until April 2020... and most sane people won't upgrade their servers until at least the 20.04.1 update is released ;) ), but perhaps there's a way to try upgrading just libvirt/qemu/kvm using an alternative PPA or something. (But even libvirt 4.7.0 is pretty old: it was released way back in September 2018... the current release is 5.10!)
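If I do decide to go that route, one option I might try (untested on my part, and I'd want to verify which qemu/libvirt versions it actually ships before committing) is the Ubuntu Cloud Archive rather than a random PPA:
Code:
sudo add-apt-repository cloud-archive:train
sudo apt update
sudo apt install qemu-kvm libvirt-daemon-system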

As much as I wanted to try Blue Iris because it seems to be "the best", I think at this point it may be time to look at other Linux-based options (Xeoma, in particular) and re-visit a Windows-based VM solution after the next OS update.
 

eeeeesh

BIT Beta Team
Joined
Jan 5, 2017
Messages
412
Reaction score
681
I don't want to change the subject, but have you considered running VMware ESXi as your hypervisor instead of Linux? There are plenty of articles out there about the free version, and then you can just run Blue Iris in a Win10 VM if your hardware is compatible. Lastly, any old Win7 or Win8 license key can still be used to activate Win10.

 

analogue

n3wb
Joined
Aug 28, 2019
Messages
8
Reaction score
4
Location
usa
I gave up on Blue Iris on KVM. I switched to using Motion in a Docker container, and it works surprisingly well once tweaked (I recommend you read the docs). I was up and running with a single RTSP cam with motion detection in about an hour.
 

eddyg

n3wb
Joined
May 23, 2019
Messages
12
Reaction score
0
Location
USA
have you considered running VMware ESXi as your hypervisor instead of Linux?
I decided against running ESXi since I'm a Linux guy. I didn't intend to ever need anything but Docker and LXC. In retrospect, it may have been a good idea to install ESXi, install Ubuntu 18.04 as a guest, and then run Docker and LXC there, giving me ultimate flexibility for situations like this where all of a sudden I need a Windows VM, but I think it would be a major undertaking to switch my server to ESXi at this point.

I gave up on Blue Iris on KVM. I switched to using Motion in a Docker container, and it works surprisingly well once tweaked
I just bought some new Dahua cameras from EmpireTechAndy that have built-in motion detection, so two of my primary goals in finding some NVR software are: 1) support for in-camera motion detection events (so the computer isn't constantly decompressing multiple video streams) and 2) a decent mobile app (ideally, with push notifications and a frame from the video).

FWIW, I was able to install Milesight VMS Lite into a KVM VM and it started up and ran just fine. It discovered and connected to a camera via ONVIF (I never saw any video, but I also didn't request a trial activation license. I just wanted to see if it worked in KVM.)

I also created a Dockerfile for Xeoma and got that running, but their iOS mobile app seems pretty bad, and I found out that the "Camera-Embedded Detector" module (which is needed to do on-camera motion detection) is an additional $40 per camera (including 1 year of updates).

Next up to look at is DW Spectrum IPVMS, which runs natively under Linux and has both Mac and iOS apps available; according to their list of supported cameras, they have "motion detection" for some of the Dahua cameras. (They also have a demo server that lets you easily evaluate the clients.)

I haven't looked at Shinobi, primarily because there is no dedicated mobile app. The author says he tried to get ONVIF on-camera event detection working and couldn't, so he gave up and doesn't intend to try again, opting instead for a "hacky" solution where Shinobi runs a fake SMTP (or FTP) server and relies on the camera trying to send an email or upload an image.
 

analogue

n3wb
Joined
Aug 28, 2019
Messages
8
Reaction score
4
Location
usa
Do you mind sharing the model numbers of the Dahua cams you got? I'm looking to get something beefier than the 2MP doorbell cam I have at the moment. Of course, being Linux-friendly, not phoning home, and having quality implementations of all claimed supported protocols are super important.

FWIW, I played around with Shinobi and quickly ruled it out. Frustrating setup experience and unusual choices for simple things (logs in the UI don't have a timestamp, it can't log to a file or stdout/stderr but can log to a DB table, etc.).
 

VirtualCam

Young grasshopper
Joined
Sep 25, 2015
Messages
49
Reaction score
11
I decided against running ESXi since I'm a Linux guy. I didn't intend to ever need anything but Docker and LXC. In retrospect, it may have been a good idea to install ESXi, install Ubuntu 18.04 as a guest, and then run Docker and LXC there, giving me ultimate flexibility for situations like this where all of a sudden I need a Windows VM, but I think it would be a major undertaking to switch my server to ESXi at this point.
Sounds like you need Proxmox.
 

eddyg

n3wb
Joined
May 23, 2019
Messages
12
Reaction score
0
Location
USA
Sounds like you need Proxmox.
Heh. We have a ton of Proxmox (KVM/QEMU with a nice web-based UI) deployed at $DAYJOB as well. But as I mentioned before, I'm consolidating all of my services currently running on multiple servers into one main server with Docker, and I didn't think I'd have a need for a Type 1 hypervisor; I figured if I absolutely needed to, I could use KVM as a Type 2 hypervisor. I'm plenty comfortable with doing stuff via "the command line", so using virt-install is no big deal. I just didn't anticipate compatibility problems using KVM! One thing I like about running Docker on "bare metal" is that I don't need to worry about allocating resources like RAM, CPUs, storage, etc. to each VM.

Do you mind sharing model numbers of the Dahua cams you got?
I ordered an IPC-HDBW4231F-E2-M (I have a corner location with a single camera now that could benefit from extra coverage, so this looked like an interesting replace-and-go solution worth checking out) and a couple of IPC-HDW5231R-ZE cameras. I learned about both of them lurking here on IPCamTalk, and the latter one seems to be pretty highly recommended. I really wanted improved night-mode image quality over my existing cameras.

I played around with Shinobi and quickly ruled it out. Frustrating setup experience and unusual choices for simple things
I also ended up playing with Shinobi simply because it was so easy to try with this Docker container. I did eventually get a camera connected and showing content, and it seems like a capable solution, but it also seems like it would take a lot of configuring and tweaking to get "right".

I spun up an LXC container (running Ubuntu, of course) and got DW Spectrum IPVMS up and running easily and am very impressed so far. I'm using LXC instead of Docker since it gets licensed based on various "hardware things", and I'm not sure how the ephemeral nature of a Docker container would play with that licensing. I was able to activate a four-camera trial license just by clicking a button in the macOS client, and the DW Mobile Plus app gets good reviews in the App Store and was easy to set up and use, which is an important component of my decision.
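The container setup itself was nothing special; roughly this, assuming LXD (the container name is arbitrary, and the DW Spectrum server package gets installed inside it afterwards):
Code:
lxc launch ubuntu:18.04 dw-spectrum
lxc exec dw-spectrum -- bash
# then install the DW Spectrum server inside the container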

The other thing is that DW Spectrum has an extensive open API for integration with other things that I'm looking forward to checking out.
 

IPSweets

n3wb
Joined
Nov 16, 2019
Messages
4
Reaction score
1
Location
Australia
I found a solution:

It relates to an MSRs bug; search Google if you want more info. A quick fix for Proxmox and other KVM users is to add ignore_msrs to your config.

Add the following to /etc/modprobe.d/kvm.conf:
options kvm ignore_msrs=1

Running with full CPU speed of host mode now! :)
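If you want it to take effect without rebooting (assuming the kvm module is already loaded), you can also write the parameter directly; the modprobe.d line is what makes it persist across reboots:
Code:
echo 1 | sudo tee /sys/module/kvm/parameters/ignore_msrs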
 
Joined
Apr 1, 2020
Messages
1
Reaction score
0
Location
UK
I found a solution:

It relates to an MSRs bug; search Google if you want more info. A quick fix for Proxmox and other KVM users is to add ignore_msrs to your config.

Add the following to /etc/modprobe.d/kvm.conf:
options kvm ignore_msrs=1

Running with full CPU speed of host mode now! :)
Just wasted my evening on KVM + W10/WSvr2019 + Blue Iris, which BSODs without fail after install.

Just registered to thank you 'IPSweets' for your post.

My config file was
Code:
/etc/modprobe.d/qemu-system-x86.conf
but after a reboot, it seems to have done the trick!
 

HeliosX

n3wb
Joined
Jan 3, 2022
Messages
1
Reaction score
0
Location
United Kingdom
I'm using BI 5.5.4.3 with Win10 on virt-manager in Ubuntu 20.04, and I managed to make it work when I set the CPU to "Hypervisor Default". In Windows 10, the CPU now shows as QEMU Virtual CPU version 2.5+.
Works like a charm now. Hope that will help those who still can't get it to work.
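If anyone wants to double-check what CPU definition their guest actually ended up with, dumping the domain XML on the host shows it (win10-bi here is just a placeholder for your VM's name; no <cpu> element at all generally means the hypervisor default):
Code:
virsh dumpxml win10-bi | grep -A3 '<cpu'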
 