Blue Iris on Unraid Windows VM

nanivs

n3wb
Nov 21, 2019
Dublin, Ireland
Hi There,

Not sure if anyone else runs a Windows VM on Unraid for Blue Iris. I am trying to make some optimisations to reduce the power usage of my 24/7 NAS server.

I have a Windows VM running on Unraid, and Blue Iris runs inside that VM. Unraid has a feature to store files on a cache drive temporarily and move them to the array disks later, so I have a share set up for CCTV footage with the cache enabled. All recordings are written to the cache until the mover script runs at night, and all of this works. I have also set a 30-minute timeout so the mechanical disks spin down when there is no activity. However, I found that Blue Iris continuously accesses the disks and keeps them busy all the time; they only spin down if I shut down the VM. Nothing else is running on the VM apart from this BI instance.

I believe this is happening because BI keeps an eye on the recordings and tries to access them all the time (as seen on the timeline view). Is there a setting in BI to turn this off, or to make BI only look at the timeline for a specific period of time?
 
Most likely BI is only touching the Windows registry and its clip database folder frequently; the New/Stored/Alerts/etc. folders would only be accessed when recording or when playing back a recording.
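If you want to confirm which files BI actually touches, one rough way is to snapshot file modification times and diff two snapshots taken a minute or so apart. This is a generic sketch, not a BI feature; the folder path in the comment is a guess, so check BI's settings for the real clip-database location:

```python
import os

def mtime_snapshot(root):
    """Walk a directory tree and record each file's modification time."""
    snap = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                snap[path] = os.path.getmtime(path)
            except OSError:
                pass  # file vanished between walk and stat
    return snap

def changed_paths(before, after):
    """Paths that appeared, or whose mtime changed, between two snapshots."""
    return sorted(p for p, t in after.items() if before.get(p) != t)

# Usage (path is an assumption -- check your BI install):
#   a = mtime_snapshot(r"C:\BlueIris\db")
#   ... wait a minute ...
#   print(changed_paths(a, mtime_snapshot(r"C:\BlueIris\db")))
```

Anything that shows up repeatedly in the diff is what keeps the underlying disk awake.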
 
Anyway, I would strongly advise having one hard drive that is not part of your pool; map it directly to the Blue Iris VM instead (the Unassigned Devices plugin should help) and use it for all recording. The VM's OS disk should live only on your Unraid cache disk(s), which should be SSDs.
 
Interesting, so unRAID allows VMs? I figured it would be the other way around, where you could have a VM that hosts unRAID.

My preference would be a single mechanical drive dedicated to BI, with no shuffling of data around. That might be 4 watts to run it continuously. I found an old 5 TB WD Green drive and used it as the dedicated drive for BI. Power consumption in my rig is a bit high at around 100 watts, but that's because I'm using a way-overkill 16-core workstation that was free to me.
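For the curious, the running cost of that one extra always-spinning drive is easy to estimate. The 4 W figure and the electricity rate below are assumptions; substitute your own numbers:

```python
def annual_kwh(avg_watts, hours_per_year=8760):
    """Energy used by a device drawing avg_watts continuously for a year."""
    return avg_watts * hours_per_year / 1000.0

def annual_cost(avg_watts, price_per_kwh):
    """Yearly running cost at a given electricity price."""
    return annual_kwh(avg_watts) * price_per_kwh

# A ~4 W idle-spinning HDD at an assumed 0.30/kWh:
#   annual_kwh(4)        -> 35.04 kWh
#   annual_cost(4, 0.30) -> about 10.5 per year
```

By the same math, a 100 W rig draws roughly 876 kWh per year, which puts the single dedicated drive into perspective.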
 
Anyway I would strongly advise having one hard drive not be part of your pool, instead map it directly to the Blue Iris VM

Can you explain why? I saw a Spaceinvader One video talking about the same thing: using dedicated drives for VMs, docker containers, etc. I understand it might improve performance a little, but is it really that important? How do you determine the performance benefits of dedicated drives?
 

It is a pretty complex subject. One of the main factors is the realization that almost all the video recorded by a VMS is worthless, and when something important happens you are generally going to export and share or make backups of the relevant video right away. So writing to a parity-backed array seems rather wasteful when compared to writing to a single dedicated HDD that is unlikely to fail without warning.

The question is: how wasteful is it to write the video to the main array? With unraid it is actually not as bad as with a regular RAID, because unraid does not require all the disks to be active, only the data disk currently being written and the parity disk(s), of which there can be one or two. All the other data disks remain idle and can be spun down automatically.

With single-disk recording, though, it is simple: you have one disk spun up and active, all your array disks can stay spun down to save power, and all of their I/O capacity remains available for other file-serving duties and scheduled parity checks. If you were recording continuous video to the array instead, you could expect performance to suffer considerably.
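The power side of that comparison is simple arithmetic. The watts-per-disk figure and parity count below are assumptions; measure your own drives:

```python
def recording_watts(watts_per_disk, parity_disks=0):
    """Watts drawn by the disks that must stay spun up while recording.

    Writing to the unraid array keeps one data disk plus every parity
    disk active; a dedicated recording disk keeps only itself active
    (parity_disks=0).
    """
    return watts_per_disk * (1 + parity_disks)

# Assuming ~5 W per spinning drive:
#   array with 2 parity disks: recording_watts(5, 2) -> 15 W
#   dedicated single disk:     recording_watts(5)    -> 5 W
```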


It is possible to put your virtual disk on a share that uses both the cache and the array, such that writes go to the cache pool first and the "mover" program automatically moves the data to the array periodically. In most cases this is only recommended when it provides a needed performance gain, e.g. when you need the speed of the SSDs in your cache but also want redundancy to protect against disk failure.
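The mover's core behavior can be sketched roughly like this. It is a toy version under stated assumptions: the paths and age threshold are placeholders, and the real mover honors per-share settings, open-file checks, and allocation rules that this sketch ignores:

```python
import os
import shutil
import time

def move_settled_files(cache_root, array_root, min_age_seconds=3600):
    """Move files from the cache to the array once they have sat
    unmodified for min_age_seconds, preserving relative paths.
    Returns the relative paths of the files moved."""
    moved = []
    now = time.time()
    for dirpath, _dirnames, filenames in os.walk(cache_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if now - os.path.getmtime(src) < min_age_seconds:
                continue  # recently written; leave it on the cache
            rel = os.path.relpath(src, cache_root)
            dst = os.path.join(array_root, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)
            moved.append(rel)
    return moved
```

The point of the age threshold is the same as the real schedule: recordings land on fast cache while hot, and only migrate to the (spun-down) array in bulk.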

It is also possible to put your virtual disk on only the cache pool and never allow the data to be moved to the main array HDDs. Since the cache is generally made of SSDs, this is typically a great choice for the VM's boot disk image: the OS frequently reads and writes small amounts from this image, and the fast I/O of SSDs is very beneficial there. It is a poor choice for video recording, though, because you would be eating up the SSDs' write endurance with mostly-worthless video.
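To see why the endurance argument bites, it helps to put numbers on it. The bitrate and TBW rating below are illustrative assumptions, not measurements from any particular camera or SSD:

```python
def tb_written_per_year(total_mbps):
    """Terabytes (decimal) written per year by a continuous video stream."""
    bytes_per_second = total_mbps * 1_000_000 / 8
    return bytes_per_second * 86400 * 365 / 1e12

def years_until_worn(tbw_rating, total_mbps):
    """Years until an SSD's rated write endurance (TBW) is exhausted."""
    return tbw_rating / tb_written_per_year(total_mbps)

# A single camera at an assumed 8 Mbps writes ~31.5 TB/year, so a
# consumer SSD with an assumed 300 TBW rating would be exhausted by
# five such cameras in under two years:
#   tb_written_per_year(8)       -> ~31.5
#   years_until_worn(300, 8 * 5) -> ~1.9
```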
 
As a concrete example, I run two unraid servers where the main pool is infrequently accessed so the disks can spend most of their time inactive, spun down. Each unraid server also has Blue Iris running in a VM. Both record to dedicated single disks because 1) it is more efficient and 2) it leaves all the speed of the main array available for other purposes. If a video disk fails, I'll lose all the video that was on it, simple as that. It was a conscious decision to make that tradeoff.
 
You could also pass through two or more disks to the Windows VM and, inside Windows, create a software RAID 1/5 or a Storage Spaces pool. Then the recordings are redundant!