Why do people use virtual servers / hypervisors?

bp2008 · Staff member · Mar 10, 2014
I see a lot of people and businesses moving away from traditional dedicated servers and toward virtual environments controlled by a hypervisor. But I can't really understand why this is so popular.

I can think of two reasons why you might want to run applications in different, relatively isolated operating systems on one machine.

1. Security. If each application runs in its own virtual environment, then they can't affect each other very much.
2. Compatibility. Sometimes you just can't run all your software on one version of one operating system.

Is this really all there is to it? People just have a bunch of low-power applications that for whatever reason work best when isolated from the others by a layer of virtualization?

I could not effectively consolidate my servers at home if I tried. FreeNAS? Requires dedicated hardware. Blue Iris? Uses all the CPU and then some. I run all my low-demand stuff on an Intel NUC that takes up no space and costs me maybe $5 a year in electricity. Maybe I am just really far from being the target audience for this hypervisor stuff?
 
Virtual servers are usually more cost efficient, CCTV excluded. Most services don't need 100% of the available CPU, and definitely not all the time, so it makes sense to give the remaining CPU to other services. Virtual servers are also easier to manage than 30 Intel NUCs: fewer parts to break and less maintenance.

You say FreeNAS requires dedicated hardware; why is that?

CCTV is a completely different story. I've heard customers want virtual server solutions for CCTV, but as it is very CPU heavy, I don't understand why they would want that.
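To put rough numbers on the "most services don't need 100% CPU" point above, here is a toy consolidation sketch. Every figure in it (service names, core counts, headroom) is a made-up illustration, not a measurement from this thread:

```python
# Toy consolidation math: how many cores a handful of low-duty
# services actually use on average, versus one host's capacity.
# All numbers are illustrative assumptions.
services = {
    "dns": 0.05,        # average cores consumed
    "web": 0.50,
    "file": 0.30,
    "monitoring": 0.20,
}
host_cores = 8
headroom = 0.25         # keep 25% of the host free for bursts

demand = sum(services.values())
usable = host_cores * (1 - headroom)
print(f"average demand: {demand:.2f} of {usable:.1f} usable cores")
print("fits on one host" if demand <= usable else "needs more hosts")
```

Four "dedicated servers" worth of workload here averages about one core, which is the whole overcommit argument in miniature.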
 
Virtual servers are usually more cost efficient, CCTV excluded. Most services don't need 100% of the available CPU, and definitely not all the time, so it makes sense to give the remaining CPU to other services. Virtual servers are also easier to manage than 30 Intel NUCs: fewer parts to break and less maintenance.

Yes, I agree that if for some reason you needed to run 30 servers, it would be easier and cheaper to do it with, say, 3 powerful machines as opposed to 30 underpowered ones.

But if you get 3 powerful machines, why not just run 1 operating system on each of them? Is that not a lot easier to manage than 30 virtual systems each with their own OS? Does that not make sense for most businesses and home users?

You say FreeNAS requires dedicated hardware; why is that?

Technically it is not required, just highly recommended. Mostly it stems from the fact that "FreeNAS is designed to run on bare metal." They say it is possible to get it working with full or near-full functionality in a virtual machine, but it takes a lot of extra, complicated setup to accomplish. And when a hardware failure occurs, the added complexity makes a smooth recovery less likely. If you are curious, see https://forums.freenas.org/index.ph...nas-in-production-as-a-virtual-machine.12484/

CCTV is a completely different story. I've heard customers want virtual server solutions for CCTV, but as it is very CPU heavy, I don't understand why they would want that.

Actually that is mostly just Blue Iris that is very CPU heavy. I hear exacqVision for example is amazingly efficient.
 
Scalability and efficiency - run hundreds of virtual servers on one high-end machine, and tune the CPU / RAM of each to the needs of the associated (usually single) application. Many apps just don't need many resources, so dedicated hardware means huge wastage.
Resilience - automated and rapid transparent transfer of a failed or unavailable environment to another high-end machine with spare capacity - local or remote.
Application compartmentalisation for reliability or security - even now, one app can conflict with another, and not just on resource consumption.
Ease of management and deployment - a single management system for one environment, deploy new server in seconds, snapshot a running app for analysis and backup.
Dramatically reduced power consumption, and therefore heat generation and server room size - meeting ever-stricter environmental regulation.
Ease of grouping and segregation for security and regulatory requirement - the (accredited) virtual environment includes the network.
Storage separated from the compute resource - SAN is also virtual.
and lots more.
The thing is - it just works so well. The transformation from the traditional server room full of racks with masses of servers, discs, switches and wiring is just amazing.
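As a toy illustration of the resilience point in that list, here is a first-fit sketch of reassigning the VMs of a failed host to surviving hosts with spare RAM. The host names and sizes are hypothetical, and real hypervisor HA (VMware HA, Proxmox, etc.) is far more sophisticated than this:

```python
# First-fit HA failover sketch: move VMs from a dead host onto
# surviving hosts that still have free RAM. Purely illustrative.
hosts = {"host-a": 64, "host-b": 64}                 # free RAM in GB
failed_vms = [("db", 32), ("web", 16), ("dns", 4)]   # (name, RAM GB)

placement = {}
for vm, ram in failed_vms:
    for host, free in hosts.items():
        if free >= ram:
            hosts[host] = free - ram   # claim the capacity
            placement[vm] = host
            break
    else:
        placement[vm] = None           # no spare capacity: VM stays down

print(placement)  # -> {'db': 'host-a', 'web': 'host-a', 'dns': 'host-a'}
```

All three VMs fit on the first surviving host here; with less spare capacity some would land on host-b or stay down, which is exactly why clusters are provisioned with headroom.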
 
Hmm. See, I've always been a Windows guy, and I work for a small company that uses Windows servers exclusively. In that environment it is simply not an effective use of time or money to give each application its own private instance of the operating system. For that matter, we have more (mostly antiquated garbage) server boxes than we need. If we need to install a new application, we just put it on an existing server and that is all there is to it. It is standard practice at my house and where I work to run multiple applications on one server.

That said I suppose if our IT guy wanted to he could still save space and time (and, in the long run, money) by virtualizing all our office servers as you describe, and we'd get much better backups out of the deal.
 
Keep in mind that with 30 virtual servers you only need one computer to manage them, and it can push changes to all 30 servers, eliminating the need to manage each of the thirty individually.
 
The concept of using Windows servers for anything other than a Domain Controller for AD, an MSSQL box, or possibly a DNS server is crazy from an OS licensing standpoint. IIS is a horrible web server. Why would I pay for a Server 2012 license to run IIS as a web server when I could easily run a low-resource *nix environment and have a host of different options for a web server, all of which are better than IIS?

File services are the same thing.

Even for the MSSQL servers we are beginning to experiment and move toward virtual environments. If you have 30 physical Windows servers now, you should already be doing some type of consolidated management for them (WSUS, AD Group Policy, etc.), and the same thing applies to the virtual environments. You don't actually go and log into each of the 30 machines to apply patches, do you?

When you get past the concept of needing to manage and support 30 separate servers from a hardware and power standpoint and realize you can do the same on 6 or 9 servers in a highly available and redundant cluster, it starts to make sense.

RAM density in the Cisco UCS platforms is insane. We have been migrating our virtual environments from Dell equipment to Cisco UCS and everyone is happy. There will always be a need for dedicated hardware; we are just attempting to use the resources we need more efficiently.

The other part of this that you seem to be missing is the concept of a dedicated SAN environment for storage, which is itself redundant. Every physical host has redundant connections to the SAN storage processors, and that allows a virtual machine to be moved on the fly from one physical host to another. We have been able to provision our clusters so that we can take down at least two physical hosts in each cluster for hardware maintenance without needing to shut down VMs.
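A back-of-the-envelope sketch of that "survive two host outages" provisioning idea. The host count and RAM figures below are assumptions for illustration, not the poster's actual cluster:

```python
# N+2 capacity sketch: how much VM RAM a cluster can safely carry
# while still tolerating two simultaneous host outages.
# All figures are assumed for illustration.
hosts = 9
ram_per_host_gb = 512
tolerated_failures = 2

committable_gb = (hosts - tolerated_failures) * ram_per_host_gb
cap = committable_gb / (hosts * ram_per_host_gb)
print(f"safe to commit {committable_gb} GB of VM RAM ({cap:.0%} of raw)")
```

The trade-off is visible in the numbers: reserving two hosts' worth of capacity means running the cluster at roughly three-quarters of its raw RAM, which is the price of being able to pull hosts for maintenance without shutting down VMs.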
 
In my environment it is much better to have virtual servers than full physical boxes. I recently built three servers, and two run at slightly less than half load, so one can fail and the other can take up the slack. I also have another server at a third location that is just a backup, so if one fails I can fail over to that location's server too. High availability is a must, so a server can fail and another comes right up and takes its place. Very cool. If I need to do server maintenance or upgrades, I can fail over, do my work, and fail back, and the end users never know anything happened, which is important in a 24/7/365 operation where we can really have zero downtime.

The best part is that these two servers (the third does not count as it is only backups/spare) replaced 19 separate servers. The power savings alone will add up, as the new 2U systems replaced what were almost all 4U systems. Power savings, a much easier-to-cool data center, and I can now hear myself think with about 100 fewer computer fans in there, I swear.

Most applications and software that we run work much better with a server dedicated to just that one piece of software and nothing else that is not needed. That makes for fewer issues and better stability than trying to run 10 major apps on one single server. When you are done with a server, you can easily kill it and free the resources again, whereas when running multiple things on one box it can get tricky to remove apps and rules and such without adversely affecting your other programs. It is SO much cleaner to manage a server with only one app on it versus trying to keep track of what is what on a box loaded with tons of software. It all comes down to reliability and uptime requirements, and it is much better virtual.

I can log into one console and see my overall system health, CPU usage, memory and all, and reallocate resources as needed. Oh, vendor xyz says the new version of their database software needs 32 GB of RAM instead of the old 16? OK, no problem, click click and boom: as long as my host has the headroom, that virtual server gets more juice. Overall it is the only way to go for us now; if nothing else, for the data security and uptime we require there are no other easy options. To have the same failover capability with physical boxes, the 19 servers I replaced would now be 57 servers, and that number would be even greater today, as that was a few years back. I shudder to think of the headaches we'd have now without virtual servers.

Also, I can build a few amazing monster servers and have great performance, or I could have 50+ boxes, most of them who knows how old, varying greatly in RAM, HD, and speed, and in general have inconsistent performance.
 
Heh. I can hardly comprehend what you could use 19 servers for. The company I work for does fine with just 3 server boxes for production (no virtualization), and we'd still be fine with only two. Or one, even.

I mean, there would be no point in splitting our workload across multiple VMs running on the same machine, because literally all that would do is increase the overhead, reducing overall capacity, and making management harder.
 
I have programs that were too large for one server; even on the newer high-end servers, the software itself cannot handle the number of connections we need it to. So we end up with a VM1 and a VM2 with the databases split in two, because the software cannot grow as big as we need it to! It ends up being a software problem and not a hardware one. It's amazing how hard it is to get the correct software for a given job at times; I've just about given up finding a new software/hardware solution for access control at this point, and we might end up with a third virtual server for our existing solution. Sigh. What I need seems to me so darn simple, and yet it does not seem to exist regardless of price.
 
That is pretty good pricing for virtual servers. SSD-based, no less. I just wish I could find a hosting environment that gives you lots of storage without charging an absurd sum of money for it. I recently had to move my weather camera archiving site off a GoDaddy shared hosting account because they decided, after 5 years of service, that it was violating their terms of service by storing too much data (~10,000 files == 190 GB on an unlimited storage plan). Deceptive marketing at its best.

I could not find another host with low storage costs so I bought a 3TB external drive for my NUC, and moved the site to my house. Thank goodness it only uses a couple GB per month so I don't think my ISP will complain.
 
Check Amazon. They offer a bunch of services called AWS and one of those services is remote disk. They even have different flavors of that in case you need something archived or need it highly available. S3 is available for remote access and can be coupled with their content delivery network for regional access. Glacier is used for data archiving and is dirt cheap.

Another thing: your ISP isn't worried about GBs of data transfer unless they are a WISP. I just checked my firewall, and for this month alone I have downloaded 390 GB and uploaded 26 GB. I would like to think this is normal for a household that uses Netflix, Amazon, and/or Sling TV for media.

Razer, I don't know what you are looking for in an access control system but we have been running Cisco Physical Access Manager since 2009 in a geographically disparate environment and have been nothing but pleased with it. The hardware is robust, the management software is constantly being improved, and the system is open enough to bring a lot of options to the table for catering to your specific application.
 
I looked at Amazon's S3 storage service before I moved the site. It would have been about $6 per month at current storage levels, practically doubling my hosting costs, and that number would only increase over time until Amazon cut prices. But the worst part is I would have had to redesign my entire web app to support remote storage of the weather cam images. So instead I just eliminated the monthly cost of the third-party hosting and moved it to my house.
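For what it's worth, that $6-per-month estimate is consistent with a rate of roughly $0.03 per GB-month; the rate below is an assumption about S3 Standard pricing of that era, not a quoted price:

```python
# Rough check of the ~$6/month S3 storage estimate for the archive.
gb_stored = 190
usd_per_gb_month = 0.03    # assumed rate; check current S3 pricing
monthly_usd = gb_stored * usd_per_gb_month
print(f"${monthly_usd:.2f}/month")   # -> $5.70/month
```

Note this covers storage only; S3 also bills for requests and data transfer out, so the real number would be somewhat higher.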

My internet is via DSL, but in the fine print they claim to have a rather complex system for logging what they call excessive use. According to them there is a daily threshold, and if you exceed it often over a long period they will complain to you. That said, they've never complained to me about anything I do.
 
We have over 3000 servers, with about 2000 of them running as VMs on only 100 hosts (about a 20:1 ratio). Real servers are going the way of the mainframe, with VMware taking over as the datacenter OS. It's just a matter of time before we go 100% virtual...