What is virtualisation?
Virtualisation is the term used when a single physical machine is used to run multiple independent virtual machines.
In essence there are two aspects of virtualisation, defined by two terms that will be used quite heavily in this documentation:
Host machine (sometimes referred to as the host server)
- This is the machine which provides the underlying environment that the client virtual machines run on. It is the role of the host server to interact with the physical hardware and simulate a hardware environment for the client virtual machines.
Client virtual machines (sometimes referred to as guest machines)
- The aim of the host machine is to run multiple client virtual machines. Each of these virtual machines is allocated specific resources (such as disc space, memory/RAM and CPU time) for its usage. Client machines should remain independent and have some kind of separation from one another.
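The host/guest relationship described above can be sketched as a small toy model. The class and attribute names (`Host`, `Guest`, `cpu_shares`) are purely illustrative; this is not any real hypervisor's API:

```python
# Toy model of a host carving fixed resources out for guest VMs.
# All names are illustrative; no real hypervisor API is used.

class Guest:
    def __init__(self, name, disk_gb, ram_mb, cpu_shares):
        self.name = name
        self.disk_gb = disk_gb
        self.ram_mb = ram_mb
        self.cpu_shares = cpu_shares

class Host:
    def __init__(self, disk_gb, ram_mb, cpu_shares):
        self.free = {"disk_gb": disk_gb, "ram_mb": ram_mb,
                     "cpu_shares": cpu_shares}
        self.guests = []

    def allocate(self, guest):
        # A guest only starts if the host can carve out its full
        # allocation, keeping guests independent of one another.
        wanted = {"disk_gb": guest.disk_gb, "ram_mb": guest.ram_mb,
                  "cpu_shares": guest.cpu_shares}
        if any(self.free[k] < v for k, v in wanted.items()):
            raise RuntimeError(f"host cannot fit guest {guest.name}")
        for k, v in wanted.items():
            self.free[k] -= v
        self.guests.append(guest)

host = Host(disk_gb=500, ram_mb=32768, cpu_shares=8)
host.allocate(Guest("web01", disk_gb=40, ram_mb=2048, cpu_shares=2))
host.allocate(Guest("db01", disk_gb=100, ram_mb=8192, cpu_shares=4))
print(host.free["ram_mb"])  # 22528
```

The key idea is that the host tracks a fixed pool and every guest's allocation is subtracted from it, so one guest cannot consume another's share.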
Generally speaking, there are two primary ways in which virtualisation is achieved.
Full Virtualisation
Full virtualisation is where the host server simulates a complete hardware environment for the virtual machines to run in. Full virtualisation comes with slightly more overhead, however it ultimately provides the most flexible and complete virtualisation environment.
This method is far more comprehensive because you can run ANY operating system within the virtualised environment in its original form. There is little, or in most cases no, difference between what you would expect to see in a truly dedicated environment and a virtual environment. This is good, because there is no need to change your approach to configuring and administering the machine.
We believe that the only way to do virtualisation properly and provide an enterprise-grade hosting solution is via full virtualisation.
Examples of software that can provide full virtualisation include VMware ESX, Xen and KVM.
Operating System Virtualisation
Under this configuration virtualisation is achieved at the operating system level, with specific individual user-space instances being defined. Typically client (or guest) VMs reside within containers on the host server, and each has a specific amount of the host server's resources allocated to it.
This does have its advantages, in so far as it introduces little overhead when compared to full virtualisation techniques. The drawback, however, is that there is nowhere near the same level of flexibility, and you can typically only run the same OS as the host server.
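That shared-kernel restriction can be boiled down to a tiny sketch. The function and its parameters are invented for illustration; real container systems enforce this implicitly because containers are just user-space instances on the host kernel:

```python
# Toy contrast between OS-level and full virtualisation: containers
# share the host kernel, so the guest OS must match the host's.
# A full hypervisor simulates the hardware and has no such restriction.

def can_host(guest_os, host_os, full_virtualisation):
    if full_virtualisation:
        return True             # any OS runs in its original form
    return guest_os == host_os  # containers: same kernel, same OS only

print(can_host("windows", "linux", full_virtualisation=True))   # True
print(can_host("windows", "linux", full_virtualisation=False))  # False
```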
Further to this, operating system virtualisation is often seen as a "poor man's dedicated server". Historically there has been a large cost and performance jump between a shared hosting environment and a dedicated server. To bridge this gap, and to provide a "Virtual Private Server" (VPS), operating system virtualisation technology came along. It allowed people who required a dedicated server for configuration or security reasons, but could not justify the large buy-in costs, to do so on a budget. In coming up with this solution, the operating system has been contorted into providing a level of functionality it was never really designed for. As a result, you end up with a half-virtualised environment and a half-real environment trying to co-exist on the same machine. This can introduce unexpected or non-standard behaviour; for example, networking configuration is often confined to a specific user-space (instead of being virtualised), meaning higher-level configuration such as routing changes or firewall rules is not possible within a virtual machine instance.
Examples of technologies/software which can provide this style of virtualisation include OpenVZ, Virtuozzo and Linux-VServer.
Why would you want to use it?
Fewer hardware failures - By reducing the number of physical machines and associated hardware components, you also reduce the total number of points where a failure can occur. This will in turn reduce overall downtime.
Allows for only high-end rack-mounted kit to be used - Like most hosting providers we use various grades of server hardware; for example, our entry-level servers are based on cheaper tower-style servers. By consolidating these independent machines onto high-quality servers you gain a better grade of hardware for a similar cost. This is good, as rack-mounted equipment has additional redundancies such as dual power supplies and hot-swappable discs. In the event of a hardware failure the problem can be rectified much more quickly, which provides a more reliable service.
Less hardware means fewer compatibility issues - One of the major delays with new server builds is receiving shiny new hardware that includes a newly developed component unsupported by the operating system. The more you are able to standardise your hardware configuration, the more time can be saved on new server deployments. It also makes ongoing support significantly easier.
Standardised hardware configuration - If the past two points haven't covered this off enough, then hopefully this one will. Keeping everything standard is a good thing. Training is much easier: keeping people immersed in a single hardware platform makes sure that everyone in the team knows every little intricacy of the chosen infrastructure. This will only ever reduce downtime in the event of a failure, and it avoids the unknown pitfalls of mixed hardware (which can itself be the source of an outage). Further to this, it makes managing a spares inventory much simpler, which in turn makes failures easier to deal with... and yes, once again, it is all about keeping it simple and reducing downtime!
Fewer machines means fewer cables - One of the most frustrating sources of outages is someone accidentally or inadvertently removing the wrong cable. I say frustrating because everyone knows that to keep an electrical device powered on it needs to remain plugged in. That said, I think every system administrator who has ever worked for Anchor can embarrassingly admit to inadvertently knocking the power out of a machine that was supposed to be online. The reality is that when you are working in a rack with 25 or more machines it doesn't take much to knock out a cable, whether network or power. Consolidating the hardware, or even using blade servers, will always reduce the likelihood of this happening... and yes, reduce downtime!
Potentially lower Total Cost of Ownership (TCO) - Imagine you require both a Windows and a Linux environment for different applications which are only supported in totally disparate environments. Traditionally, you would need to deploy two separate servers with two separate operating systems, make sure that all the hardware components are supported by both operating systems, and deploy twice as much infrastructure. Being able to run both these machines on the one physical server provides a reduced TCO.
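As a rough worked example of that saving, here is the arithmetic with entirely hypothetical prices (the real numbers depend on your hardware and hosting arrangements):

```python
# Hypothetical cost comparison: two dedicated servers vs one
# virtualisation host running both workloads. All figures invented.

server_cost = 3000       # one physical server
windows_licence = 800    # needed in either scenario
colo_per_server = 1200   # annual rack space + power, per machine

# Two separate machines: two servers, two lots of rack space/power.
dedicated = 2 * server_cost + windows_licence + 2 * colo_per_server

# One host running both environments as virtual machines.
virtualised = server_cost + windows_licence + colo_per_server

print(dedicated, virtualised)  # 9200 5000
```

Even with these made-up figures the shape of the saving is clear: the duplicated hardware and duplicated rack/power costs disappear.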
Current generation hardware is many times faster than what is really required
- We're living in an age where most server hardware can support 32GB or 64GB of RAM, and each machine can have up to two processors, each with four separate cores per chip, where historically the maximum was two. Hard drives are also always becoming faster and making better use of caching, and while solid-state storage isn't quite here yet for server hardware, it doesn't seem too far off. With all these recent performance gains it seems, for a change, that hardware can actually supply more performance than is usually required, meaning many dedicated servers are deployed with resources (CPU time, memory and I/O bandwidth) sitting idle. Pooling these resources is a way of making use of the excess on the rare occasions it is needed.
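A small consolidation sum illustrates the point. The utilisation figures below are invented for the example; the pattern of low average CPU use on dedicated machines is the thing being illustrated:

```python
# Illustrative consolidation sum: several dedicated servers running at
# low average utilisation can be pooled onto one host.
# Server names and utilisation figures are invented.

servers = [
    {"name": "mail", "cores": 4, "avg_util": 0.10},
    {"name": "web",  "cores": 4, "avg_util": 0.20},
    {"name": "db",   "cores": 4, "avg_util": 0.30},
]

# CPU cores actually in use, on average, across all three machines.
cores_in_use = sum(s["cores"] * s["avg_util"] for s in servers)

# 12 physical cores deployed, but on average only ~2.4 are busy;
# the combined load fits comfortably on a single 8-core host.
print(round(cores_in_use, 1))  # 2.4
```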
Rapid remote deployment of new servers
Once a physical server is ordered, we need to track the order; receive the hardware, which includes labelling it and recording the details (specifications, serial numbers, physical hardware configuration); make numerous changes to the hardware to ensure reliability and performance; allocate rack space in our data centre; configure the network; and allocate power resources. This is all before we even transport it to our data centre, rack the machine and turn it on for the first time. Once done, we run the hardware through full server burn-in testing, which takes up to 7 days, longer if the hardware is faulty. By this time we've spent up to 8 hours of hands-on work. As you can see this is a fairly involved process, which has evolved over the past 8 years, and every single one of these steps is absolutely critical in deploying a dedicated server. If virtualisation can reduce the amount of physical hardware in use, it reduces the number of times we need to go through this. This is a good thing, as it saves us time (and you money) whenever a new machine is deployed.
- Given the hardware is already at the data centre, plugged in and working, new virtual machines can be built by someone at their desk; there is no requirement for people to go on site to get the machine to the point where it is remotely accessible.
- Virtual machines lend themselves really well to being pre-built in advance with common configuration already in place. This means that when a new service is ordered it can be deployed in a matter of minutes instead of the days or weeks associated with separate physical hosts.
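The pre-built template idea can be sketched like this. The template contents and function names are invented; the point is that deployment becomes a copy-and-customise step rather than a hardware build:

```python
import copy

# Sketch: deploying from a pre-built VM template. The template fields
# and helper name are illustrative only.

TEMPLATE = {
    "os": "linux",
    "ram_mb": 2048,
    "disk_gb": 40,
    "packages": ["sshd", "ntpd"],
}

def deploy_from_template(hostname, ip):
    vm = copy.deepcopy(TEMPLATE)  # clone the pre-built image
    vm["hostname"] = hostname     # per-customer customisation
    vm["ip"] = ip
    return vm

vm = deploy_from_template("customer-web01", "192.0.2.10")
print(vm["hostname"], vm["ram_mb"])  # customer-web01 2048
```

The deep copy matters: each new machine gets its own configuration, and the shared template is never modified by a deployment.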
Smooth migration paths
- One of the really cool things about virtualisation is that you can move machines from one physical host to another relatively trivially, with very little downtime. Traditionally, if you outgrew the performance limitations of your existing server chassis and wanted to upgrade, it would generally be a two-stage process. First a new machine would be deployed concurrently with the first, a copy of the data would be moved to the new machine, and an often long testing phase would ensue to make sure that everything worked in its new home. Once testing was complete it would be necessary to shut down the old host and move any dynamic content (such as databases) to the new server before bringing the services up again. By virtue of the process it was quite time consuming and often fraught with peril, and there were often long periods of downtime, particularly if large databases were involved. The virtual machine alternative is much more sane: the virtual machine can be pretty much picked up from one host machine and moved to another with no more than about 15 minutes of downtime. Previously this was unheard of under a Windows environment.
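Conceptually, the migration works because the guest's entire state is self-contained, so moving it is a hand-over rather than a rebuild. A toy sketch (real hypervisors stream memory pages and disk state; the names here are invented):

```python
# Toy sketch of migrating a virtual machine between hosts: the guest's
# state is a self-contained object, so "migration" is handing it over.
# Real hypervisors stream memory/disk state; names are illustrative.

class Host:
    def __init__(self, name):
        self.name = name
        self.vms = {}

def migrate(vm_name, src, dst):
    vm = src.vms.pop(vm_name)  # brief pause on the old host...
    dst.vms[vm_name] = vm      # ...then the guest resumes on the new one
    return dst.name

old_host = Host("host-a")
new_host = Host("host-b")
old_host.vms["web01"] = {"ram_mb": 2048, "state": "running"}

print(migrate("web01", old_host, new_host))  # host-b
```

Contrast this with the physical process above: no second copy of the data, no long parallel testing phase, just a short pause while the state changes hands.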
Reduced power consumption
- Power is becoming more and more important in a co-location environment; read our article on why power is more important than space for more information on why this is the case. Reducing the number of physical machines reduces the amount of power consumed. Additionally, a fair amount of power is always lost when converting from AC to DC, so running one machine with lots of hard drives and memory, or even running blades (which consolidate the power supply units over multiple physical machines), will have a significant effect on reducing power consumption.
Easy resource upgrades
- Need more memory available to your application? Easily done. We can allocate more memory to the virtual machine; you then just need to find a time to reboot your virtual machine to detect the new memory. Much better than the old process: order new memory, wait for it to arrive, complete a 2-day burn-in process to make sure it works correctly, then schedule downtime, power off the machine, add the new memory and bring the machine back up.
- This isn't just limited to memory upgrades; all components such as disc space and CPU resources can easily be increased or, if necessary, the machine trivially migrated to another physical host.
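The upgrade flow above can be modelled in a few lines. The pending/active split and all names are invented for the sketch; the point is that growing a guest is a configuration change plus a reboot, not a hardware swap:

```python
# Sketch: growing a guest's allocation is a config change plus a reboot.
# The pending/active split and class name are illustrative only.

class VirtualMachine:
    def __init__(self, ram_mb):
        self.active_ram_mb = ram_mb   # what the running guest sees
        self.pending_ram_mb = ram_mb  # what the next boot will get

    def set_ram(self, ram_mb):
        # The host allocates the memory immediately; no parts to order,
        # no burn-in, no trip to the data centre.
        self.pending_ram_mb = ram_mb

    def reboot(self):
        # The guest detects the new memory on its next boot.
        self.active_ram_mb = self.pending_ram_mb

vm = VirtualMachine(ram_mb=4096)
vm.set_ram(8192)
print(vm.active_ram_mb)  # still 4096 until the reboot
vm.reboot()
print(vm.active_ram_mb)  # 8192
```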
The future will be virtual, so why wait?
- This is a trend which has seen a lot of take-up in the past few years, with more and more providers offering virtualisation in some shape or form. The even better news is that the migration process back out of a virtual environment is no different from migrating from one physical machine to another, so there is really no argument against going down this path.
Why would you not want to use it?
- Virtual servers are always going to have some overhead
Given a virtual server is a machine running on top of another server, there is going to be some overhead in running the host machine. While this is very low in enterprise-grade virtualisation (for example, VMware ESX), there is an unavoidable extra layer of abstraction. If you have a service that demands every ounce of performance from its hardware, then virtualisation may not be the way to go; an example would be a high-volume database server which is particularly disc I/O bound. Running such a host on bare metal may be a better solution.