Comparison of Virtualisation Technologies

This page includes some of the testing notes from when we evaluated various virtual private server software, prior to deploying our VPS hosting environment here at Anchor.



Essential requirements

  1. Support for both Windows & Linux

  2. 32 bit & 64 bit guest support

  3. Full virtual machine. No containers.
  4. SVGA+ console of guest
  5. Battery backed hardware RAID controller
    • I/O performance will suffer badly without this.
  6. Guest CPU resource control


Desirable features

  1. Guest disc I/O resource control
    • Read/write bandwidth per second.
    • I/O operations per second.
  2. Para-virtualised device drivers
    • For performance.
  3. Virtual storage
    • For example, can provide 100 GB block device to a guest which only takes up X GB (where X < 100) on disc.

  4. VM live migration
    • Allow us to perform hardware maintenance or resource balancing across cluster.
  5. High availability
    • Automatically startup VMs on other nodes if a node fails.
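The virtual storage point above is commonly delivered via thin provisioning. As a rough illustration using a sparse file as the backing store (production setups would more likely use LVM or hypervisor-native disk images), the apparent size of the device can far exceed the blocks actually consumed on disc:

```shell
# Create a sparse 100 GB backing file for a hypothetical guest disc.
# Only blocks that are actually written consume space on the host.
truncate -s 100G guest-disk.img

apparent=$(stat -c %s guest-disk.img)              # size the guest would see
actual=$(( $(stat -c %b guest-disk.img) * 512 ))   # blocks really allocated

echo "apparent: ${apparent} bytes, on disc: ${actual} bytes"
rm guest-disk.img
```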


Nice to have

  1. SMP guests
  2. Automatic VM migration across cluster for resource balancing

Comparison of candidates


Xen

  • vCPU multi-threading
  • Can have your own kernel
  • Can allocate swap space
  • Para-virtualisation and Full hardware virtualisation
  • Dynamic RAM re-allocation
  • Tagged VLAN support


OpenVZ

  • No Windows support
  • Multi-threading based on the number of CPUs in the host node
  • No swap space; out of memory results in process reaping
  • No custom kernel
  • Limited IPtables/Netfilter support for guests
  • Operating System Virtualisation
  • Tagged VLAN support

VMware ESX 3.5

  • vCPU multi-threading (1, 2 & 4 vCPU)

  • Can use your own kernel
  • Can allocate swap space
  • Para-virtualisation and Full hardware virtualisation.
  • Tagged VLAN support


Hyper-V

  • Limited to a single vCPU for Linux guests.
  • Para-virtualisation and Full hardware virtualisation
    • Linux para-virtualisation requires driver support (e.g. SUSE 10)
    • Para-virtualised drivers are essentially virtualisation-optimised drivers
  • Tagged VLAN support


Each hypervisor that supports full hardware virtualisation provides a method for a user to gain remote access to the VM as if it were a fully dedicated machine. For Windows operating systems, this means you can use Remote Desktop or VNC. For Linux and other Unix-like operating systems you can use SSH or, if configured, an X desktop served over VNC.

Para-virtualisation in VMware ESX, Xen and Hyper-V provides the same means of administrative access as full hardware virtualisation.

OpenVZ, being a container-based virtualisation technology, does not provide such features natively. You can, however, gain SSH access, or VNC access if an X server is configured within the container.
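To make the access paths above concrete, here is an illustrative cheat-sheet. All hostnames and the container ID are hypothetical placeholders, not real infrastructure:

```shell
# Typical remote-access commands per technology (hosts and IDs are made up).
access_help='Windows guest (full virtualisation): rdesktop vm01.example.com      (RDP)
Linux guest (full virtualisation):   ssh admin@vm02.example.com
Graphical console over VNC:          vncviewer vm03.example.com:5901
OpenVZ container (from host node):   vzctl enter 101'
echo "$access_help"
```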

Maintenance of VMs

As with dedicated servers, VMs and containers should have regular security updates applied. Some virtualisation vendors provide tools to manage updates for certain types of guest (Hyper-V & VMware), and there are also open-source tools that can manage updates within the guests themselves.

It is important to keep in mind that while the hypervisor host system may have updates applied to it, these will not affect the guests beyond any changes to the hypervisor itself.

OpenVZ host node systems work slightly differently: because containers share the host's kernel, if a kernel update is applied on the host node, all containers will then run the new kernel version.

Security Analysis/Considerations

VMs, VPSes and the like should be treated exactly the same as dedicated servers with regards to security. This means you should be controlling ingress and egress of data with firewalls.
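As a sketch of the ingress/egress point, a minimal default-deny ruleset might look like the following. It is shown as a dry run that echoes iptables commands rather than applying them (applying rules requires root), and the allowed port is just an example:

```shell
# Wrapper so the sketch can run unprivileged: prints each rule instead of
# applying it. Drop the `echo` to apply the rules for real.
ipt() { echo iptables "$@"; }

ipt -P INPUT DROP                                    # default-deny ingress
ipt -P OUTPUT ACCEPT                                 # permit egress; tighten per policy
ipt -A INPUT -i lo -j ACCEPT                         # loopback traffic
ipt -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
ipt -A INPUT -p tcp --dport 22 -j ACCEPT             # management SSH only (example)
```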

Packages should be maintained and monitored for updates. Trusted sources for these packages include your operating system vendor and other trusted external repositories.

VMs and VPSes are just as likely to be vulnerable to system attacks as dedicated servers, so it is important to ensure sane configuration settings are used.

Portability of VMs

A new standard is being created to deal with portability of VMs from one hypervisor to another: the Open Virtual Machine Format, or OVF.

XenSource, VMware & Microsoft have committed to supporting this standard. Currently the standard is at version 1.0.

OpenVZ containers will only work on OpenVZ systems.

Xen domU guests can be migrated from one host to another, in some cases even across a hypervisor version change. VMware ESX can also run VMware Server virtual machines. Additionally, physical machines can be converted into virtual ones using any of numerous P2V tools.

Links: P2V, VMconverter, procedure on Linux P2V

Hardware Utilisation

All of the above-mentioned virtualisation technologies provide features that allow the host to control access to the system's resources.

This means different classes of products can be defined based on how many system resources a VM or VPS may use.

Various counters can be used with or without bursting ability.
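For example, OpenVZ expresses such counters as barrier:limit pairs in the per-container config, where the gap between barrier and limit is the burst headroom. The fragment below is illustrative, in the style of the stock sample configs; the container ID and values are hypothetical:

```shell
# Fragment of a hypothetical /etc/vz/conf/101.conf (OpenVZ container config).
KMEMSIZE="14372700:14790164"    # kernel memory, barrier:limit (bytes)
CPUUNITS="1000"                 # relative CPU weight across containers
DISKSPACE="1048576:1153434"     # disc quota, soft:hard (1 KB blocks)
```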

Multi threading & vCPU allocation

Xen and VMware allow for allocation of a number of virtual CPU units.

With Xen this can be anywhere from one up to the total number of physical CPUs, or even more. If you allocate more virtual CPUs than physically exist, work requests are queued for a physical CPU.
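As a trivial illustration of overcommit, using hypothetical numbers (8 physical CPUs in the host, 20 vCPUs handed out across guests), the allocation ratio works out as:

```shell
pcpus=8                             # physical CPUs in the host (hypothetical)
vcpus=20                            # total vCPUs allocated to guests (hypothetical)
ratio=$(( vcpus * 100 / pcpus ))    # integer percent
echo "vCPU overcommit: ${ratio}%"   # anything over 100% means vCPUs queue for pCPUs
```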

VMware allows for the allocation of 1, 2 & 4 vCPUs to any virtual machine.

Hyper-V guests can be configured with 1, 2 or 4 vCPUs for fully supported Microsoft guest operating systems; other operating systems are supported with only one vCPU.
