kvm Archives - AWS Managed Services by Anchor

Bugfixing KVM live migration

By | Technical | 2 Comments

Here at Anchor we really love our virtualization. Our virtualization platform of choice, KVM, lets us provide a variety of VPS products to meet our customers’ requirements. Our KVM hosting platform has evolved considerably over the six years it’s been in operation, and we’re always looking at ways we can improve it. One important aspect of this process of continual improvement, and one I am heavily involved in, is the testing of software upgrades before they are rolled out. This post describes a recent problem encountered during this testing, the analysis that led to discovering its cause, and how we fixed it. Strap yourself in, this might get technical.
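For context, a KVM live migration of the kind exercised in this testing can be driven through libvirt's `virsh`; the guest and destination host names below are placeholders for illustration, not from the post:

```shell
# Live-migrate a running guest to another KVM host over SSH.
# "guest1" and "dest-host" are hypothetical names.
virsh migrate --live --verbose guest1 qemu+ssh://dest-host/system
```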

Read More

What’s the big idea with: Plug and play hypervisors?

By | Technical | No Comments

Here at the Anchor Internet Laboratory we’ve been discussing ideas for new deployments of our VPS infrastructure. One that we’re excited about is the idea of “plug and play” hardware. Plug and play what? Deploying more hardware capacity takes time. It needs to be burn-in tested, tracked in the asset management system, installed, configured, and integrated with other systems. It’s not difficult, it just takes time. We’ve got pretty much fully automated provisioning of new VPSes, but the hypervisors that run them need hands-on love. We think we can make this a lot better. We’ve been looking at Ceph for shared storage. The key benefit of shared storage for VPS infrastructure is that it decouples the running of VMs (on Compute nodes) from their disks (on Storage nodes). This would…
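One concrete way that compute/storage decoupling shows up is in the guest's disk definition: with Ceph, a libvirt disk can point at an RBD image over the network instead of a local block device, so the Compute node only needs to reach the monitors. The pool, image, and monitor names below are made up for illustration:

```xml
<!-- Sketch of a libvirt disk backed by a Ceph RBD image; names are hypothetical. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='vps-pool/guest1-disk0'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```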

Read More

An introduction to hardware-assisted virtualisation techniques

By | Technical | No Comments

Our last post on the benefits of virtualisation technology raised hackles with some readers in discussion forums, particularly regarding flexibility and performance. The flexibility we talk about is purely in the context of being a service provider – pointing out that you can use all of a bare-metal box’s resources immediately misses the point. We sell virtual private servers, so it’s VPS upgrades that matter. That’s not really interesting though, so we’re going to talk about performance optimisations for virtual machines (VMs) instead. In recent years we’ve seen various hardware extensions introduced to reduce the overheads incurred by virtualisation, with the trend being to push virtualisation functionality into the CPU and hardware subsystems, relieving the hypervisor of much of the magic required to make things work transparently. We’ll…
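As a concrete starting point, a CPU advertises these extensions via flags in `/proc/cpuinfo`. A minimal sketch (the function and the flag selection are our own, not from the post):

```python
import os

# Sketch: detect hardware-virtualisation flags in /proc/cpuinfo content.
# vmx = Intel VT-x, svm = AMD-V; ept/npt indicate hardware support for
# second-level address translation. The helper takes the file's text as a
# string so it is easy to test in isolation.
HW_VIRT_FLAGS = {"vmx", "svm", "ept", "npt"}

def hw_virt_flags(cpuinfo_text):
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return sorted(HW_VIRT_FLAGS & flags)

if __name__ == "__main__" and os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        print(hw_virt_flags(f.read()))
```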

Read More

Pacemaker and Corosync for virtual machines

By | Technical | One Comment

In the previous post we talked about using Corosync and Pacemaker to create highly available services. Subject to a couple of caveats, this is a good all-round approach. The caveats are what we’ll deal with today. Sometimes you’re dealing with software that won’t play nice when moved between systems, like a certain Enterprise™ database solution. Sometimes you can’t feasibly decompose an existing system into neat resource tiers to HA-ify it. And sometimes, you just want HA virtual machines! This can be done.

The solution

If the solution to our problem is to run everything on a single server, so be it. We then virtualise that server, and make it highly available. Once again, it’s important to remember that we’re guarding against a physical server going up in smoke. There’s no…
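For a sense of what an HA virtual machine looks like in practice, a guest can be managed as a Pacemaker resource via the `ocf:heartbeat:VirtualDomain` resource agent, letting the cluster restart or migrate it when a node fails. The resource name and config path below are hypothetical, and this uses the `pcs` shell (the `crm` shell offers an equivalent):

```shell
# Hypothetical sketch: manage a libvirt guest as a Pacemaker resource.
# Pacemaker will restart the VM elsewhere (or live-migrate it, with
# allow-migrate=true) if its current node goes down.
pcs resource create vm-guest1 ocf:heartbeat:VirtualDomain \
    config=/etc/pacemaker/vm-guest1.xml \
    hypervisor=qemu:///system \
    migration_transport=ssh \
    meta allow-migrate=true \
    op monitor interval=30s
```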

Read More

Ninja migrations from VMware to KVM using vmdksync

By | Technical | 2 Comments

We recently made the decision to pay off some of our technical debt by eliminating the VMware servers we built when we first started our Virtual Private Server (VPS) offering. VMware is a commercial vendor platform, so it’s not exactly trivial to jump ship, but it is possible with some time and effort. Forcing a few hours of downtime on our customers for business reasons is not cool, so we had to find a better way.

Background and rationale

When we first started offering virtual servers the software landscape was very different. After comparing what was available at the time we settled on VMware ESX for our virtual private server product – the right features, suitable for a VPS product, secure and manageable enough, sufficiently mature and reliable, and a nominal…

Read More

Bugfixing the in-kernel megaraid_sas driver, from crash to patch

By | Technical | 2 Comments

Today we bring you a technical writeup for a bug that one of our sysadmins, Michael Chapman, found a little while ago. This was causing KVM hosts to mysteriously keel over and die, obviously causing an outage for all VM guests running on the system. The bug was eventually traced to the megaraid_sas driver and the patch has made it to the kernel as of version 3.3. As you can imagine, not losing a big stack of customer VMs at a time, possibly at any hour of the day, is a pretty exciting prospect. This will be a very tech-heavy post but if you’ve ever gone digging into kernelspace (as a coder, or someone on the ops side of the fence) we hope it’ll pique your interest. We’ll talk about…

Read More

Wireless IP KVM mk II

By | Technical | No Comments

If you have been following this blog for a while you might have seen my previous article on the portable, wireless IP KVM that we constructed a while back for datacentre use. This has proven to be an invaluable tool for remotely accessing machines instantly – so invaluable, in fact, that contention for its use frequently causes consternation. When I completed the last device, I made a list of how it could be improved in a future revision, so when I decided we needed a new one, I thought I’d take care of some of the improvements I had planned. To refresh your memory:

- remove covers of internal components to reduce space requirements and improve cooling
- align the wireless antennae in the middle of the case so cables from the wireless…

Read More

A Portable Wireless IP KVM Solution

By | Technical | One Comment

If you have hundreds of dedicated servers in a remote datacentre, and a need to operate on the console of some of those servers on a semi-regular basis (and I KNOW you do), then you’ll understand the frustration of having to physically put someone in front of those machines. You need to take into account travel time, plus the waiting time while the server works away until it next requires human input. This can all be very frustrating and time-wasting, since whoever is designated to operate the console is taken away from their regular tasks, and datacentres being what they are, it is unlikely they’ll be able to make the best use of their time by multi-tasking with other jobs. Enter the Wireless IP KVM. No…

Read More