Load balancing at Github: Why ldirectord?

October 31, 2009

A few commenters on Github’s blog post “How We Made Github Fast” have asked why ldirectord was chosen as the load balancer for the new site. Since I made most of the architecture decisions for the Github project, it’s probably easiest if I answer that question directly here, rather than in a comment.

Why ldirectord rocks

The reasons for Github using ldirectord are fairly straightforward:

  • I have a lot of experience with ldirectord. Never underestimate the value of knowing where the bodies are buried. In ldirectord’s case, there aren’t many skeletons, but “better the devil you know” is a valid argument. If you’ve got strong experience in making something work (and you’ve managed to make it work), and you don’t have a lot of time for science experiments, then there’s a lot to be said for going with what you know.

    This goes beyond simply knowing what to do when things go wrong, of course. You’ll also know how to install and configure it already, how to monitor it, and so on.

    What’s more, in ldirectord’s case I had already proven that it worked in an architecture almost identical to Github’s, and with a similar load profile. At a previous job, I had ldirectord serving a sustained aggregate of 2500 TCP connections per second on a 128MB Xen VM, passing to a large set of backends in much the same way Github’s setup does (a sketch of what such a configuration looks like follows this list).

  • Anchor has a lot of experience with ldirectord. Whilst my own experience is one thing, there’s a lot more to running an infrastructure than just setting it up. I like to take holidays as much as anyone, so there was no point in using something that nobody else in the company had any experience with if there was an alternative that we all knew.

    Thankfully, ldirectord lined up nicely, since it’s what we use for our other load balancing setups (not set up by me, either; these were already in place before I arrived). This meant that there was already a pile of documentation and knowledge amongst the sysadmin team about ldirectord and its quirks. Also, being automation junkies, we already had Puppet dialled in to install and configure ldirectord, and we knew exactly how to monitor it.

  • Ldirectord will do the job. Given my own prior experience and that of the rest of the Anchor team, we were confident that ldirectord would do the job, and at the end of the day that’s what really matters.
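
For anyone who hasn’t used it, here’s a rough sketch of the shape of an ldirectord configuration. The addresses, health-check URL and weights are invented for illustration, not Github’s actual settings; the point is just how little there is to it: one virtual IP, a set of real servers, a health check, and a scheduler.

    # /etc/ha.d/ldirectord.cf -- illustrative values only
    checktimeout=10
    checkinterval=5
    quiescent=yes

    # Balance one virtual IP across two backends using direct routing
    # ("gate"), so replies from the backends bypass the load balancer.
    virtual=10.0.0.100:80
            real=10.0.0.11:80 gate 10
            real=10.0.0.12:80 gate 10
            service=http
            request="/up.html"
            receive="OK"
            scheduler=wlc
            protocol=tcp
            checktype=negotiate

A failover tool (Pacemaker, in our case) then just has to keep the virtual IP and the ldirectord daemon alive on whichever load balancer node is active.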

The Alternatives

It’s all well and good to say “we know it and it works”, but I’m not really expecting anyone to just read that and say “well, OK, I guess we’ll use ldirectord”. In fact, if you apply the above criteria to your own situation, there’s every possibility that you’ll come up with a different answer; and if you’ve never set up a load balancer at all, then you’ve got no experience of your own to guide you.

So, here are the other load balancing options I’ve dealt with, and what I think of them. This might give you a bit of food for thought when choosing your load balancer.

  • keepalived. This is the project closest to ldirectord in terms of functionality and operation. It actually uses the same load balancing “core” as ldirectord: IPVS, the kernel’s IP Virtual Server, which is part of the Linux Virtual Server project. As such, it performs similarly to ldirectord when it comes to actually redirecting requests to backends, and is another excellent choice for load balancing.

    For Github, though, there wasn’t any benefit in using keepalived. Whilst I used keepalived extensively at my last job, nobody else at Anchor had had much to do with it. Also, keepalived has a built-in failover mechanism, which we didn’t need because we already use Heartbeat/Pacemaker for all our HA/failover requirements. I also feel that keepalived is more complicated when compared directly to ldirectord, largely because of its built-in failover capabilities. That’s not to say that combining Pacemaker and ldirectord is dirt simple, but if you’ve already got Pacemaker on hand anyway…

    If all you needed was an HA load balancer, and you had no experience with either ldirectord or keepalived, I’d probably recommend keepalived over ldirectord, as it’s one project and one piece of software to do everything you need.

  • Load-balancing appliances. Sometimes misleadingly referred to as “hardware” load balancers (they’re still chock full of software, kids, and unlike high-end routers, I don’t know of any true L4 load balancer that has its forwarding plane entirely in hardware).

    I loathe these things. They’re expensive, restrictive, slow, and generally cause you a lot more pain and suffering than they’re worth. At my last job, one of my projects was to convert most of one of our existing clusters from a load-balancing appliance to use keepalived. Why would we do this? Because the $100k worth of appliance wasn’t capable of doing the job that $15k worth of commodity hardware and an installation of keepalived were handling with ease — and with capacity to spare. That cluster was our smallest, too, with probably only 2/3 the capacity of the other clusters run by keepalived.

    At the job where I had ldirectord handling 2500 conn/sec, we had also previously used a load-balancing appliance, which was supplied and managed by the hosting provider. It was a management nightmare: we couldn’t get any useful statistics out of it at all, like the conn/sec coming in or going out, and we couldn’t usefully adjust the weightings of each backend (to tune how many connections were going to each different sort of machine) or manage the system in real time. When we switched to ldirectord, a small shell script (involving watch and ipvsadm, mostly; see the snippet after this list) was all it took for the CTO to be able to watch exactly how the cluster was performing, in real time, throughout the day. He loved the visibility, and the fact that we were saving several hundred dollars a month didn’t hurt, either.

  • haproxy. While we use haproxy extensively within Github, I don’t think haproxy is the right solution as the front-end load balancer for a high volume website. Being a proxy, rather than a simple TCP connection redirector, it has much larger CPU and memory overheads, and adds more latency to the connections. All of Github’s load balancing is being done out of one small VM, and it barely raises a sweat. The return traffic doesn’t even go back through the load balancer at Github, since we’re using a really neat IPVS mode (direct routing) that lets the backends reply straight to the client. While you can throw hardware at the load balancing problem, I still prefer to be efficient where possible.

    Since haproxy terminates the client’s TCP connection and opens a second one to the backend, rather than just redirecting the existing connection, the backends see the proxy’s address as the source IP. You can work around that in HTTP with custom headers such as X-Forwarded-For (see the fragment after this list), but that doesn’t work for other protocols like SSH. I cringe at the thought of trying to defend against a DDoS attack when the most useful piece of diagnostic information (the source IP) can’t be correlated against the actions of an attacker on the site.

    If all you know is haproxy, and you’re running a low-volume site that only has to deal with HTTP(S), then haproxy will probably do the job; it’s certainly handling more connections inside Github than most sites will ever see. However, I’d recommend getting someone who does systems administration full-time (like us!) to install and manage a real load balancer like ldirectord rather than using haproxy, and to keep the rest of your basic infrastructure on track while they’re at it. Wouldn’t you rather be developing new features than dealing with this stuff?
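
As for the “small shell script” mentioned above, it doesn’t need to be anything fancier than a couple of commands along these lines (the addresses are hypothetical; check the ipvsadm man page for the details on your version):

    # Per-second connection, packet and byte rates for every virtual
    # service and backend, refreshed once a second:
    watch -n1 'ipvsadm -L -n --rate'

    # Cumulative counters, if you prefer those:
    ipvsadm -L -n --stats

    # Re-weighting a backend on the fly is a one-liner too, e.g. to send
    # twice as much traffic to one particular real server:
    ipvsadm -e -t 10.0.0.100:80 -r 10.0.0.12:80 -g -w 20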
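
And for completeness, the HTTP workaround mentioned above is the X-Forwarded-For header, which haproxy will add for you with a single option. A minimal, illustrative fragment (not Github’s actual haproxy setup) looks something like this:

    listen web 0.0.0.0:80
        mode http
        balance roundrobin
        # Pass the original client address to the backends so they can
        # log it; this does nothing for non-HTTP protocols like SSH.
        option forwardfor
        server app1 10.0.0.11:80 check
        server app2 10.0.0.12:80 check

It works, but your backends and your log analysis all have to know to look at the header instead of the socket’s source address, and there’s no equivalent trick for SSH.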

So, those are one geek’s opinions on load balancing. Questions and comments appreciated, and if you’d like to know more about any part of the Github architecture (or any other aspect of systems administration), please let us know in the comments and I’ll whip up some more blog posts.