Tales of Hardware – IBM x3650

March 10, 2009 | Technical

All the servers Anchor buys are from Supermicro. Most people won’t have heard of them, but they’re a sizeable hardware vendor that also does some OEM gear. Supermicro certainly doesn’t carry the mindshare of big brands like HP, Dell, et al., but we chose them because their stuff is reliable and affordable – we focus on the things that actually matter, rather than the enterprise-y idea of sticking with a big brand you trust. “No one ever got fired for buying IBM”, as they say.

Actually, hold that thought for a moment.

I’ve got a couple of servers colocated at the datacentre; the newer of the two is an IBM x3650 named yumi. I chose the IBM because it was on special at the time and has the expansion capacity I wanted – it’s a 2RU box with room for two Xeons, 12 FB-DIMM slots and six hot-swap hard drive bays (eight if you choose the 2.5″ option).

There are a number of things I like about the IBM compared to the Supermicros that I’ve worked with so far. The documentation doesn’t suck and the hardware is very nice to work with, almost entirely tool-less. The rackmount rails are an absolute joy. On the downside, yumi weighs about 20kg – hefting that much server over your head and onto the rails is no mean feat.

Things haven’t been entirely without problems and annoyances, of course. One of the reasons to buy hardware from BigCorp is support: there’s much to be said for being able to get someone on-site to fix or replace troublesome hardware when something goes wrong. At Anchor we’re not too concerned about that. We keep more than enough spare hardware on hand to deal with failures, and that’s just a consequence of the way we run things. No hardware vendor can give us instant turnaround (we can be on-site in 10-15min), and the kinds of failures we have to deal with are best handled ourselves.

Likewise, my colocated server is for my own use, so I’ve no need for vendor support. One of the downsides of buying from BigCorp, then, is that you tend to get locked in to their way of doing things. I was planning to install a second CPU that I’d picked up through retail channels, only to discover that the heatsink is entirely non-standard, and that the motherboard requires a separate Voltage Regulator Module (not the case with Supermicro servers). There are good reasons for this, and I’m sure big enterprises don’t much care about the cost, but a $400 price difference was galling for me.

Don’t even think about buying non-IBM hard drives either – the server doesn’t come with empty hot-swap drive sleds, just blanking plates to cover the front of the chassis. eBay provided a way out of this one, but it was something of a hassle getting the sleds shipped over from the US. Joy of joys, IBM also uses Torx screws on the drive sleds. It looks like I’m not the only one who thinks IBM engages in deviant sexual behaviour, either.

After the purchase of an overpriced Xeon E5420 and half a dozen drive sleds, I turned my attention to the disk subsystem itself. I wanted a hardware RAID card, and lo and behold, there’s one available. A little more money later, and I’ve got myself a RAID-10 array across half a dozen drives. Of course it’s not written anywhere, but the controller is a specialised piece of kit made by Adaptec (it uses the aacraid driver in Linux). We’ve had some mixed experiences with Adaptec hardware at Anchor, but I don’t have much choice here either (the card, with its battery-backed cache RAM, sits in a DIMM slot on the motherboard… huh?).
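If you’re wondering how to tell what you’ve actually got in one of these boxes, a quick poke from the running system will identify the controller and the driver bound to it. The commands below are generic rather than a transcript from yumi, and the output will obviously vary:

    # Identify the RAID controller and confirm which driver is handling it
    lspci | grep -i -e raid -e adaptec   # the controller shows up on the PCI bus
    lsmod | grep aacraid                 # confirms the aacraid module is loaded
    dmesg | grep -i aacraid              # the driver announces the controller at boot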

A couple of weeks ago yumi was rebooted for a kernel update, and then promptly failed to come back up. After much prodding I noticed that the RAID card wasn’t bootable: its boot BIOS had disappeared entirely from the usual POST process! My solution was to install GRUB to a USB stick – this is sufficient to kickstart the boot process and then pass control to the main drives, which otherwise seem to be working fine. Needless to say, I’m rather unimpressed. I’ll probably get around to making a support request eventually, but the fact is that I’m pretty lazy. Indirectly, this is a great argument for letting other people deal with server hardware, rather than colocating your own kit.
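For anyone in the same boat, the recipe goes roughly like this. Treat it as a sketch rather than a transcript of exactly what I did – the device names, partition numbers and kernel version are placeholders you’d need to substitute with your own (this is GRUB legacy syntax):

    # Sketch only: /dev/sdg is assumed to be the USB stick, /dev/sda2 the root
    # filesystem on the RAID array – substitute your own devices and kernel version
    mkfs.ext3 /dev/sdg1                    # a small ext3 partition on the stick
    mkdir -p /mnt/usb && mount /dev/sdg1 /mnt/usb
    mkdir -p /mnt/usb/boot
    cp /boot/vmlinuz-2.6.26 /boot/initrd-2.6.26 /mnt/usb/boot/   # kernel and initrd live on the stick
    grub-install --root-directory=/mnt/usb /dev/sdg              # GRUB’s boot code goes in the stick’s MBR

Then a /mnt/usb/boot/grub/menu.lst along these lines, which loads the kernel from the stick and points root= at the array:

    default 0
    timeout 5
    title yumi (kernel on USB, root on the RAID array)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.26 root=/dev/sda2 ro
        initrd /boot/initrd-2.6.26

Done this way, the BIOS only ever has to read the USB stick; once the kernel and initrd are loaded, the aacraid driver takes over and the array behaves as normal.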

There are two predictable responses here, as I see it:

  1. “IBM are bastards!”
  2. “That hardware’s not for you.”

You get to choose. I just think it’s a learning experience, and now I’m curious about some HP hardware… I expect I’m in for some more learning.

(Actually, there’s another possible response: “why don’t you hack up a solution yourself? it’d be cheap, too”. Anyone who’s worked with computers for a long time knows that the last thing you want to do in your spare time is faff around with computers. I’d much rather be out taking photos, sewing, changing the world, etc.)