Why Power is More Important than Space when Provisioning Servers in Full Rack Co-location

In recent years we've seen a change in the charging methodology of data centres: space that was once sold on a per-rack or per-square-metre basis is increasingly sold by the kilowatt (that is, by power consumption). If this change hasn't affected you yet, it's a fairly sure bet that it will soon.

Is the rack half full or half empty?

Equipment designed to be housed in data centres is standardised industry-wide on the 19" rack. Data centres are filled with 19" racks and server equipment is sold in the 19" form factor. Yet whilst the industry continues to manufacture, supply and install 19" racks, the strange reality is that we can only use a small portion of the space they afford. You simply can't budget on filling a rack with the server equipment available for sale today in a data centre offering shared co-location. At best you're probably looking at around 50% utilisation, as we'll see.

Why haven't I heard about this before and why can I only half fill my rack?

There are a number of reasons and changes that have brought us to the situation we're in today, where all of a sudden we need to be more conscious of the power our equipment consumes than of its physical size.

  1. Historically in many major data centres, power has not been measured or charged at the per-rack or per-suite (a group of racks in a caged area) level, nor has it been closely monitored or metered. Charging has been based on physical space or floor area. Per-rack charges and per-square metre charges have been the industry norm.
  2. It's not particularly easy to measure power usage at any level, and it gets progressively harder the lower down the food chain you go. As a result, the choice has no doubt been made to measure the only thing that could easily be measured - physical space utilisation - and hope the pricing levels cover the power bills.
    • Per-suite: requires investment in power distribution units that allow power metering, recording and reporting on a per-client basis.
    • Per-rack: requires investment in, and management of, managed power rails that report power usage, along with collection and reporting of that data. These rails cost more than five times as much as unmetered units.
    • Per-server: is currently not possible with any commonly used hardware from a co-location provider's perspective (in reality it would have to come from a power rail and would require reporting on a per-outlet basis; there may be some manufacturer-specific IPMI-type utilities, but anything used for billing purposes would need to be hardware agnostic). A sketch of this kind of per-outlet aggregation follows this list.
  3. Many large data centres have historically not charged for power usage, or have not enforced power limits (most likely for the reasons above). This has only further entrenched the notion that space, rather than power, is what we pay for.
  4. In any situation other than a data centre - where we have a massive concentration of power-hungry machines - power consumption simply isn't relevant. The power usage of typical office desktops, servers and server rooms is never itemised, and the IT department is never held accountable for it. Power charges are just considered part of fixed operating costs, so thinking about power on a per-server or per-rack basis is a new concept.
  5. Energy and electricity prices have increased significantly in recent years and are only set to continue this trend. The higher they go, the more relevant power costs become. The previously low pricing no doubt contributed to our lack of concern.
  6. As much as the Internet is responsible for bringing the ever-growing number of data centres into the media, companies hosting Internet-facing infrastructure probably aren't a data centre owner's cup of tea. The really large consumers of space with the really large budgets are the corporates and large enterprises. They operate on a different playing field: they tend to buy massive amounts of space, never push any of it to its limits, and only ever utilise a minor percentage of the power allocated to them - so measuring power isn't that important.
  7. The power density of computers and servers has been on a constant growth path, and it's only in very recent times that hardware manufacturers have given any consideration to reducing power consumption. Meanwhile the number of CPUs, cores and hard drives and the amount of RAM that can be crammed into a 1 RU space continues to increase. As a result, the amount of energy a rack filled with 1 RU servers - or worse still, blade centres - can consume is very, very large. The corresponding heat output is also huge, to the point that traditional cooling systems simply can't cope with large numbers of full racks in condensed areas.
  8. Power is not easily quantified - it's invisible. It can be measured, but it's very dynamic and unpredictable. You can look at a server and know its height, width and depth. The same machine, however, can vary significantly in power consumption based purely on how it is used: an application that runs the box at 100% utilisation will cause it to consume significantly more power than one that leaves the server idle. Furthermore, with every change to the CPU, memory or hard disc specification, the power consumption can vary wildly again.
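
To make the per-rack and per-outlet metering mentioned in point 2 more concrete, here is a minimal sketch of how readings from a managed power rail might be aggregated into per-client figures. This is illustrative only: the read_outlet_amps function, the outlet-to-client mapping and the 240 V nominal voltage are all assumptions, and a real rail would be polled via its own management interface (SNMP, serial or web).

    # Hypothetical sketch: turning per-outlet readings from a managed power
    # rail into per-client consumption figures that could feed a billing system.
    from collections import defaultdict

    NOMINAL_VOLTS = 240            # assumed supply voltage
    SAMPLE_INTERVAL_HOURS = 0.25   # e.g. poll the rail every 15 minutes

    # Assumed mapping of rail outlets to the clients that own them.
    OUTLET_OWNERS = {1: "client-a", 2: "client-a", 3: "client-b", 4: "client-b"}

    def read_outlet_amps(outlet):
        """Placeholder for a real per-outlet current reading from the rail."""
        return {1: 0.9, 2: 0.9, 3: 0.5, 4: 0.5}[outlet]

    def sample_energy_kwh():
        """One polling pass: convert each outlet's Amps into kWh for the interval."""
        usage = defaultdict(float)
        for outlet, owner in OUTLET_OWNERS.items():
            watts = read_outlet_amps(outlet) * NOMINAL_VOLTS
            usage[owner] += watts / 1000.0 * SAMPLE_INTERVAL_HOURS
        return dict(usage)

    print(sample_energy_kwh())
    # roughly {'client-a': 0.108, 'client-b': 0.06} kWh for the interval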

The status quo

Currently, most default power allocations in shared co-location facilities seem to top out somewhere between 1.6 kW and 2.4 kW per rack. It's beyond the scope of this article to go into the theory, but suffice to say this represents a power load of between roughly 6.6 Amps and 10 Amps per rack. Whilst there's a direct relationship between Watts and Amps, I use Amps because most managed power rails will spit this figure out at you fairly easily.

This is very different to the power that is actually delivered to your rack. The industry norm seems to be 2 x 20 Amp circuits, one of which is usually considered to be purely for redundancy. So of the 40 Amps provisioned, the power limits may allow you to use as little as 17% before you start to exceed your allocation.
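
A quick back-of-envelope conversion shows where those Amp figures and the 17% come from. The 240 V supply voltage is an assumption (the figures above suggest something close to it); substitute your local voltage as appropriate.

    # Convert a kW allocation into Amps and compare it with the 2 x 20 A
    # (40 A) typically provisioned per rack. Assumes a 240 V nominal supply.
    VOLTS = 240
    PROVISIONED_AMPS = 40          # 2 x 20 A circuits, one usually redundant

    for kw in (1.6, 2.0, 2.4):
        amps = kw * 1000 / VOLTS
        print(f"{kw} kW -> {amps:.1f} A "
              f"({amps / PROVISIONED_AMPS:.0%} of the 40 A provisioned)")

    # 1.6 kW -> 6.7 A (17% of the 40 A provisioned)
    # 2.0 kW -> 8.3 A (21% of the 40 A provisioned)
    # 2.4 kW -> 10.0 A (25% of the 40 A provisioned)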

Given the power consumption of equipment available today, these numbers may seem artificially low. To understand the reason they are set at this level you need to consider the heat generated by the power consumption.

If you want to follow the manufacturers' specifications (and keep ambient room temperatures around 22 degrees Celsius), you need to consider the ability of the data centre and your racks to dissipate the heat generated by all your power-hungry servers. Your average shared co-location facility is geared towards rack deployments that stay within the average power consumption level the facility was designed for. If you try to fill a rack with 1 RU servers, you're going to greatly exceed the power limits and end up with a rack at temperatures well outside the recommended range.

Most shared facilities tend to adopt closed racks for security reasons, and if you're only occupying a small number of racks in a shared space this will almost certainly be the case. Closed racks unfortunately tend to inhibit air flow and trap hot air close to the servers.

Being able to dissipate the heat in a fully laden data centre becomes the primary problem and the one that dictates the power limits.

The problem of heat is circular: the more power you draw, the more heat you generate, and the more energy you must then consume to remove that heat and maintain an acceptable ambient temperature.
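
As a rough illustration of that loop (assuming essentially every watt drawn ends up as heat, and using an assumed cooling coefficient of performance of about 3 rather than a figure from any particular facility):

    # Rough illustration: nearly every watt a rack draws becomes heat that
    # the cooling plant must remove, at its own cost in electricity.
    WATTS_TO_BTU_PER_HOUR = 3.412
    ASSUMED_COOLING_COP = 3.0      # assumed heat removed per unit of cooling input

    rack_load_kw = 3.36            # the measured rack discussed later in this article
    heat_btu_hr = rack_load_kw * 1000 * WATTS_TO_BTU_PER_HOUR
    cooling_kw = rack_load_kw / ASSUMED_COOLING_COP

    print(f"{rack_load_kw} kW of IT load gives off about {heat_btu_hr:,.0f} BTU/hr of heat")
    print(f"Removing it costs roughly another {cooling_kw:.1f} kW of cooling power")

    # 3.36 kW of IT load gives off about 11,464 BTU/hr of heat
    # Removing it costs roughly another 1.1 kW of cooling power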

So how full is a full rack?

As we've discussed, server power consumption can vary wildly depending on the specification and the nature of the load on the server. We analysed one of our dedicated server racks last year to get some average figures and came up with the following.

Rack configuration:

  • 29 live rackmount 1 RU servers
  • 20 servers configured with
    • Dual power supply
    • Dual Xeon Dual/Quad core CPUs
    • Average 2 GB RAM
    • Mix of SCSI/SAS HDDs, typically two per server at 10/15k RPM
  • 9 servers configured with
    • Single power supply
    • Single Xeon Dual/Quad core
    • Average 1 GB RAM
    • 2 x SATA HDDs
  • Equipment usage
    • Most of this equipment is geared towards serving websites; average load is not particularly high.

Rack summary:

  • RU of servers: 29
  • Power consumption: 14 Amps or 3.36 kW

We can use these figures to work out how many servers a rack operating under more typical co-location power limits can support:

  • 1.6 kW : 13 servers
  • 2.0 kW : 17 servers
  • 2.4 kW : 21 servers

At best (21 servers in a 42 RU rack) you're looking at around 50% space utilisation.
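
The arithmetic behind those figures is straightforward; here is a quick sketch based on the measured rack above (the results are approximate, and in practice you would also leave yourself some headroom):

    # Servers-per-rack estimate based on the measured rack above:
    # 29 x 1 RU servers drawing a combined 3.36 kW.
    measured_servers = 29
    measured_load_kw = 3.36
    avg_watts_per_server = measured_load_kw * 1000 / measured_servers   # ~116 W

    for cap_kw in (1.6, 2.0, 2.4):
        fit = cap_kw * 1000 / avg_watts_per_server
        print(f"{cap_kw} kW limit -> roughly {fit:.1f} servers of this mix")

    # 1.6 kW limit -> roughly 13.8 servers of this mix
    # 2.0 kW limit -> roughly 17.3 servers of this mix
    # 2.4 kW limit -> roughly 20.7 servers of this mix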

If you're using dual power supply equipment, the number of available power outlets in most racks also becomes a constraint you need to investigate: the commonly used power rails offer only 21 usable outlets each (24 outlets, three of which use a less-common high-current pinout). With one rail per circuit that's 42 outlets, so at most 21 dual-supply servers per rack regardless of the power limits.

What does all this mean for me?

Right now there are a number of things you can do to operate within the typical power constraints:

  • Start by having realistic expectations for how many servers you will be able to put in each rack. Do your calculations.
  • Realise that just because your rack is 42 RU and your servers are 1 RU each does not mean you can put 42 servers in it.
  • Get an idea of how much power your equipment uses. Cheap metering devices that do this are available from your local electronics store.
  • Look for low power variants of CPUs if purchasing new equipment.
  • Consider virtualisation to generate more efficient utilisation of server hardware.
  • On larger deployments, consider running higher levels of per-rack utilisation but deploying a quarter or a third fewer racks. You can save significantly on your infrastructure spend.
  • If you expect to have high power load deployments, pay careful attention to your choice of server racks.

Where technology will take this problem in future is hard to tell. The world seems to be rapidly coming to grips with a future where energy is much more expensive than it is today and where global warming is a real phenomenon. More power-efficient technologies are on the way, including solid state storage, smaller-profile server hard discs, blade servers and more efficient server designs. To what degree the savings these technologies bring will be offset by an ever-increasing appetite for hosted infrastructure, we can only speculate.