Server Rack Equipment Layout and Cable Organisation Ideas

This article passes on what we've learnt about setting up a rack, so that you can avoid some of the mistakes we've made over the years. It is divided into two parts: first we identify the issues that warrant consideration, so you can understand the reasoning and extend on our ideas; second, we give our recommendations, or rules to follow if you're just looking for a simple guide.

Factors to consider in server rack management

Before diving into our ideas on best practice for managing server racks, it helps to understand the problems these ideas are intended to solve. If you're fortunate enough never to have ended up with a spaghetti-like mess of cables, whether of your own creation or, worse still, inherited, then hopefully you'll soon be convinced of the need to pay attention to your rack deployment.

Why should you care about rack layout?

A messy rack will inevitably cause outages in its own right, or prolong the outages you do have. How?

  • By accidentally dislodging power and network cables when working on servers or tugging on other cables.
  • By making it impossible to locate the end points of network cables because you can't trace them through the tangle.
  • By physically impeding access to equipment.

Rack specification

I'll only briefly mention racks, since in most cases with co-location you don't have a choice: the rack is already supplied. We discuss the attributes to look for in a rack in more detail, and look at some Australian suppliers, in our article on Server Racks. Factors to be aware of with your physical server rack:

  • Rack size - racks are typically 600mm wide and range in depth from 900mm to 1070mm. Deeper racks tend to make installations more manageable and can help with ventilation. Power rail mounting options also vary, with some providing easier installation and more clearance than others.
  • Power rails - not strictly part of the rack, but the two common options are managed and unmanaged. Managed power rails should be considered an essential item for assisting with remote management.
  • Ventilation - poor rack design can have a huge effect on the build-up of heat in a server rack. Look for something with large areas of perforated panels and plenty of openings at the top of the rack.

Equipment location

Thought needs to be given to the location of your equipment within the rack. Work through the questions below; a simple way to sketch the resulting layout follows the list.

  • How many servers do you expect to install?
  • Where will your switches be located?
  • Will you mount equipment at the top of the rack? If so, can you reach equipment installed at the top? See the LED lights? Easily insert and remove cables?
  • Will you keep spare parts in your rack?
  • Does the power allocation for your rack actually let you fill it up? If not, should you be spacing equipment out?
  • Will you logically group different parts of your infrastructure together?
  • Will you house a keyboard/monitor/mouse within your rack? Have you left space at a comfortable height?
  • Do you get cool air through a hole in the base of the rack? Will you leave space to ensure it is not trapped at the bottom?
  • Will some servers generate more heat than others, and can you spread them out?
  • When you need to do work on your servers, will you be able to easily access network ports, power outlets, USB ports, serial ports, power switches, etc.?
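
If you want to work through these questions before bolting anything in, a simple RU-indexed plan is enough to answer most of them and catch overlaps early. The sketch below is a minimal example of such a plan; the rack height, equipment names and positions are all made-up values, not a recommended layout.

    # Sketch a rack layout as a plan before installing anything.
    # RU numbers and equipment names are made-up examples.

    RACK_HEIGHT_RU = 42

    # Map each device to (bottom RU, height in RU).
    plan = {
        "sw-top":     (42, 1),   # switch clear of other gear at the top
        "cable-mgmt": (41, 1),   # horizontal cable management below it
        "web01":      (30, 1),
        "web02":      (28, 1),   # every second RU left free for airflow
        "db01":       (26, 2),
        "kvm-tray":   (20, 1),   # keyboard/monitor at a comfortable height
    }

    # Check nothing overlaps and nothing falls outside the rack.
    occupied = set()
    for name, (bottom, height) in plan.items():
        rus = set(range(bottom, bottom + height))
        assert rus <= set(range(1, RACK_HEIGHT_RU + 1)), f"{name} is outside the rack"
        assert not (rus & occupied), f"{name} overlaps another device"
        occupied |= rus

    print(f"{len(occupied)} of {RACK_HEIGHT_RU} RU allocated")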

Knowing your RUs

Even if the rack isn't cleverly numbered as in this picture, you can tell where one RU ends and the next begins by the smaller metal spacing between them (the lower ellipse in the picture). Within an RU the holes are separated by larger metal spacings (the upper ellipse in the picture).

Do not be fooled! Not all racks will start with a full RU at the bottom or top. You can use any leftover partial RU to your advantage as extra space for airflow and cooling.

[Image: rack-ru-image.jpg, showing the hole spacing that marks out each RU]
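
For planning purposes it also helps to remember that an RU is a standardised 1.75 inches (44.45mm) of vertical space, so converting between an RU count and physical height is simple arithmetic, as the short sketch below shows.

    # A rack unit (RU) is a standardised 1.75 inches (44.45mm) of vertical space.
    MM_PER_RU = 44.45

    def ru_to_mm(rack_units: int) -> float:
        """Vertical mounting height occupied by a given number of rack units."""
        return rack_units * MM_PER_RU

    # Example: the usable mounting height of typical 42 RU and 45 RU racks
    for ru in (42, 45):
        print(f"{ru} RU = {ru_to_mm(ru):.0f}mm of mounting space")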

Cabling distribution

If you're only deploying a single rack, cabling distribution is not something you will give much thought to: your upstream connections have to enter your rack, and you'll have to deploy a switch somewhere within it. If you're deploying multiple racks, on the other hand, you have a few choices, and these choices relate to where you put your switches.

Option A - Patch panel approach

In multi-rack deployments the traditional approach is to install a patch panel in each rack with enough ports for every device, and then trunk these connections back to a central communications rack or location. In the communications rack you then patch each port into a switch.

This approach is nice since it provides a clean and easy mechanism to connect any service in any rack to any other device in your infrastructure. It minimises the need to string cables between racks in an ad-hoc manner. Patch panels potentially allow for the use of larger switches and higher utilisation of ports.

On the downside, the cost of installing patch panels can be significant, the additional connections introduce extra points of failure, and the panels themselves consume a not insignificant amount of rack space. To make good use of the patch panels you install, you also need to be confident in your long-term requirements.

Option B - Switch distribution approach

An alternative approach is to install switches in each rack and use the switch layer itself as the patching system. Each switch then only needs to be connected by two Ethernet cables.

This alternative has the benefits of:

  • Reduced cabling between racks.
  • Installation can usually be carried out by technical staff (rather than cablers) using off-the-shelf cables.
  • Reduced space requirements.
  • Reduced points of failure.

On the downside, the ability to patch non-network services is restricted: telephone and voice services, cross connects and so forth require custom cabling at the time they are needed.
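
To make the cabling trade-off concrete, the rough comparison below counts the inter-rack cable runs each option implies. The rack count, devices per rack and uplink count are assumed example figures only, not a description of any particular deployment.

    # Back-of-the-envelope comparison of inter-rack cabling for the two options.
    # All figures are assumed example values.

    RACKS = 6
    DEVICES_PER_RACK = 15   # devices needing a network connection in each rack
    UPLINKS_PER_SWITCH = 2  # Option B: Ethernet uplinks from each rack switch

    # Option A: every device is trunked back to the central communications rack.
    option_a_interrack_cables = RACKS * DEVICES_PER_RACK

    # Option B: only the per-rack switch uplinks leave the rack.
    option_b_interrack_cables = RACKS * UPLINKS_PER_SWITCH

    print(f"Option A (patch panels): {option_a_interrack_cables} inter-rack runs")
    print(f"Option B (per-rack switches): {option_b_interrack_cables} inter-rack runs")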

At Anchor, given the majority of our services are network based, we find Option B to be far easier and more workable.

Switching

Whilst not strictly part of the rack hardware, if you're deploying a rack you will need switching hardware. There is a myriad of switch vendors on the market which we won't try to review, but we will consider the choice between managed and unmanaged. To maintain highly available services, remote management capability is vitally important; outages are only extended by the need to get on-site to solve a problem.

We highly recommend the use of managed layer 2 switching at a minimum. This lets you enable and disable ports on problem-causing infrastructure, isolating faults before they affect other services on your network.

If adopting the switch distribution approach discussed above, you need to consider the location of switches within your rack. There are two alternatives: at the top of the rack, or at a lower level, say near the middle.

  • Top-mounted switches can be difficult to reach, and it can be hard to see their LEDs for diagnostic purposes.
  • Mounting at lower levels may increase the risk of connectors being accidentally knocked or disconnected.
  • Top mounting involves longer cable runs to equipment on average than middle mounting, potentially contributing to the cabling mess.
  • Top mounting keeps the switch clear of power rails, which can restrict access to the rear of the rack. In the event of switch failure, a switch mounted in the middle may prove difficult or impossible to remove once power rails and cables are installed. This is a major concern.

The last point is the killer, so in our view switches should be mounted at the top. Physical access to reach ports and view lights is a real concern, and should be mitigated by providing a step ladder or similar for shorter staff to stand on.

Cooling

As the density of equipment in your rack increases, cooling becomes an important factor. Whilst most data centres commit to keeping ambient temperatures within an acceptable range, these guarantees will do nothing to stop your choice of rack layout from generating significant localised heat concentrations. These heat concentrations can lead to increased power consumption and higher rates of equipment failure (through operating outside of manufacturers' specifications).

At Anchor we've seen poor rack design combined with dense rack utilisation push internal rack temperatures towards the 50 degrees Celsius mark in a room with an ambient temperature of 22 degrees!

Racks deployed with equipment drawing less than 1.8kW in total will rarely have a major cooling problem. A cooler rack will always be more efficient, but it is only with increased power density that cooling becomes an issue which must be addressed.
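
As a rough rule of thumb, essentially every watt of power a rack draws ends up as heat that the air moving through the rack has to carry away. The sketch below estimates the airflow needed for a given power draw and allowable air temperature rise, using the standard sensible-heat relation for air; the 10 degree rise in the example is an assumed figure.

    # Rough airflow estimate: all electrical power drawn by the rack is
    # eventually dissipated as heat and must be carried away by air.
    #
    # Sensible heat: P = rho * cp * Q * dT
    #   P   - heat load (W)
    #   rho - air density, ~1.2 kg/m^3 at sea level and ~20 C
    #   cp  - specific heat of air, ~1005 J/(kg.K)
    #   Q   - volumetric airflow (m^3/s)
    #   dT  - allowed air temperature rise across the rack (K)

    RHO_CP = 1.2 * 1005  # J/(m^3.K)

    def airflow_m3_per_hour(power_watts: float, delta_t: float) -> float:
        """Airflow required to remove `power_watts` of heat with a `delta_t` rise."""
        return power_watts / (RHO_CP * delta_t) * 3600

    # Example: a 1.8kW rack with an assumed 10 C front-to-back temperature rise
    print(round(airflow_m3_per_hour(1800, 10)))   # ~537 m^3/h (roughly 316 CFM)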

Things you can do:

  • If you have your own suite, caged or secured area consider using open racks with no sides or doors.
  • Most racks can accommodate top mounted fans that help to exhaust heat and promote drawing of cool air up through the rack.
  • Horizontally mounted 1 RU fan units are available which can help to push hot air out through ventilated doors.
  • Leave space between each server (eg for 1 RU servers, leave every second RU free) to increase airflow.

Rack density

When deciding how your rack will be laid out, it helps to have an idea of what it will look like once fully provisioned. In most cases deployment occurs gradually, with equipment being added over time, so it's not always easy to have a master plan.

One factor that may help you is the maximum power available in your rack. In our article on power vs space in full racks we showed why a full rack will often only be half filled with equipment. If you're aware of rack density limitations at the outset, you can plan around them: leave every second RU between servers free to help with cooling, or install the fixed monitor and keyboard tray safe in the knowledge that you're not going to run out of room.
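
As a rough illustration of why power, rather than space, is usually the limiting factor, the sketch below compares the number of 1 RU servers a rack can physically hold against the number its power allocation can actually supply. The 42 RU height, 2kW allocation and 250W per-server draw are assumed example figures, not measurements from our racks.

    # Compare the physical capacity of a rack with its power capacity.
    # All figures below are illustrative assumptions.

    RACK_HEIGHT_RU = 42        # assumed usable rack units
    POWER_BUDGET_W = 2000      # assumed power allocation for the rack (W)
    SERVER_DRAW_W = 250        # assumed average draw of one 1 RU server (W)

    physical_limit = RACK_HEIGHT_RU                 # one 1 RU server per RU
    power_limit = POWER_BUDGET_W // SERVER_DRAW_W   # servers the power feed can supply

    servers = min(physical_limit, power_limit)
    print(f"Space allows {physical_limit} servers, power allows {power_limit}")
    print(f"Rack is limited to {servers} servers "
          f"({servers / RACK_HEIGHT_RU:.0%} of the space)")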

Maintaining order in the rack

Whatever system you have in place to maintain order in your rack, it's important that the rules are clear and known to everyone who will work on the rack. The rules must be easy and practical to follow on a day-to-day basis, or else a clean rack will quickly become a mess.

Vertical mounting rail positioning

The vertical rails can only be adjusted before any equipment is installed in the rack, so you need to ensure they are set at a depth that will accommodate all of your requirements from day one. This is not always an easy task, since you don't always know in advance what you will be installing in the rack.

The dimensions you need to consider are:

  • The distance between the rails - depth
  • The distance between the vertical rails and the inside of the front door - front spill, and
  • The distance between the vertical rails and the inside of the back door - back spill

Most equipment manufacturers publish the mounting requirements for their equipment, although some are not always easy to find. The equipment we've found consistently the most difficult to accommodate - in the sense that it needs something different to most other equipment - is Dell. Note that Dell equipment comes with a few different rail options, so it's often a matter of choosing the right one at the time of purchase. Drive arrays and blade chassis can also be particularly deep, requiring special consideration.

In setting your rail positions, also look for any intrusions, such as door locking mechanisms, which may affect where you can mount servers.

As a general rule, the positioning of the vertical rails becomes less of an issue in racks of greater depth, as you obviously have more room to play with. At 900mm it can be very much a balancing act; by the time you get out to 1070mm it probably doesn't matter so much.

Anchor rack layout rules

Taking the above factors into consideration, we have a simple set of rules that we follow when laying out new racks.

Hotside/Coldside

Many data centres will dictate a hotside and coldside. Make sure you follow this from the start. The hotside must be the side from which the majority of the hot air leaves the rack, generally the rear of your servers. The coldside will generally be the side facing the front panels of your servers.

Vertical rack mounting rail positioning

Suggested positioning (for 1000mm deep rack):

  • Front spill: 70mm
  • Rail depth: 740mm
  • Back spill: 190mm

These dimensions are based on the requirements of most major server equipment manufacturers. You should check the specifications with the vendors of any equipment that you intend to install. Varying rack depths may require alternative configurations.
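
As a quick sanity check, the three measurements should add up to the rack depth, and the deepest chassis you plan to mount needs to fit within the rack with some clearance left for connectors and cabling. The sketch below performs that check; the 800mm equipment depth is an assumed example value.

    # Sanity-check vertical rail positioning for a given rack depth.
    # Dimensions in millimetres; the equipment depth is an assumed example.

    RACK_DEPTH = 1000
    FRONT_SPILL = 70      # rails to inside of front door
    RAIL_DEPTH = 740      # distance between front and rear rails
    BACK_SPILL = 190      # rails to inside of back door

    DEEPEST_EQUIPMENT = 800   # assumed chassis depth, front rail to rear face

    assert FRONT_SPILL + RAIL_DEPTH + BACK_SPILL == RACK_DEPTH, \
        "Spill and rail depth must add up to the rack depth"

    # Equipment deeper than the rail spacing overhangs past the rear rail,
    # so check it still fits inside the rack with clearance for cables.
    overhang = max(0, DEEPEST_EQUIPMENT - RAIL_DEPTH)
    clearance = RACK_DEPTH - FRONT_SPILL - DEEPEST_EQUIPMENT
    print(f"Overhang past the rear rail: {overhang}mm")
    print(f"Clearance behind the deepest chassis: {clearance}mm")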

Switch location

Switches should be located clear of other equipment within the top 3 RU of the rack.

Cable management

Immediately below the switches, a 1 RU horizontally mounted cable management device should be used to route network cables out to the edge of the rack.

Cables should be trunked vertically on one side of the rack using loops of double-sided velcro.

The vendor-supplied cable management devices should be attached to the APC power rails, and power cables should be cable-tied close to the ports to avoid the risk of accidental disconnection. Power cables should be trunked vertically using loops of double-sided velcro; one trunk will be required on each side of the rack.

Cables should be labelled at both ends so that they can be traced.

Server location

For racks with standard power allocations (<2kW), leave space between servers; for 1 RU servers, leave every second RU free.

Keep space in the middle of your rack free for installation of a fixed or slide-out keyboard and monitor for ease of server maintenance.

Equipment labelling

Label all devices at both front and rear with short, uniquely identifiable names. Names on labels must directly correlate to those used in documentation and monitoring systems. No unlabelled equipment should be permitted in your racks, no excuses!
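
One way to keep labels honest is to periodically cross-check the names recorded on the physical rack against the names used in your documentation or monitoring system. The sketch below assumes both can be exported as simple lists of names; the entries shown are made-up examples.

    # Cross-check physical rack labels against documented/monitored host names.
    # Both lists are made-up examples; in practice they'd come from an audit of
    # the rack and an export from your documentation or monitoring system.

    rack_labels = {"web01", "web02", "db01", "sw-top"}
    documented = {"web01", "web02", "db01", "db02", "sw-top"}

    undocumented = rack_labels - documented       # labelled in the rack, not in docs
    missing_from_rack = documented - rack_labels  # documented, but no label found

    if undocumented:
        print("Labels with no matching documentation:", sorted(undocumented))
    if missing_from_rack:
        print("Documented equipment not labelled in the rack:", sorted(missing_from_rack))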

