Now that we’ve made the decision to deploy hardware in the US, we need to start making some of the practical decisions, such as:
- Which facility provider should we use?
- Where should the data centre be physically located?
To make these decisions we had a number of important requirements for each of the services we’d need to procure.
Data Centre Providers
- The data centre must be a carrier-neutral facility rated as Tier III or greater per the Uptime Institute’s guidelines
- Given we do not have any staff on the ground at this point, we require a good smart-hands service, including a team that will complete all of the initial deployment:
- Receive the servers and network devices from the hardware vendor and verify everything arrived as ordered
- Install the kit into racks and record each device’s location for our internal documentation
- Cable each machine to both power and networking, carefully following cabling diagrams prepared by Anchor and supplied to the technician
- Take care of rubbish removal and disposal from the facility
- Be available 24×7 for emergency response to failed servers/hardware
- Provide a realistic service level agreement for these services so we can reliably predict mean time to repair after a hardware failure
- Be capable enough to get the initial equipment to the point where we can access it remotely to bootstrap the environment.
- Facility location was important to us as well. Do we want somewhere on the West Coast, which is closest to Australia, giving us the lowest latency and making it much easier to visit the facility in person? Or somewhere on the East Coast, which positions us better on a global scale but has higher latency and is less accessible? And how much would the price vary from location to location? There’s an awful lot of competition on the West Coast of America, so perhaps prices there would be more competitive.
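As a rough sanity check on the latency side of that question, best-case round-trip time is bounded by distance and the speed of light in optical fibre (roughly 200,000 km/s). A back-of-the-envelope sketch, where the distances are approximate great-circle figures rather than real cable routes:

```python
# Best-case RTT estimate: light travels ~200,000 km/s in optical fibre.
# Distances below are rough great-circle figures, not actual fibre paths.
FIBRE_KM_PER_SEC = 200_000

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds over a fibre path."""
    return 2 * distance_km / FIBRE_KM_PER_SEC * 1000

print(f"Sydney-LA best case: {min_rtt_ms(12_000):.0f} ms")   # ~120 ms
print(f"Sydney-NY best case: {min_rtt_ms(16_000):.0f} ms")   # ~160 ms
```

Real-world figures will be worse (routing detours, equipment delay), but the gap between the two coasts is real: roughly an extra 40ms of round-trip for every packet heading back to Australia.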
The beauty of doing this entire “Internet thing” for a while is that we already have a reasonable amount of experience when it comes to negotiating bandwidth contracts with telcos and other IP transit suppliers. We also have a pretty good idea of how we want to structure our connectivity.
We essentially need to deploy two networks:
- Our public-facing network connectivity, which would need to:
- Be fully multi-homed. That is, we never allow ourselves to purchase bandwidth from a single supplier, or from companies which share common network components upstream. This is absolutely necessary to avoid any single point of failure.
- Allow us to receive a full BGP feed and dictate how our traffic is routed. We don’t want to rely on third parties to make changes to our network traffic.
- Have a primary data link which is fast and has far more capacity than we need from day one (at least 100Mbps).
- Have a secondary link which can be rapidly upgraded (we’re talking minutes rather than hours).
- An out-of-band management network. This network will be used to build up our infrastructure from day zero; by build up, we mean install operating systems, configure routers and get our primary, public-facing network running. Once the environment has been bootstrapped, we’ll use this network for day-to-day management and, in the unlikely event that our primary, redundant network becomes unavailable, it gives us a way in to diagnose what is going on. Some of the requirements for this link are the exact opposite of those for the public-facing network:
- The link only needs limited capacity; 10Mbps will be sufficient for our purposes.
- The connection should be as simple as possible: no BGP routing, and as few network devices in the path as possible (no routers, just switches).
- It must be totally independent of the primary/backup links; geographic diversity from the other connections is a must.
- It must be reliable.
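The “no shared upstream” requirement on the public network can be made concrete. When evaluating two candidate transit providers, we can compare the AS paths each one advertises towards us and check that the sets of upstream networks they traverse don’t overlap. A sketch of that check, using made-up AS numbers from the private range purely for illustration:

```python
def shared_upstreams(paths_a, paths_b):
    """Return AS numbers appearing in both providers' AS paths.

    Each path is a list of AS numbers; the last element (the origin AS,
    i.e. us) is excluded, since it appears in every path by definition.
    Any overlap in the remainder is a shared upstream: a potential
    single point of failure.
    """
    ases_a = {asn for path in paths_a for asn in path[:-1]}
    ases_b = {asn for path in paths_b for asn in path[:-1]}
    return ases_a & ases_b

# Hypothetical AS paths learned via each provider's looking glass.
provider_a = [[64500, 64510, 64999], [64500, 64520, 64999]]
provider_b = [[64600, 64510, 64999]]  # 64510 also sits behind provider A

overlap = shared_upstreams(provider_a, provider_b)
if overlap:
    print(f"Shared upstream ASes (single point of failure risk): {overlap}")
```

In this example both providers depend on AS 64510 upstream, so buying from the pair of them wouldn’t actually give us independent paths.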
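The out-of-band link’s day-to-day role boils down to a simple fallback: try the primary network first, and only reach for the management path when it’s down. A minimal probe along those lines (the hostnames and ports you’d pass in are placeholders, and a plain TCP connect stands in for whatever health check you actually run):

```python
import socket

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_path(primary: tuple, oob: tuple) -> str:
    """Prefer the primary network; fall back to the out-of-band link."""
    if reachable(*primary):
        return "primary"
    if reachable(*oob):
        return "oob"
    return "unreachable"
```

Because the out-of-band network shares no devices or geography with the primary links, the second branch still works when the first one is dark, which is the whole point of paying for it.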
In more recent times we’ve been deploying Dell hardware for various reasons, including improved performance and greater power efficiency, but one of the biggest gains has actually come from the included DRACs (Dell Remote Access Cards). With these units we can access machine consoles as if we were sitting in front of the physical machine, which means we’re able to do more and more work remotely without needing to be at the data centre in person. Obviously, when we’re deploying hardware on the other side of the globe this inclusion is absolutely imperative. With Dell’s global presence, the decision was very much a ‘no-brainer’.
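Beyond the console, DRACs of this generation also speak IPMI, so power control can be scripted rather than clicked through a web interface. A sketch of how that might be wrapped, assuming the `ipmitool` CLI is installed; the host and credentials are placeholders, not real values:

```python
import subprocess

def ipmi_power_command(host: str, user: str, password: str, action: str) -> list:
    """Build an ipmitool command targeting a DRAC's IPMI-over-LAN interface.

    action: one of 'status', 'on', 'off', 'cycle'.
    """
    if action not in ("status", "on", "off", "cycle"):
        raise ValueError(f"unsupported power action: {action}")
    return ["ipmitool", "-I", "lanplus", "-H", host,
            "-U", user, "-P", password, "chassis", "power", action]

def power_cycle(host: str, user: str, password: str) -> None:
    # Credentials are placeholders; in practice they'd come from config.
    subprocess.run(ipmi_power_command(host, user, password, "cycle"), check=True)
```

Being able to hard-reset a wedged box from the other side of the Pacific, without raising a smart-hands ticket, is exactly the kind of thing that makes remote deployment viable.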
The power rails we use in Australia are APC devices which come with remote reboot capabilities, allowing machines to be powered off and on remotely. We have done a fairly considerable amount of development with these devices, both to track power usage and to integrate them into our provisioning systems. On this basis, we’ll be continuing with these units.
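The power-usage tracking is less exotic than it sounds: metered APC rails report the current draw on each bank (in our experience in tenths of an amp, though treat that unit as an assumption about your particular model and MIB), and turning that into watts is a one-liner. A minimal sketch, with the SNMP polling itself omitted:

```python
def watts_from_load(tenths_of_amps: int, volts: float = 120.0) -> float:
    """Convert a PDU load reading (tenths of an amp) to watts.

    Assumes the rail reports current in tenths of an amp and that the
    feed is 120V, as in a typical US facility (vs 240V back home).
    """
    return (tenths_of_amps / 10.0) * volts

# A reading of 85 -> 8.5A -> 1020W on a 120V feed.
print(watts_from_load(85))
```

Note the voltage difference matters when moving this tooling from Australia to the US: the same current reading represents half the power on a 120V feed.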
The final question is the switching infrastructure and miscellaneous items such as cables and rack cage nuts. Here the important thing was to find a supplier who is local, can deliver everything to the data centre, and is a vendor for both HP (who we use for our switching infrastructure) and the APC remote reboot devices.
The hunt begins!