Data centres are a hot topic right now thanks to outsourcing, cloud computing and Software as a Service (or Software + Services). You might be familiar with a computer room, but it's nothing like a data centre in terms of scale, power, cooling, fire suppression, fault tolerance and processes. It's a thoroughly different experience.
Microsoft is building a set of new data centres around the world to host its new cloud computing (Azure) infrastructure for Software + Services. These include the Grange Castle facility in Dublin that made news headlines a year ago when the first sod was turned. Microsoft appears to refer to the architecture used there as 3rd generation.
The typical data centre is a large purpose-built building with pre-deployed power and cooling. There's a huge cost to building, maintaining and operating it until it's fully populated. The big costs? Construction, rent, water and power. Electricity is a huge expense and we all know it's only getting higher. Green (as in money/taxation, not the environment) Party politicians want carbon taxes and we're likely to see those soon. Those operating costs are increasing, which makes it harder to keep computing costs down – more reason to seek cloud/outsourcing services and capitalise on cost savings through shared or bulk buying, i.e. many clients in a managed data centre sharing costs.
Microsoft builds and maintains its own data centres, so it is its own client. It can't share those costs. I've heard that they buy a staggering number of servers per month, so growth is constant and huge. They could build huge data centres and populate them gradually, but the overhead of half-empty data centres would be massive.
Their 4th generation architecture is a simple concept. Instead of constructing one huge building, they build a spine or backbone. They've defined a modular architecture where they can drop in pre-built, pre-populated building blocks on a just-in-time (JIT) basis. These blocks resemble lorry containers on site. The blocks are pre-fabricated, so construction costs are minimal. Because the building isn't one huge block, it also simplifies cooling – one of the big draws on power and a major consumer of water. They are looking at using uncooled external air to cool the individual blocks; each block has direct access to the outside air. They might not get 19-degree-Celsius internal temperatures, but do they really need that? Nope. Servers will happily run at 30 degrees. We only cool below that for historical reasons and for human comfort.
Using JIT, MS can keep a certain amount of resources free while putting more on order. This Lego-style approach is simple and a money saver: use what you need now, keep some in reserve, and have a fixed plan for what to purchase, and when, to maintain that reserve.
We do something similar at work. We acquire servers only as required. We power up and network racks only as required. We keep a certain percentage of resources free and have a trigger to replenish our reserves. This keeps our operating costs down – savings we can pass on to our clients. Of course, we're not on the same scale as MS's data centres … yet 🙂
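The trigger-based reserve idea above can be sketched in a few lines of code. This is purely illustrative – the reserve fraction, block size and function names are my own assumptions, not anything Microsoft or my employer actually uses:

```python
import math

RESERVE_FRACTION = 0.2   # keep 20% of total capacity free (illustrative figure)
BLOCK_SIZE = 500         # servers per pre-fabricated block (illustrative figure)

def blocks_to_order(total_servers: int, servers_in_use: int) -> int:
    """How many blocks to order so free capacity returns to the reserve level.

    Returns 0 while the reserve is intact; otherwise rounds the shortfall
    up to whole blocks, since a container is the unit of JIT deployment.
    """
    free = total_servers - servers_in_use
    target_free = total_servers * RESERVE_FRACTION
    if free >= target_free:
        return 0  # reserve intact, nothing to order
    shortfall = target_free - free
    return math.ceil(shortfall / BLOCK_SIZE)
```

So with 10,000 servers and 7,000 in use the reserve is healthy and nothing is ordered, but at 9,500 in use the trigger fires and whole blocks go on order. In practice the target would shift as new blocks land, but the principle is the same: a fixed rule decides what and when to buy.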