How Facebook’s Open Compute Project could better the modern data centre world for all.
A number of years ago the data centre industry was facing a huge problem. The proliferation of cloud alongside the emergence of IoT was starting to change the data centre game – and more importantly, the demands placed on data centre infrastructure.
Rocketing power consumption, energy efficiency and environmental impact rose up the agenda of top tech companies and executives, and in 2011 Facebook appeared to come up with a solution.
Following a redesign of its Oregon data centre, Facebook’s Jonathan Heiliger announced the Open Compute Project (OCP), which saw the social networking giant openly share designs of data centre products. The goal of OCP, as they themselves state, is “reimagining hardware, making it more efficient, flexible, and scalable.”
What was first established after a data centre revamp quickly became a powerful community boasting big names like Google, Microsoft, Goldman Sachs and Rackspace – all with the common goal of “redesigning hardware technology to efficiently support the growing demands on compute infrastructure.”
The community’s goal was a bold one – to make data centres 38 percent more energy efficient and 24 percent less expensive to run.
It is the design, argues Keith Sullivan, managing director at CSsquared, that allows OCP data centres to deliver on these ambitious efficiency targets, while also solving one of the biggest efficiency problems in traditional data centres – power.
“The single biggest inefficiency in a data centre today for the whole PUE model is the power supply of the servers. In each server you have two power supplies, because the traditional data centre is also built on redundancy. These two power supplies are each only running at about 35-40% load, resulting in an efficiency of about 75-80%,” said the MD of open compute consultancy CSsquared.
“So because you’re running A and B feeds, every single server has two power supplies and they’re all losing about 25% of power. If you’re talking about a data centre facility that has 1MW power, 1MW loosely equates to about one million pounds a year. So if you’re losing 25% on your server power supplies, that’s £250,000 a year and that’s the single biggest loser of power.”
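The arithmetic behind Sullivan’s figure can be sketched in a few lines. The numbers below are his rough estimates from the quote above, not measured values:

```python
# Back-of-the-envelope sketch of the PSU losses Sullivan describes.
# All figures are rough estimates quoted in the article.

facility_power_mw = 1.0          # a 1 MW facility
annual_cost_per_mw = 1_000_000   # "1MW loosely equates to about one million pounds a year"
psu_loss_fraction = 0.25         # dual PSUs at 35-40% load => roughly 75-80% efficient

annual_energy_cost = facility_power_mw * annual_cost_per_mw
wasted_in_psus = annual_energy_cost * psu_loss_fraction

print(f"Annual power spend: £{annual_energy_cost:,.0f}")
print(f"Lost in server PSUs: £{wasted_in_psus:,.0f}")  # £250,000
```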
The Open Compute model, however, is built in a totally different way. In a purpose-built Open Compute data centre, there is no need for a UPS or battery room, as the power is fed directly into the cabinets.
One example is a recent collaboration between CSsquared and Volta Data Centres: the opening of The Open Lab, which, with guidance from Facebook, is to be hosted at Volta’s London facility as an open stack and hardware integrator.
“In the Open Compute world, you can take out those power supplies and feed three phase power direct to the rack where there is a rack power shelf which centrally rectifies the power to a 12V DC power feed which is then distributed within the rack via a 12V bus bar.
“So you’ve got one highly efficient rectifier that eliminates that 25% and works it over by 2%. There are huge power efficiencies and the density you get is far greater. But the biggest gain that people don’t get is the shift from redundancy into resilience.”
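The gain from replacing per-server power supplies with one shared rack rectifier can be illustrated with a simple comparison. The 75-80% and ~98% efficiencies are the rough figures from the quotes above, and the 10 kW rack load is a hypothetical value chosen for illustration:

```python
# Illustrative comparison: per-server dual PSUs vs a shared rack
# power shelf rectifying three-phase power onto a 12V DC bus bar.
# Efficiency figures are the article's rough estimates; the rack
# load is a hypothetical assumption.

rack_load_kw = 10.0  # hypothetical rack drawing 10 kW of IT load

def feed_draw(load_kw: float, efficiency: float) -> float:
    """Power drawn from the feed to deliver `load_kw` to the servers."""
    return load_kw / efficiency

traditional = feed_draw(rack_load_kw, 0.775)   # midpoint of 75-80% PSU efficiency
open_compute = feed_draw(rack_load_kw, 0.98)   # shared rack rectifier, ~98%

print(f"Traditional feed draw:  {traditional:.2f} kW")
print(f"Open Compute feed draw: {open_compute:.2f} kW")
print(f"Saving per rack:        {traditional - open_compute:.2f} kW")
```

At these assumed figures, a traditional rack draws roughly 12.9 kW from the feed to deliver 10 kW, while the Open Compute rack draws about 10.2 kW – a saving of around 2.7 kW per rack.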
Resilience is another area where OCP is turning the data centre game on its head. Where downtime and outages fuel news headlines worldwide and hit brand reputation, Mr Sullivan argues that Open Compute data centres need not worry about such risks, nor deal with redundancy. Pointing to Facebook as a case in point, Mr Sullivan told CBR:
“Facebook takes it to the level of where they have their mega data centre up in Sweden and they’ve taken the resilience up to a level where they can lose that whole data centre and nothing happens. So if they lose power to the data centre, they’ll only have enough backup generators to take on the critical workloads and maintain the most critical databases.
“So they’ve taken it to a level where they can lose the whole cabinet, or lose the whole power, or whatever amount that you have of IT infrastructure.”
It seems that Open Compute data centres have it all – power efficiency, resilience and cost effectiveness to boot. What makes this model even more attractive is that it is, in its very name, open. Data centre operators can share in the designs and seize on the cost and power savings of this new hardware approach. Going open compute could be the way forward for data centres, as Mr Sullivan told CBR:
“It’s just way more scaled, way more flexible and the compute power is far greater.”