Like the unfortunate dieter who only seems to gain more weight, power-hungry data centres, despite adopting virtualisation and power management techniques, seem to be consuming more energy than ever, judging from some of the talks at the Uptime Symposium 2010, held this week in New York.
"There is a freight train coming that most people do not see, and it is that you are going to run out of power and you will not be able to keep your data centre cool enough," Rob Bernard, the chief environmental strategist for Microsoft, told attendees at the conference.
Power usage is not a new issue, of course. In 2006, the US Department of Energy predicted that data centre energy consumption would double by 2011 to more than 120 billion kilowatt-hours (kWh). This prediction seems to be playing out: an ongoing survey from the Uptime Institute found that, from 2005 to 2008, the electricity usage of its members' data centres grew at an average of about 11 percent a year.
But despite all the talk about green computing, data centres don't seem to be getting more power efficient. In fact, they seem to be getting worse.
"We haven't fundamentally changed the way we do things. We've done a lot of great stuff at the infrastructure level, but we haven't changed our behaviour," Bernard said.
Speakers at the conference pointed to a number of different power-sucking culprits, including energy-indifferent application programming, siloed organisational structures, and, ironically, better hardware.
One part of the problem is the way applications are developed. "Applications are architected in the old paradigm," Bernard said. Developers routinely build programs that allocate too much memory and hold on to the processor for too long. A single program that isn't written to go into sleep mode when not in use will drive up power consumption for the entire server.
"If the application isn't energy-aware, it doesn't matter that every other application on the client is," he said. That one application will prevent the computer from going into a power-saving sleep mode.
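Bernard's point can be illustrated with a common anti-pattern. A program that polls for work in a tight loop keeps a core busy even when there is nothing to do, preventing the machine from dropping into low-power states; a version that blocks until work arrives lets the operating system idle the core. The following is a minimal, illustrative Python sketch (the worker functions and task names are hypothetical, not from the talk):

```python
import queue
import threading

# Anti-pattern: busy polling keeps a core awake even when there is no work.
def busy_poll(inbox, stop):
    while not stop.is_set():
        try:
            job = inbox.get_nowait()   # spins constantly, burning cycles
        except queue.Empty:
            continue                   # never sleeps: the CPU cannot idle
        job()

# Energy-aware version: block until work arrives, letting the OS idle the core.
def event_driven(inbox, stop):
    while not stop.is_set():
        try:
            job = inbox.get(timeout=1.0)  # sleeps in the kernel while waiting
        except queue.Empty:
            continue                      # woke only to re-check the stop flag
        job()

if __name__ == "__main__":
    inbox, stop = queue.Queue(), threading.Event()
    worker = threading.Thread(target=event_driven, args=(inbox, stop))
    worker.start()
    inbox.put(lambda: print("job done"))
    stop.set()
    worker.join()
```

Both loops process the same jobs; the difference is that the first one consumes a full core's worth of cycles while waiting, which is exactly the behaviour that keeps the whole server out of its sleep states.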
The relentless pace of processor improvement is another culprit, at least when data centre managers don't handle it correctly. Thanks to Moore's Law, under which the number of transistors on new chips doubles every two years or so, each new generation of processors can double the performance of its predecessor.
In terms of power efficiency, this is problematic, even if the new chips don't consume more power than the old ones, Bernard said. Swapping out old processors for new ones may get the application to run faster, but the application takes up correspondingly less of the more powerful CPU's resources. Meanwhile, the unused cores idle, still consuming a large amount of power. This means more capacity is wasted, unless more applications are folded onto fewer servers.
"As soon as you replace your hardware with something more efficient, your CPU usage, by definition, will go down," Bernard said.
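Bernard's arithmetic is easy to reproduce. Assuming, for illustration, a server running a fixed workload, doubling the hardware's throughput mechanically halves the utilisation figure (the numbers below are hypothetical, not from the conference):

```python
# Illustrative arithmetic for Bernard's point: a fixed workload on
# progressively faster hardware yields ever-lower CPU utilisation.
def utilisation(workload_ops, capacity_ops):
    """Fraction of available cycles the workload actually uses."""
    return workload_ops / capacity_ops

old_capacity = 100                 # arbitrary units of compute per second
workload = 25                      # the application's steady demand
new_capacity = old_capacity * 2    # one Moore's Law generation later

print(utilisation(workload, old_capacity))  # 0.25
print(utilisation(workload, new_capacity))  # 0.125, unless workloads are consolidated
```

The idle capacity still draws a substantial fraction of peak power, which is why the only way to recover efficiency is to fold more applications onto fewer servers.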
Speakers at the conference estimated that average CPU utilisation (the proportion of processor cycles actually tasked with doing useful work) hovered somewhere between 5 percent and 25 percent. Despite virtualisation efforts, the figure appears to be falling over time.
Organisations are not thinking enough about how to consolidate workloads, Bernard charged. Each new application an organisation adds tends to get its own silo, and little effort goes into sharing resources across them.