How green is your datacentre? If you do not care now, you will soon. Most datacentre managers have not noticed the steady rise in electricity costs, since they do not usually see those bills. But they do see the symptoms of surging power demands.

High-density servers are creating hot spots in datacentres, with power densities surpassing 30kW per rack for some high-end systems. As a result, some datacentre managers are finding they cannot get enough power distributed to the racks. Others are finding they cannot get more power to the building: they have fully tapped the utility company's ability to deliver additional capacity.

The problem already has the attention of Mallory Forbes, senior vice president and manager of mainframe technology at US bank Regions Financial. "Every year, as we revise our standards, the power requirements seem to go up," says Forbes. "It creates a big challenge in managing the datacentre because you continually have to add power."

Energy efficiency savings can add up. A watt saved in datacentre power consumption saves at least a watt in cooling. IT managers who take the long view are already paying attention to the return on investment associated with acquiring more energy-efficient equipment. "Energy becomes important in making a business case that goes out five years," says Robert Yale, principal of technical operations at investment management firm Vanguard Group. His 60,000 square foot datacentre caters mostly to web-based transactions. While security and availability come first, he says Vanguard is "focusing more on the energy issue than we have in the past".

Green datacentres do not just save energy, they also reduce the need for expensive infrastructure upgrades to deal with increased power and cooling demands. Some organisations are also starting to take the next step and are looking at the entire datacentre from an environmental perspective.

Following these steps will keep astute datacentre managers ahead of the game.

Consolidate servers, then consolidate some more

Existing datacentres can achieve substantial savings by making just a few basic changes – and consolidating servers is a good place to start, says Ken Brill, founder and executive director of the Uptime Institute consultancy. In many datacentres, he says, "between 10% and 30% of servers are dead and could be turned off".

Cost savings from removing physical servers can add up quickly - up to £600 in energy costs per server per year, according to one estimate. Mark Bramfitt, senior programme manager in customer energy management at US energy supplier Pacific Gas and Electric, adds that the money saved in power use is matched by savings in cooling costs. The utility firm offers a "virtualisation incentive" scheme that pays up to $300 (£150) per server taken out of service through server consolidation.
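For a rough sense of where an estimate like £600 a year comes from, the sum below assumes a 400W server running around the clock, a watt of cooling for every watt of IT load, and a tariff of about 8.5p per kWh. All three inputs are illustrative assumptions rather than figures from the article.

```python
# Back-of-the-envelope estimate of the annual energy cost of one idle server.
# All inputs are illustrative assumptions, not figures quoted in the article.

SERVER_WATTS = 400          # assumed average draw of a decommissionable server
COOLING_OVERHEAD = 1.0      # assume one watt of cooling per watt of IT load
TARIFF_GBP_PER_KWH = 0.085  # assumed electricity tariff
HOURS_PER_YEAR = 24 * 365

total_watts = SERVER_WATTS * (1 + COOLING_OVERHEAD)
kwh_per_year = total_watts * HOURS_PER_YEAR / 1000
annual_cost = kwh_per_year * TARIFF_GBP_PER_KWH

print(f"Energy drawn (incl. cooling): {kwh_per_year:,.0f} kWh/year")
print(f"Approximate annual cost:      £{annual_cost:,.0f}")
# -> roughly £600 a year, in the same ballpark as the estimate above
```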

Once idle servers have been removed, datacentre managers should consider moving as many server-based applications as possible into virtual machines. This allows a substantial reduction in the number of physical servers required, while increasing the utilisation levels of remaining servers.

Most physical servers today run at about 10% to 15% utilisation. Because an idle server can still consume as much as 30% of the energy it uses at peak, increasing utilisation levels delivers more computing for the money.

To that end, virtualisation vendor VMware is working on a new feature associated with its Distributed Resource Scheduler that will dynamically allocate workloads between physical servers that are treated as a single resource pool. Distributed Power Management is designed to squeeze virtual machines onto as few physical machines as possible, and to automatically power down servers that are not being used. The system makes adjustments dynamically as workloads change. In this way, workloads might be consolidated in the evening during off-hours, then reallocated across more physical machines in the morning, as activity increases.
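The article does not describe VMware's placement algorithm, but the underlying idea is straightforward to sketch. The illustrative Python below uses a simple first-fit-decreasing heuristic: sort workloads by demand, pack each onto the first host with room, and treat any host not needed as a candidate for power-down. The function name, host capacity and VM loads are all hypothetical; this is not VMware's actual logic.

```python
# Minimal sketch of the consolidation idea behind features such as Distributed
# Power Management: pack virtual machine workloads onto as few physical hosts
# as possible, then power down whatever is left idle. This is an illustrative
# first-fit-decreasing heuristic, not VMware's actual algorithm.

def consolidate(vm_loads, host_capacity):
    """vm_loads: CPU demand of each VM (same units as host_capacity)."""
    hosts = []  # each host is a list of VM loads assigned to it
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            hosts.append([load])  # no existing host fits: bring one online
    return hosts

# Example: 12 lightly loaded VMs fit on 3 hosts instead of 12 physical servers.
vms = [10, 15, 20, 5, 30, 25, 10, 15, 20, 5, 30, 25]
placement = consolidate(vms, host_capacity=80)
print(f"Hosts needed: {len(placement)}; the rest can be powered down")
```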

Turn on power management

Although power management tools are available, administrators today do not always make use of them. "In a typical datacentre, the electricity usage hardly varies at all, but the IT load varies by a factor of three or more. That tells you that we're not properly implementing power management," says Amory Lovins, chief scientist at the Rocky Mountain Institute, a US energy and sustainability research firm.

Just taking full advantage of power management features and turning off unused servers can cut datacentre energy requirements by about 20%, he adds.
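As a concrete example of what "turning on power management" can mean in practice, many Linux servers expose processor frequency scaling through the cpufreq sysfs interface. The sketch below reads each CPU's current scaling governor and can switch it to a demand-based one; the available governors depend on the kernel and driver, so treat this as illustrative rather than universal.

```python
# Sketch: inspect and set the Linux cpufreq scaling governor via sysfs.
# Requires root to write; available governors depend on the kernel and driver.
import glob

def current_governors():
    """Return {sysfs_path: governor} for every CPU exposing cpufreq controls."""
    result = {}
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
        with open(path) as f:
            result[path] = f.read().strip()
    return result

def set_governor(governor="ondemand"):
    """Ask every CPU to use the given governor (e.g. 'ondemand' or 'powersave')."""
    for path in current_governors():
        with open(path, "w") as f:
            f.write(governor)

if __name__ == "__main__":
    for cpu, gov in sorted(current_governors().items()):
        print(cpu, "->", gov)
    # set_governor("ondemand")  # uncomment to enable demand-based scaling
```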

This is not happening in many datacentres today because administrators focus almost exclusively on uptime and performance, and IT staff are not comfortable yet with available power management tools, says Christian Belady, technologist at Hewlett-Packard. He argues that turning on power management can actually increase reliability and uptime by reducing stresses on datacentre power and cooling systems.

Vendors could also do more to facilitate the use of power management capabilities, says Brent Kerby, Opteron product manager at Advanced Micro Devices’ server team. AMD and other chip makers are implementing new power management features, but he warns that while support for these features is built into Microsoft Windows, “you have to adjust the power scheme to take advantage of it". Kerby says the feature should be turned on by default. "Power management technology is not leveraged as much as it should be," he adds.

But in some cases, power management may cause more problems than it cures, says Jason Williams, chief technology officer at messaging logistics service provider DigiTar. He runs Linux on Sun T2000 servers with UltraSparc multicore processors. "We use a lot of Linux, and power management can cause some very screwy behaviours in the operating system," he says.

There can be problems using the Advanced Configuration and Power Interface (ACPI), a specification co-developed by HP, Intel, Microsoft and other industry players, he says. "We've seen random kernel crashes primarily. Some systems seem to run Linux fine with ACPI turned on, and others don't. It's really hard to predict, so we generally turn it and any other power management off."

Upgrade to energy-efficient servers

The first generation of multicore chip designs showed a marked decrease in overall power consumption. "Intel's Xeon 5100 delivered twice the performance with 40% less power," says Lori Wigle, director of server technology and initiatives marketing at Intel. Moving to servers based on these designs should increase energy efficiency.

Future gains are likely to be more limited, however. Sun Microsystems, Intel and AMD all say they expect their servers' power consumption to remain flat in the short term. AMD's current processor offerings range from 89W to 120W. "That's where we're holding," says AMD's Kerby. For her part, Wigle does not expect Intel's next-generation products to repeat the efficiency gains of the 5100 chip. "We'll be seeing something slightly more modest in the transition to 45 nanometre products," she says.

Chip makers are also consolidating functions such as input-output and memory controllers onto the processor platform. Sun's Niagara II includes a Peripheral Component Interconnect Express bridge, 10 Gigabit Ethernet and floating-point functions on a single chip. "We've created a true server on a chip," says Rick Hetherington, chief architect and engineer at Sun.

But this consolidation does not necessarily mean lower overall server power consumption at the chip level, says an engineer at IBM's System x platform group who asked not to be named. Overall, he says, net power consumption will not change. "The gains from integration... are offset by the newer, faster interconnects, such as PCIe Gen2, CSI or HT3, FBDIMM or DDR3," he says.

Use high-efficiency power supplies

John Koomey, a consulting professor at Stanford University in the US and a staff scientist at Lawrence Berkeley National Laboratory, says power supplies are a prime example of the server market's lack of focus on total cost of ownership: the inefficient units that ship with many servers today waste more energy than any other component in the datacentre. Koomey led an industry effort to develop a server energy management protocol.

Progress in improving designs has been slow. "Power-supply efficiencies have increased at about one half percent a year," says Intel's Wigle. Newer designs are much more efficient, but in the volume server market, these are not universally implemented because they are more expensive.

With the less efficient power supplies found in many commodity servers, efficiency peaks at 70% to 75% when servers are at 100% utilisation but drops down to around 65% efficiency at 20% utilisation - and the average server load ranges from 10% to 15% utilisation.

This means that an inefficient power supply can waste nearly half the power before it even gets to the IT equipment. The problem is compounded by the fact that every watt of energy wasted by the power supply requires another watt of cooling system power just to remove the resulting waste heat from the datacentre.
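To see how quickly those losses compound, the short calculation below assumes a 200W IT load, a commodity supply running at 65% efficiency, and one watt of cooling power per watt of waste heat, as described above. The load figure and the 90% comparison point are assumptions chosen for illustration.

```python
# How power supply losses compound with cooling: illustrative numbers only.

def wall_power(it_load_w, psu_efficiency, cooling_watt_per_waste_watt=1.0):
    """Total facility power needed to deliver it_load_w to the IT equipment."""
    supply_input = it_load_w / psu_efficiency
    waste_heat = supply_input - it_load_w
    cooling = waste_heat * cooling_watt_per_waste_watt  # watt-for-watt cooling rule
    return supply_input + cooling

IT_LOAD = 200  # watts delivered to the server electronics (assumed)
low  = wall_power(IT_LOAD, 0.65)   # commodity supply at light load
high = wall_power(IT_LOAD, 0.90)   # high-efficiency supply
print(f"65% efficient supply: {low:.0f} W at the wall for {IT_LOAD} W of IT load")
print(f"90% efficient supply: {high:.0f} W at the wall")
print(f"Saved per server:     {low - high:.0f} W")
```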

Power supplies are available today that attain 80% or higher efficiency – even at 20% load - but such supplies cost significantly more. High-efficiency power supplies can carry a 15% to 20% premium, says Lakshmi Mandyam, director of marketing at one US utility.

Still, moving to these more energy-efficient power supplies reduces both operating costs and capital costs. Every £20 spent on energy-efficient power supplies creates five times as much in savings on the capital cost of cooling and infrastructure equipment, says the Rocky Mountain Institute’s Lovins. Any power supply that does not deliver 80% efficiency across a range of low load levels should be considered unacceptable, he says.

To make matters worse, server manufacturers have traditionally over-specified power needs, opting for a 600W power supply for a server that really needs only 300W, Sun's Hetherington says. "If you're designing a server, you don't want to be close to threatening peak power levels. So you find your comfort level above that to specify the supply," he says.

"At that level, it may only be consuming 300W, but you have a 650W power supply taxed at half output, and it's at its most inefficient operating point. The loss of conversion is huge. That's one of the biggest sinners in terms of energy waste," he says.

All of the major server vendors say they already offer or are phasing in more efficient power supplies in their server offerings.

HP is in the process of standardising on a single power supply design for its servers. Paul Perez, vice president of storage, network and infrastructure at the hardware giant, spoke at a recent Uptime Institute conference. "Power supplies will ship this summer with much higher efficiency," he said, adding that HP is trying to increase efficiency percentages into the "mid-90s". HP's Belady says all the company's servers use power supplies that are at least 85% efficient.

Smart power management can also increase power supply utilisation levels. For example, HP's PowerSaver technology turns off some of the six power supplies in a C-class blade server enclosure when total load drops, saving energy and increasing efficiency.
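The staging idea behind a feature like PowerSaver is simple to sketch: keep only as many supplies active as the enclosure load requires, plus a redundant unit, so each active supply runs closer to its efficient operating range. The capacities and redundancy rule below are assumptions for illustration, not HP's actual firmware logic.

```python
# Sketch of supply staging: run only as many power supplies as the load needs
# (plus a redundant unit) so each active supply stays in its efficient range.
# Capacities and the redundancy rule are illustrative, not HP's actual logic.
import math

def supplies_to_run(load_w, supply_capacity_w=1200, installed=6, redundant=1):
    needed = math.ceil(load_w / supply_capacity_w) + redundant
    return min(max(needed, 1 + redundant), installed)

for load in (800, 2500, 5500):
    n = supplies_to_run(load)
    print(f"{load} W enclosure load -> keep {n} of 6 supplies on "
          f"({load / n:.0f} W each)")
```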

One resource IT managers can use when determining power supply efficiency is the 80Plus.org certification programme. The scheme, initiated by electricity companies, lists power supplies that attain at least 80% efficiency at 20%, 50% and 100% of rated load.

Stanford University's Koomey says search engine giant Google has taken an innovative approach to improving power supply efficiency at its server farms. Part of the expense of power supply designs lies in the need for multiple outputs at different DC voltages. "In doing their custom motherboards... they went to the power supply people and said, 'We don't need all of those DC outputs. We just need 12 volts.'" By specifying a single 12-volt output, Google saved money and got a higher-efficiency power supply. "That is the kind of thinking that's needed," Koomey says.

Break down internal business barriers

Most IT departments are not held accountable for energy efficiency, even though performance and uptime are carefully tracked, because of the separation between IT functions and facilities management. The IT department generates the load, while the facilities management department usually gets the power bill, says the Uptime Institute's Brill.

Breaking down the internal barriers within enterprises is critical to meeting the challenge - and providing a financial incentive for change. Better communication between IT and facilities managers is also essential as cooling moves from simple room-level air conditioning to targeted cooling systems that move heat exchangers up to – or even inside – the server rack.

The line between facilities and IT responsibilities in the datacentre is blurring. "The solutions won't happen without coordination by people who hardly talk to each other because they're in different offices or different tribes," says Rocky Mountain's Lovins.

This narrow view has also afflicted IT equipment vendors, says Lovins. Engineers are now specialised, often designing components in a vacuum without looking at the overall system – the datacentre. What used to be a holistic design process that optimised an entire system was "sliced into pieces", he says.

Follow the standards

Several initiatives are under way that may help users identify and buy the most energy-efficient IT equipment. These include the 80Plus programme and a planned Energy Star certification programme for servers. Government authorities are also looking at ways to promote the use of energy-efficient servers.

The not-for-profit Standard Performance Evaluation Corporation (SPEC) is also working on a performance per watt benchmark for servers to help provide a baseline for energy efficiency comparisons. The specification is slated for release this year. When completed, the standard will be useful for making comparisons across platforms, says Klaus-Dieter Lange, chair of the SPEC power and performance committee.
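The benchmark was still in development at the time of writing, so the snippet below only illustrates the general idea of a performance-per-watt metric: measure throughput and power at several load levels, including active idle, and divide total work by total power. The figures and the aggregation rule are made up for illustration and are not SPEC's published formula.

```python
# Illustrative performance-per-watt calculation: total measured throughput
# divided by total measured power across several load levels. The figures
# and the aggregation rule are assumptions, not SPEC's published formula.

measurements = [            # (target load, operations/sec, watts) - made up
    (1.00, 95_000, 320),
    (0.80, 76_000, 290),
    (0.50, 48_000, 245),
    (0.20, 19_000, 210),
    (0.00,      0, 180),    # active idle still draws power
]

total_ops = sum(ops for _, ops, _ in measurements)
total_watts = sum(watts for _, _, watts in measurements)
print(f"Overall performance per watt: {total_ops / total_watts:.0f} ops/sec per watt")
```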

Be an advocate for change

IT equipment manufacturers will not design for energy efficiency unless users demand it. Joseph Hedgecock, senior vice president and head of platform and datacentres at investment bank Lehman Brothers, says his company is lobbying vendors for more efficient server designs. "We're trying to push for more efficient power supplies, and ultimately systems themselves," he says.

The Vanguard Group's Yale says his company is involved with the Green Grid, an industry consortium focused on improving energy efficiency in datacentres.