This is the second half of a two-part article. You can find the first half here.

To cut costs and ensure there is enough power, managers need to take a close look at each individual component - and then work out how each one affects the datacentre as a whole.

Steve Yellin, vice president of product marketing strategies at Aperture Technologies, a datacentre management software firm, says that managers need to consider four separate elements that contribute to overall datacentre efficiency - the chip, the server, the rack and the datacentre as a whole. Savings at any one of these levels yield further savings at each level above it.

"The big message is that people have to get away from thinking about pieces of the system," Stanford University's Koomey says. "When you start thinking about the whole system, then spending that $20 extra on a more-efficient power supply will save you money in the aggregate."

Going modular

There are strategies for cutting power in each area Yellin outlined above. For example, multi-core processors with lower clock speeds reduce power at the processor level. And server virtualisation, better fans and high-efficiency power supplies - such as those certified by the 80 Plus programme - cut power utilisation at the server level.

Five years ago, the average power supply was operating at 60 to 70 percent efficiency, says Kent Dunn, partnerships director at PC power-management firm Verdiem and programme manager for 80 Plus. He says that each 80 Plus-certified power supply will save datacentre operators about 130 to 140 kilowatt-hours of power per year.
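A back-of-the-envelope calculation shows where a figure in that range can come from. In the sketch below, the 60 to 70 percent baseline comes from Dunn; the 90-watt average load and the 80 percent efficiency of the replacement unit are assumptions chosen to illustrate the arithmetic (80 Plus certification requires at least 80 percent efficiency), not figures from the article.

```python
# Rough sketch of annual savings from swapping a legacy power supply for an
# 80 Plus unit. Load and new-supply efficiency are illustrative assumptions.

AVG_DC_LOAD_WATTS = 90.0   # average DC load of the machine (assumed)
OLD_EFFICIENCY = 0.70      # legacy supply, per the 60-70 percent range above
NEW_EFFICIENCY = 0.80      # 80 Plus supplies are at least 80 percent efficient
HOURS_PER_YEAR = 24 * 365

old_wall_watts = AVG_DC_LOAD_WATTS / OLD_EFFICIENCY   # ~128.6 W from the wall
new_wall_watts = AVG_DC_LOAD_WATTS / NEW_EFFICIENCY   # 112.5 W from the wall

kwh_saved = (old_wall_watts - new_wall_watts) * HOURS_PER_YEAR / 1000.0
print(f"Annual saving per machine: {kwh_saved:.0f} kWh")   # ~141 kWh
```

Under those assumptions the saving works out to roughly 141 kilowatt-hours a year, in the same ballpark as Dunn's estimate.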

Rack-mounted cooling and power supplies such as Liebert's XD CoolFrame and American Power Conversion's InfraStruXure cut waste at the rack level. And at the datacentre level, there are more efficient ways of distributing air flow, using outside air or liquid cooling, and doing computational fluid dynamics modelling of the datacentre for optimum placement of servers and air ducts.

"We've deployed a strategy within our facility that has hot and cold aisles, so the cold air is where it needs to be and we are not wasting it," says Fred Duball, director of the service management organisation for the Virginia state government's IT agency, which just opened a 192,000-square-foot datacentre in July and will be ramping up the facility over the next year or so. "We are also using automation to control components and keep lights off in areas that don't need lights on."

Finding a fit

There is no single answer that meets the needs of every datacentre operator.

When Elbert Shaw, a project manager at Science Applications International, consolidated US Army IT operations at several dozen locations across Europe into four datacentres, he had to come up with a unique solution for each location. At a new facility, he was able to put in 48-inch floors and run the power and cooling underneath.

But one datacentre being renovated only had room for a 12-inch floor and two feet of space above the ceiling. So instead of bundling the cables, which could have eaten up eight of those 12 inches, blocking most of the airflow, he got permission to unbundle and flatten out the cables. In other instances he used 2-inch underfloor channels, rather than the typical 4-inch variety, and turned to overhead cabling at one location.

"Little tricks that are OK in the 48-inch floor cause problems with the 12-inch floor when you renovate a site," says Shaw.

"These facilities are unique, and each has its own little quirks," Koomey says.

Power management tips

Various experts suggest the following ways of getting all you can from your existing power setup:

  • Don't oversize. Adopt a modular power and cooling strategy that grows with demand, instead of buying a monolithic system sized for the load you expect years down the road.

  • Plan for expansion. Although you don't want to buy the extra equipment yet, install conduits that are large enough to accommodate additional cables to meet future power needs.

  • Look at each component. Power-efficient CPUs, power supplies and fans reduce the amount of electricity used by a server. But be sure to look at their impact on other components. For example, a quad-core chip uses less power than four single-core chips but may require additional memory.

  • Widen racks. Use wider racks and run the cables down the side, rather than down the back where they block the airflow. Air flows in at the front of a server, through the box and out the back; there are no inlet or outlet vents on the sides. As with a PC, you could put a piece of plywood along the side of a server without affecting airflow through the machine, but put it across the back and the server will overheat.

  • Install a UPS bypass. This is a power path that routes around the UPS rather than through it, so that when a UPS device is taken offline for maintenance, electricity still has a route to the equipment.