Sal Azzaro, director of facilities for Time Warner Cable, is trying to cram additional power into prime real estate at the company's 22 facilities in New York.

"Its gone wild," says Azzaro. "Where we had 20-amp circuits before, we now have 60-amp circuits." And, he says, "there is a much greater need now for a higher level of redundancy and a higher level of fail-safe than ever before."

If Time Warner Cable's network loses power, not only do televisions go black, but businesses can't operate and customers can't communicate over the company's voice-over-IP and broadband connections.

When it comes to the power crunch, Time Warner Cable is in good company. In February, Jonathan Koomey, a staff scientist at Lawrence Berkeley National Laboratory and a consulting professor at Stanford University, published a study showing that in 2005, organisations worldwide spent $7.2bn (£3.56bn) to provide their servers and associated cooling and auxiliary equipment with 120 billion kilowatt-hours of electricity. This was double the power used in 2001.
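Those two figures make for a quick sanity check. A back-of-the-envelope sketch in Python (the per-kilowatt-hour rate and growth rate below are derived from the article's numbers, not taken from the study itself):

```python
# Back-of-the-envelope check on the Koomey figures quoted above.
# The implied electricity rate and annual growth rate are derived here,
# not stated in the study itself.
spend_usd = 7.2e9      # 2005 worldwide spend on server power and cooling
energy_kwh = 120e9     # 2005 consumption: 120 billion kilowatt-hours

print(f"Implied average rate: ${spend_usd / energy_kwh:.2f}/kWh")  # $0.06/kWh

# Doubling between 2001 and 2005 implies this compound annual growth:
years = 2005 - 2001
cagr = 2 ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # ~18.9% per year
```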

According to Koomey, the growth is occurring among volume servers (those that cost less than $25,000 per unit), with the aggregate power consumption of midrange ($25,000 to $500,000 per unit) and high-end (more than $500,000) servers remaining relatively constant.

One way Time Warner Cable is working on this problem is by installing more modular power gear that scales as its needs grow. Oversized power supplies, power distribution units (PDUs) and uninterruptible power supplies (UPSes) tie up capital funds, are inefficient and generate excess heat. Time Warner Cable has started using Liebert's new NX modular UPS system, which scales in 20-kilowatt increments, to replace some of its older units.

"The question was how to go forward and rebuild your infrastructures when you have a limited amount of space," Azzaro says.

With the NX units, instead of setting up two large UPSes, he set up five modules - three live and the other two on hot standby. That way, any two of the five modules could fail or be shut down for service and the system would still operate at 100 percent load.
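The arithmetic behind that layout is simple enough to sketch. A minimal illustration, assuming a hypothetical 55-kilowatt load (the 20-kilowatt module rating comes from the NX figure above; Azzaro's actual load is not given):

```python
# Sketch of the N+2 modular UPS arrangement described above.
# The 20 kW module rating comes from the Liebert NX figure in the text;
# the 55 kW load is a hypothetical assumption for illustration.
MODULE_KW = 20
TOTAL_MODULES = 5        # three live, two on hot standby
load_kw = 55             # assumed load; must fit on three modules (60 kW)

def carries_full_load(modules_out: int) -> bool:
    """True if the remaining modules can still supply the whole load."""
    return (TOTAL_MODULES - modules_out) * MODULE_KW >= load_kw

for out in range(4):
    print(f"{out} module(s) failed or in service: "
          f"load covered = {carries_full_load(out)}")
# Up to two of the five modules can fail or be shut down for service
# and the remaining three still carry 100 percent of the load.
```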

Other approaches

Some users are trying innovative approaches. One is a technique called combined heat and power (CHP), or co-generation, which pairs a generator with a specialised chiller that turns the generator's waste heat into a source of chilled water.

Another new approach is to build datacentres that run on DC rather than AC power. In a typical datacentre, the UPSes convert the incoming AC utility power to DC and then back to AC; the server power supplies then convert it to DC once more for use inside the server.

Each time the electricity is converted between AC and DC, some of it is lost as heat. Converting the AC power to DC just once, as it enters the datacentre, eliminates that waste. Rackable Systems has a rack-mounted power supply that converts 220-volt AC to -48-volt DC in the cabinet, then distributes it via a bus bar to the servers.

On a larger scale, last summer the Lawrence Berkeley lab set up an experimental datacentre, hosted by Sun Microsystems, that converted incoming 480-volt AC power to 380-volt DC for distribution to the racks, eliminating PDUs altogether. Overall, the test system used 10% to 20% less power than a comparable AC datacentre.
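A simple cascaded-efficiency model shows where savings of that size come from. The per-stage efficiencies below are illustrative assumptions, not measurements from the Berkeley test:

```python
# Illustrative model of cascaded conversion losses. The per-stage
# efficiencies are assumed values, not measured LBNL figures.
from math import prod

# Conventional path: UPS rectifier -> UPS inverter -> PDU -> server PSU
ac_path = prod([0.96, 0.94, 0.98, 0.90])

# DC path: one facility-level rectifier -> server DC input stage
dc_path = prod([0.96, 0.92])

saved = 1 - ac_path / dc_path
print(f"AC path efficiency: {ac_path:.1%}")              # ~79.6%
print(f"DC path efficiency: {dc_path:.1%}")              # ~88.3%
print(f"Power saved for the same IT load: {saved:.1%}")  # ~10%
```

With those assumed stage efficiencies the model lands at the low end of the 10% to 20% range the test measured; less optimistic AC-side figures push it higher.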

For Rick Simpson, president of Belize Communication and Security, power management means using wind and solar energy.

Simpson's company supports wireless data and communications relays in the Central American wilderness for customers including the UK Ministry of Defence and the US embassy in Belize. He builds in enough battery power - 10,000 amp-hours - to run for two weeks before even firing up the generators at the admittedly small facility.
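That capacity is easy to sanity-check, although amp-hours only translate into energy at a known bus voltage. The 48-volt figure below is an assumption based on standard telecoms DC plants, not something the article specifies:

```python
# Rough sanity check on the two-week battery runtime. The 48 V bus
# voltage is an assumed telecoms-standard figure; the article gives
# only the 10,000 amp-hour rating. Depth-of-discharge limits ignored.
CAPACITY_AH = 10_000
BUS_VOLTAGE_V = 48          # assumption: standard -48 V DC telecoms plant
RUNTIME_H = 14 * 24         # two weeks

energy_kwh = CAPACITY_AH * BUS_VOLTAGE_V / 1000
avg_load_kw = energy_kwh / RUNTIME_H
print(f"Stored energy: {energy_kwh:.0f} kWh")              # 480 kWh
print(f"Supportable average load: {avg_load_kw:.2f} kW")   # ~1.43 kW
```

That works out to a sustained load of under a kilowatt and a half, consistent with Simpson's description of an admittedly small facility.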

"We have enough power redundancy at hand to make sure that nothing goes down, ever," Simpson says. So even though the country was hit by category-4 hurricanes in 2000 and 2001, "we haven't been down in 15 years," he says.

All of Belize Communication and Security's equipment runs directly off the batteries and off UPSes from Falcon Electric; the electric utility's power is used only to charge the batteries.

Scaling up

While there is a lot of talk lately about building green datacentres, and many hardware vendors are touting the efficiency of their products, the primary concern remains simply securing a reliable supply of adequate power.

Even though each core on a multi-core processor uses less power than it would if it were on its own motherboard, a rack filled with quad-core blades still consumes more power than a rack of single-core blades, according to Intel.

"It used to be - you would have one power cord coming into the cabinet, then there were dual power cords," says Bob Sullivan, senior consultant at The Uptime Institute. "Now with over 10 kilowatts being dissipated in a cabinet, it is not unusual to have four power cords, two A's and two B's."

With electricity consumption rising, datacentres are running out of power before they run out of raised floor space. A Gartner survey last year showed that half of datacentres will not have sufficient power for expansion by 2008.

"Power is becoming more of a concern," says Dan Agronow, chief technology officer at The Weather Channel Interactive. "We could put way more servers physically in a cabinet than we have power for those servers."

The real cost, however, is not just in the power being used but in the infrastructure equipment - generators, UPSes, PDUs, cabling and cooling systems. For the highest level of redundancy and reliability - a Tier 4 datacentre - the Uptime Institute says some $22,000 (£10,900) is spent on power and cooling infrastructure for every kilowatt used for processing.
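Put against a typical electricity bill, that infrastructure figure is striking. A quick comparison, assuming a hypothetical 1-megawatt IT load and the roughly $0.06-per-kilowatt-hour rate implied by the Koomey figures earlier:

```python
# Illustrative comparison of Tier 4 infrastructure cost versus the
# annual electricity bill. The 1 MW load is hypothetical; the $0.06/kWh
# rate is the average implied by the Koomey figures quoted earlier.
INFRA_COST_PER_KW = 22_000      # Uptime Institute figure for Tier 4
it_load_kw = 1_000              # hypothetical 1 MW of processing load

infra_cost = INFRA_COST_PER_KW * it_load_kw
annual_bill = it_load_kw * 8_760 * 0.06   # 8,760 hours in a year

print(f"Power and cooling infrastructure: ${infra_cost:,}")      # $22,000,000
print(f"Annual electricity at full load:  ${annual_bill:,.0f}")  # ~$525,600
```

On those assumptions, the up-front infrastructure outlay is roughly 40 times the annual power bill it supports.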

This is the first half of a two-part article. Now read:

Put your datacentre on an energy diet (part 2)