What can IT managers do to make datacentres more power-efficient? The Green Grid has come up with a set of guidelines to improve things.

Typical problem areas

1. Transformers and power distribution units operating below capacity.

2. Air conditioners having to pump air over a long distance.

3. Cooling pumps operating at low efficiency levels.

4. Redundant designs under-utilising components.

5. Over-sized uninterruptible power supplies (UPS) operating inefficiently.

6. UPS units loaded too lightly to operate efficiently (see the sketch after this list).

7. Blockages in the under-floor area interrupting air-flows.
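Problems 5 and 6 are two sides of the same coin, and a little arithmetic shows why. Below is a minimal sketch of how UPS efficiency falls away at light load; the loss figures are illustrative assumptions, not vendor data.

```python
# Illustrative UPS efficiency model: a UPS has a fixed no-load loss
# plus a loss roughly proportional to the load it carries.
# All figures below are assumed round numbers for illustration.

FIXED_LOSS_KW = 4.0        # assumed no-load loss of a 100 kW UPS
PROPORTIONAL_LOSS = 0.04   # assumed 4% of carried load lost in conversion
CAPACITY_KW = 100.0

def ups_efficiency(load_kw: float) -> float:
    """Fraction of input power actually delivered to the IT load."""
    losses = FIXED_LOSS_KW + PROPORTIONAL_LOSS * load_kw
    return load_kw / (load_kw + losses)

for load in (10, 25, 50, 75, 90):
    print(f"{load:>3} kW load ({load / CAPACITY_KW:.0%} of capacity): "
          f"{ups_efficiency(load):.1%} efficient")
```

The fixed loss dominates at light load, which is why an over-sized or lightly loaded UPS wastes a disproportionate share of the power passing through it.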

Here is a set of guidelines which will help overcome these problems.

Guidelines

The idea is to cool hot areas effectively rather than waste capacity cooling areas that are already cool. Right-sizing UPS units and other equipment ensures they operate efficiently.

a) Use a cold-aisle/hot-aisle layout to promote effective flow and separation of cold and warm air streams, and site air-conditioning equipment appropriately.

b) Select power-economising modes of server operation (a minimal Linux example follows this list).

c) Model the datacentre's air flow using computational fluid dynamics (CFD) software and try different floor tile vent locations and computer room air conditioning (CRAC) unit sites; a toy illustration of the idea follows this list. Such optimisation of cooling can help save up to 25 percent of datacentre energy costs (Christopher Malone, PhD, and Christian Belady, P.E., HP, Metrics to Characterize Data Center & IT Equipment Energy Use, Digital Power Forum, Richardson, TX, September 2006).

d) Power and cooling systems have fixed power losses whatever the load; the better utilised they are, the proportionally smaller those fixed losses become (the UPS arithmetic above makes the same point). Match capacity here to the power and cooling needs of the datacentre's IT equipment, with a margin for growth.

e) Upgrade to much more efficient modern UPS units, which can use a third or less of the electricity consumed by older UPS boxes. (Talk to suppliers such as APC.)

f) Direct cooling to datacentre hot spots - this is termed closely-coupled cooling - and don't cool already cool areas. Shorten cooling air paths where you can to reduce air-pumping needs (a back-of-envelope estimate follows this list). Move to a datacentre design where the server and storage rack airflows match the room airflow, so you're not wasting energy pumping air in opposing directions.

g) Use server and storage virtualisation to reduce the number of physical devices to be powered and cooled; the consolidation arithmetic after this list shows the scale of the saving. This also reduces space taken up in the datacentre.

h) Use more energy-efficient lighting, with lights switched on by motion sensors or timers. This saves on direct lighting costs and also on cooling costs, since lights generate heat of their own.

i) Improve airflow inside racks and along aisles by fitting blanking plates to close off empty rack spaces.

j) When delivering cooling water directly to racks, use professional engineering resources to minimise risk to electrical systems, insulate pipework, and so on.

k) Consolidating servers using multi-cored chips can reduce the number of individual servers requiring power and cooling. Purchasing more power-efficient chips with, for example, power stepping to reduce power draw when the computing load goes down, also reduces power and cooling needs.

l) Use air-conditioning economiser mode if possible.

m) Check that individual air-conditioning units are co-ordinated and not working in opposition.
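Guideline (b) in practice: on Linux, the kernel's cpufreq governors are one such power-economising mode. A minimal sketch, assuming a Linux host with cpufreq support (the sysfs path and the "powersave" governor are standard kernel features):

```python
from pathlib import Path

# List the CPU frequency governor each core is currently using, read
# from the standard Linux cpufreq sysfs files. Writing "powersave"
# (root only) tells the kernel to favour economy over peak clock speed.
for gov in sorted(Path("/sys/devices/system/cpu").glob(
        "cpu[0-9]*/cpufreq/scaling_governor")):
    print(gov.parent.parent.name, "->", gov.read_text().strip())
    # gov.write_text("powersave")   # uncomment to apply (needs root)
```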
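For guideline (c), real CFD modelling needs dedicated software, but a toy heat-diffusion model gives the flavour of the experiment: site the cold-air source in different places and compare the temperature at the hot spot. All figures here (grid size, temperatures, positions) are illustrative assumptions:

```python
import numpy as np

# Toy steady-state temperature model of a machine-room floor grid.
# This is NOT real CFD (no airflow, only heat diffusion), but it shows
# the kind of "move the vent, re-run, compare" experiment that CFD
# packages let you do properly.

def steady_temperature(vent, rack, size=20, iters=4000):
    t = np.full((size, size), 25.0)   # start at an assumed 25 C ambient
    for _ in range(iters):
        # Jacobi relaxation of the heat equation on the interior cells.
        t[1:-1, 1:-1] = 0.25 * (t[2:, 1:-1] + t[:-2, 1:-1] +
                                t[1:-1, 2:] + t[1:-1, :-2])
        t[rack] = 60.0   # hot rack held at an assumed 60 C
        t[vent] = 15.0   # floor vent supplying assumed 15 C air
    return t

rack = (10, 15)
far = steady_temperature(vent=(10, 2), rack=rack)    # vent across the room
near = steady_temperature(vent=(10, 12), rack=rack)  # vent by the rack
print("vent far :", round(far[10, 14], 1), "C beside the rack")
print("vent near:", round(near[10, 14], 1), "C beside the rack")
```

A real CFD package adds airflow, pressure and rack-level detail, but the workflow of changing a vent or CRAC position, re-solving and comparing is the same.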
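Guideline (f)'s advice to shorten air paths rests on simple duct physics: for a given airflow, pressure drop grows roughly with path length, and fan power is flow times pressure drop. A back-of-envelope sketch with assumed constants:

```python
# Rough fan-power estimate: power = volumetric flow * pressure drop.
# For a fixed flow rate, duct pressure drop scales roughly with the
# length of the air path, so shorter paths mean cheaper air movement.

FLOW_M3_S = 10.0        # assumed airflow the CRAC units must move
DROP_PA_PER_M = 1.5     # assumed pressure drop per metre of path
FAN_EFFICIENCY = 0.6    # assumed overall fan/motor efficiency

def fan_power_kw(path_length_m: float) -> float:
    pressure_pa = DROP_PA_PER_M * path_length_m
    return FLOW_M3_S * pressure_pa / FAN_EFFICIENCY / 1000.0

for length in (40, 20, 10):
    print(f"{length:>2} m air path: {fan_power_kw(length):.2f} kW of fan power")
```

Halve the path and, other things being equal, you roughly halve the pumping power, which is the saving closely-coupled cooling chases.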
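The saving behind guidelines (g) and (k) is easy to put rough numbers on. A hypothetical consolidation, with assumed power figures and electricity price:

```python
# Hypothetical consolidation: 20 lightly used servers onto 4 hosts.
# All power figures and prices are assumed round numbers.

BEFORE = {"servers": 20, "avg_watts": 300}
AFTER = {"servers": 4, "avg_watts": 450}   # bigger hosts, higher load

COOLING_OVERHEAD = 0.7   # assumed: 0.7 W of cooling per W of IT power
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.12     # assumed electricity price

def annual_cost(cfg):
    it_watts = cfg["servers"] * cfg["avg_watts"]
    total_watts = it_watts * (1 + COOLING_OVERHEAD)   # IT plus cooling
    return total_watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

saving = annual_cost(BEFORE) - annual_cost(AFTER)
print(f"annual saving: {saving:,.0f} per year at {PRICE_PER_KWH}/kWh")
```

Note how the cooling overhead multiplies every watt saved at the server. The same multiplier is why guideline (h) pays twice: a watt of lighting switched off is also a watt of heat the air conditioning no longer has to remove.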

Datacentre cooling process

In summary, the idea is to treat the datacentre's power and cooling needs holistically and to think about total cost of ownership with power and cooling included. Cooling should be directed closely at the hot spots inside racks, and vented warm air should be taken out of the datacentre in a co-ordinated way to be cooled.
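A single ratio captures that holistic view. The Green Grid's headline metric, Power Usage Effectiveness (PUE), divides total facility power by the power reaching the IT equipment; the meter readings below are assumed for illustration:

```python
# Power Usage Effectiveness (PUE): total facility power / IT power.
# The closer to 1.0, the less power goes on cooling, UPS losses,
# lighting and other overheads. Meter readings here are assumed.

it_load_kw = 500.0          # assumed power drawn by servers/storage/network
total_facility_kw = 1100.0  # assumed total at the utility meter

pue = total_facility_kw / it_load_kw
overhead_kw = total_facility_kw - it_load_kw
annual_overhead_kwh = overhead_kw * 8760

print(f"PUE: {pue:.2f}")
print(f"overhead: {overhead_kw:.0f} kW, "
      f"{annual_overhead_kwh:,.0f} kWh a year before TCO even counts hardware")
```

Every guideline above works to push that ratio down towards 1.0.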