Intel, Yahoo and Google share innovations for data centre managers

Data centre decisions are never easy, no matter what the size of your company. When it comes to making the most of your facility, why not follow the lead of the big players?

We talked to executives at some of the tech industry's largest companies to find out how they are innovating in brand new data centres, including one that Google built in Belgium and Cisco's new state-of-the-art facility in Texas. Intel and Yahoo also weighed in with their best practices.

Google: All about efficiency

Google operates "dozens" of data centres all over the world. The firm's primary focus is on making its data centres more efficient than industry averages, says Bill Weihl, green energy czar at Google. According to EPA estimates, many data centres run at a PUE (power usage effectiveness) of around 2.0, meaning they draw twice as much power overall as their IT equipment actually consumes. PUE is the total energy consumed by a data centre facility divided by the energy consumed by its IT equipment.
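
As a rough illustration of the metric (the figures below are hypothetical examples, not Google's operating data), PUE is simply a facility's total energy draw divided by the energy that reaches the IT equipment:

    def pue(total_facility_kwh, it_equipment_kwh):
        # Power usage effectiveness: total facility energy over IT equipment energy.
        return total_facility_kwh / it_equipment_kwh

    # A facility drawing 2,000 kWh in total to deliver 1,000 kWh to its IT gear has
    # a PUE of 2.0: as much energy goes to overhead as to computing.
    print(pue(2000, 1000))   # 2.0
    print(pue(1180, 1000))   # 1.18, roughly the fleet-wide figure Google reports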

Google, for its part, runs at around 1.18 PUE across all its data centres, Weihl says. One of the ways Google has become more efficient is by using so-called "free cooling" for its servers.

"We manage airflow in our facilities to avoid any mixing of hot and cold air. We have reduced the overall costs of cooling a typical data centre by 85%," Weihl says, adding that the reduction comes from a combination of new cooling techniques and power backup methods described below. The average cold aisle temperature in Google's data centers is 80 degrees instead of the typical 70 or below. Hot aisle temperature varies based on the equipment used. Google would not elaborate on specifics about hot aisle temperatures or name specific equipment.

Further, Google uses evaporative cooling towers in every data centre, including its new facility in Belgium, according to Weihl. The towers pump hot water to the top of the structure, where it passes through a material that speeds evaporation. While evaporative cooling is doing the work, the chillers that would otherwise cool the data centre are not needed, or are used far less often.

"We have data centres all around the world, in Oregon where the climate is cool and dry and in the southwestern and midwest part of the US. Climates are all different, some are warmer and wetter, but we rely on evaporative cooling almost all of the time," he says.

Weihl says the facility in Belgium, which opened in early 2010, does not even have backup chillers, relying instead on evaporative cooling alone. It would take a "100 year event", with weather too hot for evaporative cooling to keep up, before the missing chillers would matter, he says, so Google chose to forgo them and reduce the facility's electrical load. The centre runs at maximum load most of the time; on the infrequent hot days, administrators idle or switch off a few servers.

[Image: Google's cooling towers]

He advises companies to look seriously at "free cooling" technologies such as the evaporative cooling towers described above. Another option is to use towers to redirect outside air to servers, then allow the server temperatures to rise within acceptable ranges and use less direct cooling on the racks.

In terms of power management, Google uses transformers to step down incoming utility voltage before it is converted to the DC voltages its servers use. Google also relies on local backup power, essentially a battery on each server, instead of a traditional UPS, largely to avoid the extra AC-to-DC conversion losses a central UPS introduces.

Google uses a transformer to step down energy from utility power lines before the power is sent to servers. Traditionally, each server's individual power supply has converted the voltage from AC to DC, but that tack has proven inefficient, industry experts agree.

"Google, Facebook and many others have begun reducing the number of AC/DC conversions from when power hits the building to when it's delivered to the servers," says David Cappuccio, an analyst at Gartner. This can take the form of DC-based power distribution systems that move the conversion away from individual servers and up to the top of each rack. Typically this shaves a few percentage points off energy use, he explains.
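
The arithmetic behind that saving is straightforward: delivery efficiency is the product of every conversion stage in the chain, so removing a stage compounds through to the servers. A minimal sketch, with stage efficiencies that are assumptions for illustration rather than figures from any of the companies named:

    # End-to-end delivery efficiency is the product of each conversion stage's
    # efficiency, so dropping a stage compounds through the whole chain.
    # The stage efficiencies below are illustrative assumptions only.
    def chain_efficiency(stages):
        eff = 1.0
        for stage in stages:
            eff *= stage
        return eff

    traditional = [0.96, 0.98, 0.92]   # UPS double conversion, distribution, server PSU
    consolidated = [0.98, 0.92]        # conversion moved out of the UPS path

    print(f"traditional:  {chain_efficiency(traditional):.1%}")   # ~86.6%
    print(f"consolidated: {chain_efficiency(consolidated):.1%}")  # ~90.2%, a few points better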

Google also uses server power supplies and voltage regulators that are 93% efficient, Weihl says; building regulators more efficient than that would be prohibitively expensive.

"We use a single output power supply for a 12-volt rail that draws virtually no power when it is charged. The backup draw is less than 1% as opposed to a typical draw of 15% or more," says Weihl, citing the EPA estimates on typical data centre energy draw.

Another interesting Google technology involves custom software tools for managing data sets. Weihl says much of the management is automated with tools that help pinpoint why a server is drawing too much power or how it may be misconfigured. The company uses its proprietary Bigtable system, which stores tabular data sets and lets IT managers retrieve detailed information about server performance.
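
Google would not detail how those tools work, but the general pattern Weihl describes, collecting per-server metrics in tabular form and flagging machines that look wrong, can be sketched in a few lines. The snippet below is a hypothetical illustration, not Google's Bigtable interface or its monitoring code:

    # Hypothetical sketch: keep per-server metrics in tabular form and flag
    # machines whose power draw looks wrong. Not Google's actual tooling.
    from statistics import median

    power_readings_w = {
        "server-001": 245.0,
        "server-002": 251.3,
        "server-003": 398.7,   # suspiciously high; possibly misconfigured
        "server-004": 248.9,
        "server-005": 243.2,
    }

    typical = median(power_readings_w.values())
    for host, watts in power_readings_w.items():
        if watts > 1.5 * typical:
            print(f"{host} draws {watts:.0f} W against a typical {typical:.0f} W; "
                  f"flag it for a configuration check")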

Google claims that its data centres run at an overall efficiency overhead of 19%, compared with the EPA estimate of 96% for most data centres. The overhead percentage indicates how much additional power goes to cooling, power distribution and other supporting infrastructure rather than to running the IT equipment itself.
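
That overhead figure maps directly onto PUE: overhead is simply PUE minus one, expressed as a percentage (a quick check, assuming the EPA estimate corresponds to a PUE of roughly 2.0):

    # Overhead, expressed as non-IT energy relative to IT energy, is PUE minus 1.
    def overhead_pct(pue):
        return (pue - 1.0) * 100

    print(f"{overhead_pct(1.19):.0f}%")   # 19%, the figure Google claims
    print(f"{overhead_pct(1.96):.0f}%")   # 96%, consistent with a PUE of about 2.0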

Cisco and the "downsized upgrade"

Like other organisations, Cisco has implemented the concept of a "downsized upgrade", achieved through virtualisation and consolidation. The process involves shrinking the data centre's overall footprint and compacting equipment into smaller chassis to save energy, while at the same time actually increasing the data centre's performance.

At Cisco's new centre in Texas, for instance, the company mapped out enough space for a massive cluster of computers that can scale with rapid growth. The basic concept: cram as much computing power as possible into a small space while still getting high performance.

Essentially, a cluster by Cisco's definition is a rack with five Cisco UCS (Unified Computing System) chassis, each holding eight server blades. Across the data centre as a whole, there is the potential for 14,400 blades. Each blade has two sockets, which can support eight processor cores, and each core supports multiple virtualised OS instances. To date, Cisco has installed 10 clusters, which hold 400 blades.
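
Those figures hang together with a little back-of-the-envelope arithmetic (derived from the numbers above, not from additional Cisco data):

    # Back-of-the-envelope check of the capacity figures as described.
    chassis_per_cluster = 5
    blades_per_chassis = 8
    installed_clusters = 10

    blades_per_cluster = chassis_per_cluster * blades_per_chassis   # 40
    print(installed_clusters * blades_per_cluster)                  # 400 blades installed so far

    full_buildout_blades = 14_400   # quoted potential for the whole data centre
    print(full_buildout_blades // blades_per_cluster)               # room for 360 clusters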

Another area where Cisco has improved is cable management. John Manville, Cisco's vice president of IT, says the company has saved $1 million by reducing the number of cables in its data centres.

"Most people don't realise" that cabling accounts for 10% to 15% of total costs, says Manville. "That reduction in cables also keeps the airflow moving better, and with the new cooling technology we installed, we expect to save $600,000 per year in cooling costs."

Besides this consolidation, Cisco is also working out how to reduce hardware and management costs for each operating system and each server. Manville says the cost today is around $3,700 per physical server per quarter. Through virtualisation, he expects to bring that down to $1,600 per physical server per quarter, and eventually hopes to reduce it further, to $1,200.
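
Scaled across the 400 blades installed so far, those per-server figures imply a substantial annual saving. The sketch below is illustrative arithmetic only, not Cisco's own accounting:

    # Rough annual saving implied by the per-server figures, applied to the 400
    # blades installed so far. Illustrative arithmetic only; real accounting differs.
    servers = 400
    cost_today_q = 3_700     # dollars per physical server per quarter
    target_q = 1_600
    stretch_q = 1_200

    print(f"${(cost_today_q - target_q) * servers * 4:,} a year at $1,600 per server")    # $3,360,000
    print(f"${(cost_today_q - stretch_q) * servers * 4:,} a year at $1,200 per server")   # $4,000,000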

The Texas data centre is actually two separately located facilities that operate as one, a concept called Metro Virtual Data Centers, which Cisco developed internally and does not sell publicly. The company plans to open two more MVDC facilities in the Netherlands by the end of 2012, for a total of four operating as one.

The MVDC approach is not about cost savings or energy conservation, because both data centres run the same applications at the same time. Instead, Cisco uses the technique for replication. If a natural disaster takes out one data centre, operations continue unabated in real time.

Like Google, Cisco is highly focused on efficient operations. Manville says the Texas facility goes a few steps further than most. For instance, power is distributed at 415V for a saving of about 10% compared with the lower-voltage systems typically used elsewhere. The facility also uses all-LED lighting, for roughly a 40% saving in energy use compared with incandescent lights, he says.

LED lights are expensive, at about the point where compact fluorescent bulbs were when they first appeared, says Charles King, an analyst with Pund-IT. "Over time, as costs come down, LED will become a no-brainer, so Cisco deserves kudos for pushing the envelope."
