
The 80,000-square-foot building will double the size of the SDSC's facilities; besides an additional 5,000 square feet of data-centre space, the expansion will house classrooms, offices, meeting rooms and a 250-seat auditorium.

Under development since 2003, the building has an energy-efficient displacement ventilation system that uses the natural buoyancy of warm air to provide improved ventilation and comfort; exterior shade devices, such as awnings, to control temperatures by blocking the sun; and natural ventilation (the windows in the building will open) to save on energy.

The SDSC also is carefully selecting the IT equipment that will populate the datacentre to help lower overall energy consumption and save on operational costs.

"To marry the energy efficiency of [the building] with the IT systems and understand what impact they are putting upon each machine room" will be crucial to the data-centre expansion's success, says Gerry White, director of engineering services with the design and construction office at the University of California at San Diego (UCSD), which is home to the supercomputer centre.

The decision to build an energy-efficient datacentre came down to a matter of need, says Dallas Thornton, the SDSC's IT director. Funded by the National Science Foundation and associated with the University of California network, the SDSC provides facilities for academic research on such data-intensive topics as earthquake simulations and astrophysics. It offers users more than 36 teraflops of computing resources, as well as 2 petabytes of disk and 25 petabytes of archival tape capacity on-site.

While the high-performance computers and related equipment the SDSC provides for this research are augmented by computers that individual projects supply, all of it requires a significant amount of power and cooling, Thornton says. The present datacentre has a constant load of about 2 megawatts, roughly enough power for 2,000 homes. "Being on the cutting edge of technology, we've seen a lot of [energy] load before a lot of other folks, so we've had to do something [about energy consumption] just to stay in business," he says.

An easy sell

Thanks to the centre’s intense power requirements and the mandates specified in Title 24 of the California Code of Regulations, which sets the standard for energy-efficient new-building construction, Thornton and his colleagues in engineering and facilities didn't face much opposition when they tried to persuade upper management that the data-centre expansion should be energy efficient. And the fact that the new datacentre will save significantly on operating costs made the convincing even easier.

"Saving money is huge -- it all comes back to the cost of power, so for us [saving money] with a green datacentre really sits well" with upper management, Thornton says, although the logic behind energy efficiency should be clear to any executive team. "The No. 1 cost in running a datacentre is power, so if you can create ways to reduce that footprint, it should be an easy sell."

Because much of the datacentre’s cost savings will come from the design itself, Thornton says he can't predict what kind of operational cost savings the centre will gain by buying IT equipment that consumes less power than traditional computers do. Some savings are already clear, however; the centre estimates it will save 40% on operating the new building vs. a traditional building because of its plan to co-generate power locally by using waste steam to power steam chillers that will help cool the datacentre, he says.

The SDSC has submitted a grant proposal for IT equipment that will go into the new datacentre, so Thornton doesn't know yet what that will entail. However, he predicts the centre will house much of the same type of IT equipment as the SDSC's existing 14,000-square-foot datacentre does, which includes IBM, Sun, Dell and HP servers; Sun, IBM, DataDirect Networks, Hitachi and Copan Systems disk storage; and IBM and Sun StorageTek tape drives.

The SDSC isn't waiting until the new datacentre is finished to save on energy and costs, however. As routine IT equipment upgrades occur, Thornton and his 18-person department look for basic features that can help improve energy efficiency: power-management options, multi-core chips, and high-density servers such as blades that make the most of their capacity.

"When we work with vendors, we really hold them to keep their energy consumption down," he says.

Higher use, lower cost

Thornton describes a method of achieving energy efficiency in the datacentre that, while simple, seems to run contrary to the way datacentres have been structured for years: By getting the maximum use out of existing servers, companies can avoid powering unused portions of their machines and therefore spend less money on energy. This approach -- known as high utilisation and often dependent on technologies such as virtualisation, which can turn dedicated servers into multifunction computers -- means servers spend the most time possible processing tasks and the least time sitting idle yet still powered on.

"What people aren't paying attention to is the idea of high utilisation," says Andrew Kutz, an analyst with Burton Group. Organisations -- in particular, online ones where delays in server response times can lead directly to dollars lost -- have become so obsessed with making sure their servers are functioning at top efficiency during peak traffic times that they're not using a large percentage of the servers' processing power during off times -- yet they're still powering these servers.

Let's say an application server can handle a maximum of x simultaneous requests from users, yet that level of requests is achieved only a small percentage of the time. That means the rest of the time, a portion of the server's processing power is unused but still draws energy.
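
To make that waste concrete, here is a rough back-of-the-envelope sketch in Python of the scenario above. The peak capacity, average load and power figures are illustrative assumptions, not measurements from the SDSC or Burton Group.

```python
# Hypothetical figures, chosen only to illustrate the utilisation argument.
peak_requests = 1000      # x: maximum simultaneous requests the server can handle
avg_requests = 150        # typical load outside the short peak windows
power_at_peak_w = 400.0   # assumed draw at full load (watts)
power_at_idle_w = 250.0   # assumed draw when nearly idle (watts)

utilisation = avg_requests / peak_requests
# Rough linear interpolation between idle and peak draw.
typical_draw_w = power_at_idle_w + utilisation * (power_at_peak_w - power_at_idle_w)

print(f"Average utilisation: {utilisation:.0%}")
print(f"Typical draw: {typical_draw_w:.0f} W, of which {power_at_idle_w:.0f} W "
      "goes to simply keeping the server powered on")
```

At 15% utilisation in this example, most of the electricity goes to keeping the machine on rather than doing useful work -- the waste that consolidation and virtualisation target.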

"Companies need to focus on their servers being highly utilised," Kutz adds. That can be achieved without affecting performance through such techniques as virtualisation and power management.

For example, the SDSC is planning to replace about 20 dual single-core-processor Dell servers, each consuming about 400 watts, with five dual quad-core-processor Dell servers that each consume about 300W. Thornton plans to virtualise the operating system and combine applications on these fewer servers so there is minimal impact on users.

What this consolidation means to energy efficiency -- and the bottom line -- is significant. Each of the existing 20 servers consumes 400W, with a total power draw of 8,000W; the five new servers each will consume 300W, with a total power draw of 1,500W, but will offer the same computing power.
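
The arithmetic is simple enough to verify. The sketch below reproduces the figures above in Python; the electricity price is an assumed rate for illustration only, not an SDSC figure.

```python
# Consolidation figures from the example above.
old_servers, old_watts = 20, 400   # existing dual single-core Dell servers
new_servers, new_watts = 5, 300    # replacement dual quad-core Dell servers

old_draw_w = old_servers * old_watts   # 8,000 W
new_draw_w = new_servers * new_watts   # 1,500 W
saved_w = old_draw_w - new_draw_w      # 6,500 W

# Assumed electricity price, for illustration only.
price_per_kwh = 0.10
annual_saving = saved_w / 1000 * 24 * 365 * price_per_kwh

print(f"Old draw: {old_draw_w} W, new draw: {new_draw_w} W, saved: {saved_w} W")
print(f"Roughly ${annual_saving:,.0f} a year at an assumed ${price_per_kwh}/kWh")
```

At that assumed rate, the 6,500W reduction works out to roughly $5,700 a year before any savings on cooling -- the kind of payback Thornton describes below.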

"During less than their lifetime, [the new servers] will have paid for themselves on utilities savings alone," Thornton says. "Not to mention savings of freed-up power and cooling equipment, space, and other datacentre infrastructure."

In addition, the SDSC has instituted a chargeback mechanism for users who install their own IT equipment in the centre, Thornton says, charging them more for high-consumption machines and less for energy-efficient architectures.

"We're trying to encourage the users of the datacentre to limit power use and upgrade what they have on the floor to new machines," Thornton says. "The key is aligning incentives with targeted outcomes -- in this case, energy efficiency."

Creature comforts: supercomputing temperature concerns

In 2005, the San Diego Supercomputer Center won a best-practices award for the design that will help make its data-centre expansion a model of energy efficiency. As rewarding as that distinction is, the project managers behind the building, now under construction, are equally focused on making sure it meets the needs of its occupants.

The new data-centre project, which began in 2003 and is slated to be completed next year, quickly turned into a challenge of just how energy efficient a building could be, says Gerry White, director of engineering services at the University of California at San Diego's (UCSD) design and construction office. The SDSC is located on the UCSD campus and affiliated with the university.

"We've been pushing for energy conservation on campus for 15 years now, and we're doing it as we can," White says. "The opportunity presented itself to take a good, serious look at how to configure, develop and take care of this expansion [in an energy-efficient manner], and then to do this on every new building."

The new building must also serve the expanded datacentre and the 300 occupants it will house. While it may have been tempting to come up with a building plan that is purely a design and engineering feat of energy efficiency, the project team also needed to consider the human element: besides housing supercomputers and related IT equipment, the expansion will contain offices and classrooms. That meant ambient temperature had to be considered.

"There's no way to deny it, if you don't have [air-conditioning] in the building, you will save energy," says Craig Johnson, senior mechanical engineer, also in the UCSD's design and construction office. Even in San Diego's mild climate, just opening and closing the windows doesn't always work, he says.

"A lot of people think because we're in San Diego, you can open the windows and everything will be fine. Well, eight or nine months of the year that does work fine," but the heat and humidity during the summer and early fall require the building to be cooled, Johnson says. The team decided there would be cooling in the warm months and heat in the cool months, so that the occupants would have a "comfort band" that would adjust as the seasons change.

Combined with other architectural elements -- such as exterior shading devices that limit the sun's effect on the building, and a concrete-mass exterior that helps cool the building naturally -- the design means the amount of air that needs to be moved through the building is less than what a typical office building requires, Johnson says.

Based on computer models, the project leaders expect the new datacentre to be 43% more energy efficient than the state guidelines for new building construction. The team is taking steps to make sure the building not only meets its projections but also is comfortable, White says.

"We're installing 50 or 60 meters in the building to monitor how it performs, and if it's comfortable for the occupants," he says. "We have every intention of using the data from all of those meters to compare the actual building back to what the model said it would be."

Sidebar: Eight ways to green your existing datacentre

Companies don't need to build a whole new datacentre to begin saving on energy. Below are some steps recommended by Burton Group analyst Andrew Kutz that enterprises can take in their existing datacentres to save on power consumption:

1. Cut the number of physical servers through high-density options, such as blade servers, and through virtualisation.

2. Reduce storage hardware by using SANs or NAS devices that consolidate storage space. Consolidating physical units greatly reduces the amount of power the datacentre consumes and can also mean lower acquisition costs.

3. Look for energy-efficient hardware such as multicore CPUs that reduce redundant and external electronics and therefore save on energy.

4. Check out CPU performance-stepping technology, which dynamically adjusts the energy processors require in relation to their load.

5. Use dynamic control of a server's internal fans to reduce the energy needed when the air in the datacentre is cooler.

6. Consider liquid cooling of server racks to limit the amount of energy needed to remove heat from the datacentre.

7. Follow the hot aisle/cold aisle layout for arranging equipment in the datacentre. Although this technique dates to the mid-1990s, "it's extremely effective," Kutz says.

The design lets cool air flow through the cold aisles to the servers' front air intakes and lets hot air flow from the backs of the servers to the air-conditioning return ducts, so less energy is needed for cooling.

8. Look for software that is multithreaded to take advantage of multicore-processor machines; a minimal sketch follows this list. "Today you can buy a new server out of the box that is multicore, but the software's not written for it, so you can't take advantage," Kutz says. "This falls in the lap of the software designers; they need to make sure their software is multithreaded to take advantage of multiprocessor machines."
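
As a minimal illustration of item 8, the sketch below splits a CPU-bound job across all available cores. It uses Python, where the global interpreter lock means worker processes rather than threads are the idiomatic route to multicore parallelism, but the principle is the one Kutz describes: serial code leaves most of a multicore server's capacity idle.

```python
# Split a deliberately CPU-bound task (counting primes) across all cores.
from concurrent.futures import ProcessPoolExecutor
import os

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limit = 200_000
    cores = os.cpu_count() or 1
    step = limit // cores
    # One chunk of the range per core; the last chunk absorbs any remainder.
    chunks = [(i * step, limit if i == cores - 1 else (i + 1) * step)
              for i in range(cores)]
    with ProcessPoolExecutor(max_workers=cores) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(f"{total} primes below {limit:,} using {cores} cores")
```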