Welcome to the world's largest supercomputing grid

With 20 petabytes of storage and more than 280 teraflops of computing power, TeraGrid combines the processing power of supercomputers across the US


A unique US government-funded computing effort is making it easier for corporations to access the largest-scale computers on the planet. Dubbed TeraGrid, the effort spans nine academic and government institutions and has reached critical mass this year.

The notion is to combine the largest supercomputers into a global processing and storage grid to tackle the thorniest computing problems. "We want to make available high-end resources to the broadest community," says Dane Skow, director of the Grid Infrastructure Group, who coordinates TeraGrid operations from the University of Chicago's Argonne National Laboratory. "We want to leverage our top-of-the-line equipment for people who don't have the skills to do it themselves."

TeraGrid began with grants in 2000 to the Pittsburgh Supercomputing Centre. It has grown by adding other supercomputing centres around the country, and earlier this month it completed its second user conference, held at the University of Wisconsin.

Part of the TeraGrid is a simple user interface for the world's largest distributed computing environment - the ultimate graphical user interface (GUI) on steroids. "The point of TeraGrid is to pull together the capabilities and intellectual resources for problems that can't be handled at a single site," says Rob Pennington, the deputy director of the National Centre for Supercomputing Applications (NCSA). "We make it easier for researchers to use these multiple computing sites with a very small increment in training and technical help."

Big numbers

The numbers are staggering, even for IT managers who are used to big projects. The TeraGrid network currently spans more than 20 petabytes of storage - that's enough to hold a billion encyclopedias - and more than 280 teraflops of compute power.
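
Some back-of-the-envelope arithmetic (a rough sketch in Python, using only the figures quoted above) shows what those numbers imply; the per-encyclopedia size is simply what the article's comparison works out to, not a figure from TeraGrid itself:

    # Rough sanity check of the figures quoted in the article.
    PETABYTE = 10**15                       # bytes, decimal definition
    storage_bytes = 20 * PETABYTE           # "more than 20 petabytes"
    encyclopedias = 10**9                   # "a billion encyclopedias"
    per_encyclopedia_mb = storage_bytes / encyclopedias / 10**6
    print(per_encyclopedia_mb)              # 20.0 - about 20 MB per encyclopedia, implied

    TERAFLOP = 10**12                       # floating-point operations per second
    peak_flops = 280 * TERAFLOP             # "more than 280 teraflops"
    print(peak_flops * 3600)                # roughly 1e18 operations in an hour at peak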

While there are big numbers involved in describing TeraGrid, "we want to be more than just a source of computing cycles," says Pennington. TeraGrid aims to provide a common means of accessing processing power and storage at the largest scale, freeing researchers from custom programming jobs. "We are trying to make it better on the front end," says Skow.

"This isn't just about providing some time on a big machine but being able to solve all the plumbing problems so that we can have a uniform end-to-end and integrated experience for all kinds of research," says Bill Bell, the deputy director of the NCSA.

And the TeraGrid is only going to get bigger, thanks to a combination of government, military and private sources. "We are at the beginning of a very aggressive growth curve thanks to the National Science Foundation," says Skow. "The size of the problems that a researcher can attack is going to be enormous and on a scale that was never possible before. We are going to see basic science being able to leapfrog and go far beyond what people have been able to do previously."

Practical supercomputing

This isn't just a bunch of academics figuring out the next billion decimal places of pi or drawing pretty fractal pictures faster. While many of the projects certainly advance basic science research, such as studies of black holes, climate modelling and data visualisation, there is a big commercial and industrial research component, too. This means that TeraGrid can have direct benefits in a number of different commercial markets and disciplines, including understanding how ketchup works. (Maybe this science will once and for all settle whether it is a vegetable or not.)

One TeraGrid project was an effort to vastly increase the number of known structures for the special chemical catalysts called zeolites, which are used in a wide variety of industrial processes, from making laundry detergents to refining petroleum products. Until recently, the total number of natural zeolites stood at about 50. Then Michael Deem, a chemical engineering professor at Rice University, developed an application that ran on several TeraGrid supercomputers to generate millions of different chemical structures that could be used for future catalytic purposes.

Another example of a commercial application of supercomputing technology was a project done for Procter & Gamble (P&G) to improve the Pringles potato chip production line. The high speed of the manufacturing line was creating air drafts that blew chips out of the cans during the packing process. P&G was able to cut down on wasted chips by using computational fluid dynamics models developed by aircraft maker Boeing, said Melyssa Fratkin, corporate and government relations manager at the University of Texas Advanced Computing Centre (TACC). Call this another form of chip processing!
