Discussions about supercomputer performance almost always centre on processing speed - how many gazillion operations per second can be performed by the giant machines. Makers and users of supercomputers also like to brag about the number of processors, the amount of memory and the bandwidth available for moving data about.

Such metrics are important determinants of how much work the machines can do. Questions of storage are less often a focus – but are becoming critically important. How much disk capacity do the computers have? How fast can data be written to and read from storage? How easily and quickly can an application be restarted when a disk fails? How can file systems be scaled up to efficiently handle petabytes of information? How on earth can you find anything when your system has 30,000 disks?

Those questions and others will be the focus of the Petascale Data Storage Institute (PDSI), recently founded by computer scientists at the US Department of Energy and three universities with a five-year $11m (£5.5m) grant from the US government.

"The overall goal is to make storage more efficient, reliable, secure and easier to manage in systems with tens or hundreds of petabytes of data spread across tens of thousands of disk drives, possibly used by tens of thousands of clients," says Ethan Miller, a computer science professor at the University of California.

That system may not much resemble the one used by company accounting departments, but the computer scientists at the PDSI say - and the vendor sponsors are hoping - that new technologies from petascale storage research will trickle down to commercial users.

"The use of high-performance computer clusters in many commercial applications, [such as] oil and gas, semiconductors and biotechnology, is growing substantially," says Garth Gibson, a principal investigator for the PDSI and a professor at Carnegie Mellon University in the US. He adds that companies are increasingly using supercomputers to boost revenues. "High-performance computing is not so much about cost reduction as it is about improving the quality of products," Gibson says.

Storage systems have the unfortunate quality of not scaling well, and the PDSI's researchers are wrestling with several related problems:

  • Disk access times have not kept pace with disk capacity. In 1990, a computer could read an entire hard drive in under a minute. Now it takes three hours or so to read the largest disks. "It's only going to get worse, and it will take longer and longer to recover from a disk failure," Miller says.
  • As the number of disks in a system increases, so does the probability that one of them will fail in any given period. Big systems at government laboratories already suffer a disk failure once or twice a day; with multipetabyte systems, that rate could rise to a failure every few minutes. (A back-of-envelope sketch of both trends follows this list.)
  • When a disk does fail, the surviving drives have to work even harder to rebuild the lost data on another disk, increasing the chances of further failures.

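Both trends are easy to see with rough numbers. The sketch below is illustrative only: the drive capacities, throughputs and mean time between failures (MTBF) are assumed round figures, not PDSI measurements.

```python
# Rough arithmetic behind the two scaling problems above. All figures are
# assumed round numbers for illustration, not measurements.

def full_read_time_s(capacity_bytes, throughput_bytes_per_s):
    """Seconds to stream an entire drive end to end at full throughput."""
    return capacity_bytes / throughput_bytes_per_s

# Capacity has grown far faster than sequential throughput.
print(f"1990 drive (40 MB at 1 MB/s):        {full_read_time_s(40e6, 1e6):.0f} s")
print(f"mid-2000s drive (750 GB at 75 MB/s): {full_read_time_s(750e9, 75e6) / 3600:.1f} h")

# With N independent drives, the expected time between failures somewhere
# in the fleet is roughly MTBF / N.
MTBF_HOURS = 1_000_000  # manufacturer-spec figure; field rates are often worse
for n_disks in (1_000, 30_000, 300_000):
    print(f"{n_disks:>7,} disks: one failure every {MTBF_HOURS / n_disks:,.1f} hours")
```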
Applications at US government laboratories - for example, simulations of the ageing of nuclear weapons - can run for months. They generate huge amounts of data, in part because they periodically copy the contents of memory to disk as "checkpoints" in case a disk or processor fails. Researchers will look for faster checkpoint/restarting methods, better fault-tolerance technologies and more efficient file systems.
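To make the checkpointing pattern concrete, here is a minimal single-process sketch in Python. The file name, the interval and the state layout are all assumptions for illustration; production HPC checkpointing runs in parallel across thousands of nodes and writes far more data.

```python
# Minimal checkpoint/restart sketch: periodically save state, resume on restart.
import os
import pickle

CHECKPOINT = "sim.ckpt"          # hypothetical checkpoint file name

def save_checkpoint(state, step):
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"step": step, "state": state}, f)
        f.flush()
        os.fsync(f.fileno())     # make sure the bytes actually reach the disk
    os.replace(tmp, CHECKPOINT)  # atomic rename: never a half-written checkpoint

def load_checkpoint():
    if not os.path.exists(CHECKPOINT):
        return None, 0
    with open(CHECKPOINT, "rb") as f:
        ckpt = pickle.load(f)
    return ckpt["state"], ckpt["step"]

state, start = load_checkpoint()
if state is None:
    state = {"x": 0.0}

for step in range(start, 1_000_000):
    state["x"] += 1.0            # stand-in for one simulation timestep
    if step % 10_000 == 0:       # interval trades checkpoint I/O against rework
        save_checkpoint(state, step + 1)
```

The checkpoint interval is the knob the researchers care about: checkpoint too often and I/O dominates the run; too rarely and each failure throws away hours of computation.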

One promising approach that is now coming into use at government laboratories is a technology called object storage, by which clients can access storage devices directly without going through a central file server. Object storage devices have processors attached to them so lower-level functions, such as space management, can be handled by the devices themselves. And because data objects contain both data and metadata, it is possible to apply fine-grained, highly intelligent controls for security and other purposes. In addition, object-based storage systems tend to be much more scalable than traditional ones.
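A toy model makes the division of labour clearer. Everything here (the class names, the hash-based placement, the "readers" metadata field) is an assumption for illustration, not a real object-storage API:

```python
# Toy object-storage model: clients locate devices directly, and each device
# manages its own space and enforces per-object controls using the metadata
# stored alongside the data.
import hashlib

class ObjectStorageDevice:
    """Stand-in for a disk with an embedded processor."""
    def __init__(self):
        self.objects = {}                      # device-local space management

    def put(self, oid, data, metadata):
        self.objects[oid] = (data, dict(metadata))

    def get(self, oid, requester):
        data, meta = self.objects[oid]
        # Metadata travels with the data, so the device itself can apply
        # fine-grained controls such as per-object access checks.
        if requester not in meta.get("readers", []):
            raise PermissionError(f"{requester} may not read {oid}")
        return data

class Client:
    """Finds the right device by hashing the object id, then talks to it directly."""
    def __init__(self, devices):
        self.devices = devices

    def _device_for(self, oid):
        h = int(hashlib.sha256(oid.encode()).hexdigest(), 16)
        return self.devices[h % len(self.devices)]

    def write(self, oid, data, metadata):
        self._device_for(oid).put(oid, data, metadata)

    def read(self, oid, requester):
        return self._device_for(oid).get(oid, requester)

cluster = [ObjectStorageDevice() for _ in range(4)]
client = Client(cluster)
client.write("results/run-42", b"...", {"readers": ["alice"]})
print(client.read("results/run-42", "alice"))
```

Because placement is computed rather than looked up, adding clients puts no extra load on any central server, which is the scalability property described above.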

Researchers will also work on protocols and APIs, especially those related to the Linux operating system. They will help develop extensions to Posix, the portable operating system interface for Unix, to enable more effective use of file systems in highly parallel computer clusters. Researchers will also work with the Open Group and the Internet Engineering Task Force to make the Network File System protocols for file access more capable in highly parallel systems.
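One class of problem such extensions target is metadata contention: under standard Posix semantics, every process that opens a file triggers its own pathname resolution. The sketch below is schematic only; the names and the "group open" mechanism are assumptions standing in for the kind of extension under discussion, not any actual proposed API.

```python
# Schematic sketch of the parallel "open storm" problem and a group-open fix.
# Names and mechanism are illustrative assumptions, not a real Posix extension.

class MetadataServer:
    def __init__(self):
        self.lookups = 0
    def lookup(self, path):
        self.lookups += 1          # each lookup costs a round trip and a seek
        return ("handle", path)

def naive_open_storm(server, path, n_ranks):
    # N ranks, N name lookups: the metadata server becomes the bottleneck.
    return [server.lookup(path) for _ in range(n_ranks)]

def group_open(server, path, n_ranks):
    handle = server.lookup(path)   # rank 0 resolves the name once...
    return [handle] * n_ranks      # ...then shares the handle with every rank

for opener in (naive_open_storm, group_open):
    server = MetadataServer()
    opener(server, "/scratch/run-42/output", n_ranks=10_000)
    print(f"{opener.__name__}: {server.lookups} metadata lookups")
```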

The PDSI will explore a number of emerging technologies, such as phase-change Ram (Pram), Miller says. Pram, recently announced by Samsung Electronics, offers the speed of dynamic Ram with the non-volatility of flash memory. Miller says it is the perfect place to put metadata because it can be accessed much more quickly than if it were on disk, making object storage systems much faster.
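The speed argument is straightforward arithmetic. The latencies below are assumed ballpark figures (a disk metadata lookup costs a seek plus rotation; Pram is byte-addressable at near-Dram speed), not published Pram specifications:

```python
# Ballpark comparison of metadata lookups on disk versus in Pram.
# Both latency figures are assumptions for illustration.

DISK_LOOKUP_S = 8e-3     # assumed seek + rotational latency per lookup
PRAM_LOOKUP_S = 100e-9   # assumed Pram read latency per lookup

lookups = 1_000_000      # metadata operations for one large job
print(f"disk: {lookups * DISK_LOOKUP_S:,.0f} s")   # 8,000 s, over two hours
print(f"pram: {lookups * PRAM_LOOKUP_S:.2f} s")    # 0.10 s
```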

Miller says Pram might also be used to store indexes used by search engines, greatly accelerating them as well. That increased speed may prove to be of interest to businesses such as oil companies that have huge stores of private data but lack the enormous resources of a company like Google.

Few corporations will ever have systems the size of those at the US government laboratories, with tens of thousands of disks, says Miller. But even desktop systems, which will have more and more disk drives over time, will experience some of the challenges the PDSI will address.

"I can't tell you yet which ones they will be," Miller says. "But problems at the high end have a nasty habit of trickling down to the low end."