Revving up the supercomputer

Supercomputing requires productivity, not just processor performance

For years, the name of the game in supercomputing has been raw speed, with hardware and software designers striving to boost the number of floating-point operations per second -- FLOPS -- that a machine could crunch. Gigaflops computers gave way to teraflops machines, which are now yielding to petaflops models -- those able to execute 1 quadrillion computations per second.

But those performance ratings are misleading, because they ignore a huge portion of the time required to solve a problem with these multiprocessor computers -- the hours, weeks or even years it can take for software designers to formulate a solution and for programmers to code and test it.

That's why, in 2002, the US Defense Advanced Research Projects Agency (DARPA) changed the name of its High Performance Computing Systems program to High Productivity Computing Systems (HPCS). DARPA hoped that its contractors -- Cray, IBM and Sun Microsystems -- could come up with programming languages and tools to improve software development productivity tenfold.

Sun recently lost its bid to go to the next phase of the DARPA job, but that hasn't stopped it from forging ahead with its HPCS programming language, called Fortress. In January, Sun released an early version of a Fortress interpreter. Similarly, Cray and IBM have released first-draft implementations of their own new languages, Chapel and X10.

The three languages, all available as open-source software, differ substantially when it comes to details, but they have this much in common:

  • They are aimed at a wide range of multiprocessor computers and clusters, from the "petascale" behemoths at national laboratories to the multicore processors now appearing on desktops. They are also intended for use in at least some mainstream, business-oriented applications, not just in science and engineering.
  • They try to make it easier for programmers to exploit the various levels of parallelism in application software: threads, multicore chips, multiprocessors and distributed clusters.
  • They employ techniques to relieve programmers of routine work and help them avoid coding errors. For example, all use type inference, so programmers don't have to specify the type of every variable, a chore that is tedious and error-prone. And they synchronise operations without locking, so that common problems such as deadlock are avoided. (A brief sketch of both ideas follows this list.)
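
Neither the article nor the early language releases show any code, so the following is only a minimal sketch in ordinary Java, standing in for the HPCS languages, of the two ideas named above: type inference (the "var" keyword lets the compiler work out a variable's type) and lock-free synchronisation (an AtomicLong counter that many threads update through atomic hardware operations rather than locks). The class name ProductivityDemo and the million-element sum are invented purely for illustration.

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.stream.LongStream;

    public class ProductivityDemo {
        public static void main(String[] args) {
            // Type inference: the compiler deduces that "total" is an AtomicLong,
            // so the programmer never writes the type out by hand.
            var total = new AtomicLong();

            // Lock-free synchronisation: many threads update the shared counter
            // through an atomic add (a compare-and-swap under the hood), so no
            // locks are taken and the threads cannot deadlock on this counter.
            LongStream.rangeClosed(1, 1_000_000)
                      .parallel()
                      .forEach(i -> total.addAndGet(i));

            System.out.println("Sum of 1..1,000,000 = " + total.get());
        }
    }

Because the sum runs as a parallel stream, the updates really do race against one another; the atomic counter keeps the result correct without a single lock, which is the sort of bookkeeping the HPCS languages aim to take off the programmer's hands.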

John Mellor-Crummey, a computer science professor at Rice University, salutes the productivity goal of the three languages, noting, "Programming of parallel systems is much too hard today."

But he says it won't be easy to evolve the nascent languages -- which now run on single, shared-memory systems -- to run efficiently on big, distributed-memory parallel systems. "Until then, these languages won't see much attention," Mellor-Crummey says.

Eric Allen, a co-leader of the Fortress project at Sun Labs, says the language is ideally suited for relatively static environments. But applications that do a lot of dynamic code-loading or Web accessing would probably still be coded in Java, he adds. He says a full-function Fortress compiler will be developed and will include optimisation features that have never existed in a language before.
