Multi-core and parallel applications: what's the future?

With multiple cores now the hot trend in the processor industry, Intel and AMD are shoehorning as many cores onto a single chip as possible. But where's the software to harness all that horsepower?


With multiple cores now the hot trend in the processor industry, just how many can the likes of Intel or AMD shoehorn onto a single chip?

The latest answer, if Intel is to be believed, is 80. That's right: 80 cores on a single die, delivering teraflops-class performance. At least that's the number allegedly on the prototype chip held up by Intel CEO Paul Otellini at the Intel Developer Forum last week. As a technological tour de force, it was truly a moment to remember -- even if it's years away from production reality.

But what use are all those cores? Where's the software that's going to harness all that horsepower?

Otellini's company will launch quad-core processors next month. Such is the urgency Intel feels about keeping up with AMD, the company that has made all the running in both the enterprise and the consumer space over the last three years, that Santa Clara is putting out a product that's not quite an integrated quad-core design.

In fact, it's two dual-core chips in a multi-chip package; the integrated product, with four cores on a single die, won't ship until the second half of 2007, once the company has moved its production to a 45nm process. Smaller features mean more circuitry on a single chip -- four cores rather than two.

It's not just the larger physical package that disadvantages the twin dual-core approach. Without tight integration, the two dies will draw more power, and won't communicate as fast as they would if they were on the same die, according to Insight 64 analyst Nathan Brookwood.

Intel's competitor reckons that it's way ahead. AMD CTO Phil Hester said that Intel has "just screwed up" and is only now responding to what the Texas company did two or three years ago.

So with few real advances in speed available for power and cooling reasons, both companies are competing to cram more cores onto each chip. But the question is whether Intel's 80-core chip and whatever it and AMD produce in the meantime, will actually make a real difference -- or is it more marketing smoke and mirrors?

The problem is that most software development has been predicated on single CPUs simply getting faster. That has suited programmers, who could keep adding code to do cool stuff, secure in the knowledge that next year's chips would handle it. It's no longer a valid assumption.

With multiple cores the future and speed increases no longer there to be taken for granted, how can the software industry respond to this sea change? Is parallel programming due for a return to the fashionable status it enjoyed in the 1980s and before?

For decades, parallel or concurrent programming -- in effect, multiple program threads within a single process -- has been seen by many as the best way to maximise hardware utilisation. On a single CPU that buys you relatively little; a chip with multiple cores would seem ideal for the technique. But parallel programming is universally acknowledged to be very hard to do, and as a result there's very little parallel software out there.
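The basic idea can be shown in a toy Python sketch -- purely illustrative, with made-up function names, and nothing from Intel, AMD or any product mentioned here: split one job into independent chunks and run them on a pool of threads within a single process.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(numbers):
    """One separable task: sum one slice of the data."""
    return sum(numbers)

def parallel_sum(data, workers=4):
    """Split a job into independent chunks and farm them out to threads."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # In CPython the GIL means threads only overlap for I/O-bound work;
    # CPU-bound code would use a process pool, but the shape is identical.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))
```

Even in this trivial form, the programmer has to decide how to partition the data and recombine the results -- the kind of reasoning that gets genuinely difficult in real applications.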

Perhaps it needn't stay that way. Programming gurus Herb Sutter and James Larus, both of Microsoft, reckon that, on servers at least, the problem isn't as huge as it might appear. They argue that "concurrent programming languages and tools are at a level comparable to sequential programming at the beginning of the structured programming era", and continue: "For many server-based programs, however, concurrency is a 'solved problem'."

That's because server-side applications have inherent parallelism, say Sutter and Larus, citing web and database servers that can handle concurrent access to a single data store. Client-side applications are far more difficult, because the various elements that make up a client application "interact and share data in myriad ways", making it hard to figure out how to program concurrent processes.
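Sutter and Larus's server-side point can be made concrete with another toy Python sketch (the names and structure are invented for illustration): each incoming request is independent of every other, so the only coordination needed is around the shared store itself.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

store = {}                     # the shared data store
store_lock = threading.Lock()  # the single point of coordination

def handle_request(key, value):
    # Each request is independent of every other request; the only
    # shared state is the store, guarded by one lock.
    with store_lock:
        store[key] = value

# A hundred "requests" arrive concurrently; none needs to know
# anything about the others.
with ThreadPoolExecutor(max_workers=8) as pool:
    for i in range(100):
        pool.submit(handle_request, "user%d" % i, i)
# Leaving the pool's context waits for every request to finish.
```

The parallelism here comes free from the workload's shape, which is why the authors call server-side concurrency largely solved; client applications, whose components share state in far messier ways, offer no such easy decomposition.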

For Sutter and Larus, developers need to find new ways of programming.

"The software industry needs to get back into the state where existing applications run faster on new hardware. To do that, we must begin writing concurrent applications containing at least dozens, and preferably hundreds, of separable tasks", they argue. "Concurrency also opens the possibility of new, richer computer interfaces and far more robust and functional software."

The answer, for them, is new paradigms, tools and constructs. But will it happen? The industry's huge growth story right now, virtualisation, means that IT managers can simply run each VM on its own core.

Given the marketing push and technological effort going on right now to link the benefits of virtualisation with multi-core CPUs, one has to suspect that for most developers and users alike, it's going to be business as usual -- only instead of extra clock cycles, it'll be extra cores that determine just how fast the software runs.

VMware's technical marketing manager Richard Garsthagen tends to agree. He links virtualisation and multi-core chips by pointing out that the amount of computing power available to IT managers from a multi-core server is huge. "What else are they going to do with all that power other than run a virtualisation system?", he argues.

He suggests that virtualisation over multiple cores is a form of parallelism anyway: it's still multiple tasks running in parallel, just not within a single application on a single chip.

"And with the advance of Web services and frameworks such as .Net, applications are becoming componentised and are getting smaller", he says, "so the need for parallelism is less."

We stand at a crossroads of software development methodologies: the purists of parallel programming versus the pragmatists of virtualisation. Which will emerge victorious?
