Intel CTO: Computing's future in multicore machines

For much of his 34 years at Intel, Justin R. Rattner has been a pioneer in parallel and distributed processing. His early ideas did not catch on in the market, but their time has now come, he recently told Computerworld's Gary Anthes.


Are we at the end of the line for microprocessor clock speeds? We'll see modest growth, 5% to 10% per generation. Power issues are so severe that there won't be any radical jumps. If you get a 2% improvement in clock speed but at a 5% increase in power consumption, that's not a favorable trade-off.
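A quick back-of-the-envelope calculation shows why that trade-off loses. The 2% and 5% figures are Rattner's; the baseline values below are illustrative, not Intel data:

```python
# Illustrative arithmetic for the trade-off Rattner describes; the 2% and
# 5% figures come from the interview, the 1.0 baselines are hypothetical.
baseline_perf, baseline_power = 1.00, 1.00

new_perf = baseline_perf * 1.02    # 2% clock-speed improvement...
new_power = baseline_power * 1.05  # ...bought with a 5% power increase

perf_per_watt = new_perf / new_power
print(f"performance per watt: {perf_per_watt:.3f}")  # ~0.971, so efficiency falls
```

Performance per watt drops below the starting point, which is exactly the unfavorable trade-off he means.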

I keep reassuring Bill Gates that there is no magic transistor that is suddenly going to solve his problem, despite his strong desire for such a development.

What exactly is Gates worried about? First, steadily rising single-thread performance benefits the entire existing base of software, and that rise is coming to an end. Second, multicore and, later, many-core processors require a new generation of programming tools, and given the rudimentary state of parallel software, the investment across the entire computing industry will be very large. Third, those tools have to be applied by people with the skills to use them effectively.

Retraining existing programmers and educating a new generation of developers coming out of school is another formidable challenge. It will take years, if not decades, to reach the point where virtually all programmers assume the default programming model is parallel rather than serial.
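To make that mindset shift concrete, here is a minimal sketch in Python, chosen for brevity; the language and library are this article's illustration, not tools named in the interview. The same computation is written serially and in the parallel-by-default style:

```python
# A minimal sketch of the serial-vs-parallel mindset shift, using
# Python's standard library (not a tool named in the interview).
from concurrent.futures import ProcessPoolExecutor

def work(x: int) -> int:
    return x * x  # stand-in for a CPU-bound task

if __name__ == "__main__":
    inputs = list(range(8))

    # Serial: the default most programmers assume today.
    serial = [work(x) for x in inputs]

    # Parallel: the same map, spread across however many cores exist.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(work, inputs))

    assert serial == parallel
```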

So the only way to keep Moore's Law going is to add more computing cores to a microprocessor chip? The only way forward in terms of performance - but we also think in terms of power - is to go to multicores. Multicores put us back on the historical trajectory of Moore's Law. We can directly apply the increase in transistors to core count - if you are willing to suspend disbelief for a moment and accept that you can actually get all those cores working together.
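That caveat about getting all the cores working together is usually quantified with Amdahl's Law, which Rattner does not name here but which shows how the serial fraction of a program caps multicore speedup. The 10% serial fraction below is a hypothetical example:

```python
# Amdahl's Law: speedup from N cores is capped by whatever fraction of
# the program stays serial. The 10% figure is hypothetical, not quoted.
def amdahl_speedup(cores: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for n in (2, 8, 80):
    print(f"{n} cores -> {amdahl_speedup(n, 0.10):.1f}x")
# 2 cores -> 1.8x, 8 cores -> 4.7x, 80 cores -> 9.0x: even 10% serial
# code keeps 80 cores far from an 80x speedup.
```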

How many cores might we see on a chip in five years? We have been talking about terascale for the past couple of years, and we are demonstrating an 80-core [processor chip]. Our [future] product is Larrabee. It's not 80 cores; we can do things like that in research because we don't care how much it costs. Our hope is that the research chip will stimulate software developers to bring terascale applications to market. We are talking about early production [of Larrabee] in 2009.

How many cores will Larrabee have? I can't comment on details about the first product. It's sufficient to say "more than 10", which is what we define as the boundary between multicore and many-core. It's better to think of it as a scalable architecture family, with varying numbers of cores depending on the application.

In five years, will virtually all new software be written for multiple processors? Yes, but people will not go back and rewrite a lot of existing software. I don't think word processors need 16 cores grinding away on them.

So, will we need a new kind of programmer then? Yes, it's a whole new ballgame. We have been trying to get the attention of the academic community on this problem. They got all fired up about parallel programming about 20 years ago. Everywhere you went, people were working on parallel programming, but it never came down to the mainstream; it remained in the high-performance computing space.

Now, every computer you buy has two or four or eight or maybe a lot more cores. Twenty years ago, the market for [multiprocessor] machines wasn't big enough to support the kind of research and development needed to really move the ball down the field. It's a disappointment, but not a surprise, that not much happened over those 20 years. Now the financial incentive is there, and the research and development budgets are there. We know we can't sell hundreds of millions of processors that people can't program. But we are on the flat part of the learning curve right now.
