Intel CTO: Computing's future in multicore machines

For much of his 34 years at Intel, Justin R. Rattner has been a pioneer in parallel and distributed processing. His early ideas did not catch on in the market, but their time has now come, he recently told Computerworld's Gary Anthes.

What are the academic researchers doing now? Until recently, they weren't even teaching parallel programming. You could get a Ph.D. in computer science and never write a parallel program. But now hundreds of universities worldwide are reintroducing parallel programming into their curricula. Intel and other companies are working on funding programs to reignite academic research in parallel programming and architectures.

We went out to the universities and talked about these plans. They said, "This is great, because we weren't talking about this." It's sort of like the elephant in the room. We are all buying dual-core this and quad-core that, but no one was saying, "We really don't have much technology to do all this stuff."

What sort of research are you doing internally? Take a look at our Ct, a dialect of C for "throughput" computing. It raises the level of abstraction so programmers aren't dealing with parallelism in an explicit fashion. You can think very naturally about data structures, and that has a huge impact on productivity. It lets you express data parallelism very easily, and what's different about it is that it deals with all kinds of data structures in an almost seamless way.
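The interview doesn't show any Ct syntax, so the sketch below conveys the same idea in standard C++17, using its parallel algorithms purely as a stand-in for the style Rattner describes: the programmer writes whole-container operations and a reduction, and the library, not the programmer, decides how to partition the work across cores.

    // A minimal sketch of implicit data parallelism, assuming C++17
    // parallel algorithms as a stand-in for Ct (Ct itself is not shown
    // in the interview). There are no threads, locks, or chunk sizes in
    // the source; the runtime handles the decomposition.
    #include <algorithm>
    #include <cstdio>
    #include <execution>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<double> a(1'000'000, 1.5), b(1'000'000, 2.0), c(1'000'000);

        // Element-wise multiply over whole containers, parallelized by the library.
        std::transform(std::execution::par, a.begin(), a.end(), b.begin(),
                       c.begin(), [](double x, double y) { return x * y; });

        // A parallel reduction over the result.
        double sum = std::reduce(std::execution::par, c.begin(), c.end(), 0.0);
        std::printf("sum = %f\n", sum);
    }

The division of labor is the point: the code states what to compute over the data, and the question of how many cores to use never appears, which is the productivity gain Rattner is pointing at.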

Can scheduling those parallel threads be done just in software? If you rely on the operating system to schedule all those threads, you are probably dead in the water. We've developed an architecture for hardware thread scheduling and done extensive simulation of it to understand the trade-offs and refine the mechanism. It's too early to say which product will have it and when it will reach the market.
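Intel hasn't disclosed that hardware mechanism, but a software analogy suggests why naive OS scheduling breaks down: creating one kernel thread per fine-grained task pays thread-creation and context-switch costs every time, whereas a user-level pool hands tasks to a fixed set of workers. The TaskPool below is a hypothetical sketch of that contrast, not an Intel interface.

    // A minimal user-level task pool, assuming fine-grained tasks that
    // would be too expensive to run as one OS thread each. A fixed set
    // of workers pulls tasks from a shared queue.
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    class TaskPool {
    public:
        explicit TaskPool(unsigned n) {
            for (unsigned i = 0; i < n; ++i)
                workers_.emplace_back([this] { run(); });
        }
        ~TaskPool() {
            { std::lock_guard<std::mutex> lk(m_); done_ = true; }
            cv_.notify_all();
            for (auto& w : workers_) w.join();
        }
        void submit(std::function<void()> task) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(task)); }
            cv_.notify_one();
        }
    private:
        void run() {
            for (;;) {
                std::function<void()> task;
                {
                    std::unique_lock<std::mutex> lk(m_);
                    cv_.wait(lk, [this] { return done_ || !q_.empty(); });
                    if (done_ && q_.empty()) return;
                    task = std::move(q_.front());
                    q_.pop();
                }
                task();  // runs on a pooled worker, not a freshly created OS thread
            }
        }
        std::vector<std::thread> workers_;
        std::queue<std::function<void()>> q_;
        std::mutex m_;
        std::condition_variable cv_;
        bool done_ = false;
    };

Handing off a closure through the queue is far cheaper than creating and destroying a kernel thread per task; hardware scheduling, as Rattner describes it, pushes that dispatch cost down further still.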

But aren't there many applications that are inherently not suited to parallel processing? We have faced this problem internally. There is this unfounded belief in Amdahl's Law [which limits the speedup that can be gained by adding more processors]. It goes: "I've got this program, and it doesn't get faster after four cores. You put eight cores on it, and it still runs at the same speed." I hear that all the time. But then we take a look at it and we go, "You know, you didn't really think about it." There may be another way where you don't get as much performance in the two-to-four-core range, but it keeps scaling: you take a slower initial position, but you can scale to 16 or 32 processors. It's a matter of clever algorithms, a different decomposition of the problem and better tools that make decisions on the fly for the programmer.
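Amdahl's Law makes that trade-off concrete: with parallel fraction p, the speedup on n cores is 1 / ((1 - p) + p/n). The sketch below compares a hypothetical decomposition A (80% parallel, no overhead) against a decomposition B that pays a 1.5x constant-factor cost to reach 98% parallel; both sets of numbers are illustrative, not Intel's.

    // A back-of-the-envelope model of the trade-off described above,
    // using Amdahl's Law: speedup(n) = 1 / ((1 - p) + p / n).
    #include <cstdio>

    double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

    int main() {
        for (int n : {2, 4, 8, 16, 32}) {
            double a = amdahl(0.80, n);        // A: 80% parallel, no overhead
            double b = amdahl(0.98, n) / 1.5;  // B: 98% parallel, 1.5x overhead
            std::printf("%2d cores: A = %5.2fx, B = %5.2fx\n", n, a, b);
        }
    }

Under these assumed figures, B loses at two cores (about 1.3x versus 1.7x), roughly ties at four, then pulls away, reaching about 13x at 32 cores while A stalls near 4.4x; that is exactly the "slower initial position, but it keeps scaling" pattern Rattner describes.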

What's the future of spintronics, in which information is based on the spin of an electron rather than on its charge? Charge-based electronics is going to run out of steam. The memory guys have already hit that point, basically. They can't make those memory cells any smaller. So researchers are looking at quantum effects like spin, and some early results aren't bad. Spin has some nice things about it, both in terms of performance and power.

Could we have working computers based on spin in 10 years? Yes, but I wouldn't be more aggressive than that. We'd want to make the transition seamlessly. We wouldn't want to say, "OK, there will be no new microprocessors for five years while we figure out spintronics."

"Recommended For You"

Software fails to cope with processor overload Intel, AMD multicore chip sales may be slowed by software