It’s all about the graphics.
At CES, Intel and AMD chatted up their efforts at integrating graphics functions into their CPUs. For AMD, this has been a long road that began with its purchase of leading GPU maker ATI. For Intel, the first steps came with its Larrabee project and seem to have come to fruition with Sandy Bridge.
Now think, who does that leave out? The other GPU maker, NVIDIA, that’s who.
But no longer. At CES NVIDIA announced that it plans to build a high-performance hybrid of a CPU and GPU based on the smartphone-sustaining, low-power-loving ARM architecture. The CPU/GPU (codenamed Project Denver) will go into everything from PCs on up to supercomputers, says NVIDIA.
In an article I wrote for the January 2011 issue, AMD’s Chuck Moore explained why graphics is the next step in CPUs:
What will keep computing marching forward, according to Moore, is the integration of CPUs and graphics processing units (GPUs) into what AMD calls an accelerated processing unit, or APU. Say you want to brighten an image: Just add 1 to the number representing the brightness of every pixel. It'd be a waste of time to funnel all those bits single file through a CPU core, or even 16 of them, but GPUs have dedicated hardware that can transform all that data practically at once.
It turns out that many modern workloads have just that kind of data-level parallelism. Basically, you want to do the same thing to a whole lot of data.
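Moore’s brightness example can be sketched in a few lines. This is a hypothetical illustration using NumPy, whose vectorized operations apply one instruction across a whole array at once—the same data-parallel pattern a GPU exploits in dedicated hardware (the image values here are made up for the sake of the example):

```python
import numpy as np

# A hypothetical 4x4 grayscale "image": one brightness value per pixel.
image = np.array([[ 10,  20,  30,  40],
                  [ 50,  60,  70,  80],
                  [ 90, 100, 110, 120],
                  [130, 140, 150, 160]], dtype=np.uint8)

# Brighten every pixel at once: a single vectorized add, rather than
# funneling each pixel single file through a CPU core in a loop.
brighter = image + 1
```

The point isn’t the library—it’s that the operation is identical for every pixel, so hardware that can touch many data elements per instruction finishes the job in a fraction of the time.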
That key insight drove AMD to acquire a leading GPU maker, ATI Technologies, and start work on jamming their two products together. So a future processor, from AMD at least, would probably contain multiple CPU cores connected to several GPU elements that would step in whenever the work is of a type that would gum up a CPU core.
Besides keeping NVIDIA from feeling left out, the deal could be a good fit for ARM, too. Intel, with its x86 Atom platform, is encroaching (or trying to, anyway) on ARM’s smartphone and tablet turf. And ARM is fighting back by going up the computing food chain, with some companies working on ARM-based servers. A tie-up with an established PC player like NVIDIA could help it gain some traction against Intel. (And AMD, I suppose, but really Intel.) Microsoft says a next generation of Windows will work with ARM processors, by the way.
Wait, you say. Doesn’t IBM make CPUs too? Well, yes, and you could argue that it got in on this parallel/graphics stuff earlier than everybody, with the Cell processor. For that same article, I interviewed Jim Kahle, who led the design of Cell:
With Cell, the processor released in 2006 to power the PlayStation 3, IBM has already gone in that direction. Instead of actual GPU functions, it developed a more flexible core that specializes in executing the same instruction on several pieces of data at once. IBM, with help from Toshiba and Sony, stuck eight of the new cores on the same chip with a more traditional processor core. But that's not quite where Kahle, who led the Cell project, sees things going in the future. Instead he expects to see a mix of general-purpose cores and cores specialized for one task—encryption, decryption, video encoding, decompression, anything with a well-defined standard.
(Some interesting thoughts of Kahle’s got cut from the final article. He went on to say that, because today’s specialized task might be totally irrelevant a few years down the line, IBM is working on reconfigurable special-purpose cores.)
If you stop and think about what IBM does today, a direct integration of GPUs doesn’t make sense. IBM, for example, has 100 percent of the game-console market and zero percent of the PC market. Game consoles are probably always going to require stand-alone GPUs. (Even with Sandy Bridge, that’s the case: in December NVIDIA crowed about the 200 new products that will feature both Intel’s Sandy Bridge and its GeForce GPUs.) So IBM, as they say, has no dog in this fight.
Even so, it should be a really interesting fight. Stay tuned.