Confessions of a Hot Chips n00b

I spent last week at the Hot Chips conference, which, for you non-cognoscenti, is an all-star conference on high-performance microprocessors. I watched as Intel, NVIDIA, IBM, AMD, and a constellation of other chip designers presented Power-pointy microchip architectures until my brain had disintegrated into a thin gruel. I would like to share some observations, but they will all be borrowed, as my melting neurons were unable to produce their own.

It's not news that everything is about multicore and GPGPUs (that's general-purpose graphics processing units), and the Hot Chips lineup reflected that fact. For those of you unlucky enough to know even less than I do, a GPGPU is a sort of semi-holy grail for system-on-a-chip architectures. GPUs have been used for, well, graphics rendering and processing pretty much since the dinosaurs roamed the earth. But recently, with Moore's Law sending the semiconductor industry into its screaming death spiral, people have looked for ways out of relying solely on CPUs (central processing units), which are brainy but compensate for their intelligence by being a lot less energy efficient per computation.

If you can get the CPU to be the brains of the operation, so to speak, you can have it direct a bunch of heavy-lifter GPUs, whose strength lies in their amazing ability to crunch numbers that would make your head explode. They can do that because they churn through floating-point operations in massively parallel fashion.

But the problem is that in order to use these hired thugs for anything other than video processing, you used to have to lie to them and tell them they were working with graphics. You did that with a thin layer of code that converted your instructions into the only language they could understand: red, blue, and green pixels and where to put them. NVIDIA was the first to smooth that over, inventing CUDA, a programming platform that lets you write more or less ordinary C for the GPU instead. Then the Cell processor came along. Now it's Intel's turn, with the much-vaunted Larrabee architecture, which isn't even a chip yet. But it has already made big waves, because it takes GPU programming out of the proprietary NVIDIA pool. Now you don't have to learn CUDA at all. That is the chip engineering equivalent of a swift slap across the face with a white glove: 85 percent of the world's programmers already know how to use Intel's x86 architecture (not to mention C).
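
For flavor, here's roughly what that looks like in practice: a minimal CUDA sketch of my own, not anything shown at the conference, that runs a "SAXPY" calculation on the GPU. Every name and size in it is arbitrary; the point is simply that the kernel reads like plain C, with no pixels or shaders anywhere in sight.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal CUDA "SAXPY" kernel: y = a*x + y, one element per thread.
// No pixels, no shaders -- just C-style arithmetic running on the GPU.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                       // a million floats, chosen arbitrarily
    size_t bytes = n * sizeof(float);

    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy the data to the GPU, run the kernel, copy the result back.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);                // expect 5.0

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}
```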

A quick rundown of several technologies at the show, and the associated commentary, after the jump.

1. A company called Audience has built a chip called the A1010, a voice processor based on the human hearing system. This digital signal processing chip replaces the traditional fast Fourier transform with something they call a fast cochlear transform; a rough sketch of the kind of FFT-based processing it replaces follows at the end of this item. (A Spectrum story that examines this technology in depth is coming soon.)

A bigwig in attendance thought it was a great idea because it works more like the human ear than like a machine. Cell phones equipped with these babies will block out the nasal lady announcing the gate change, the shrieking baby, the man nattering about The Big Merger, and even a jackhammer. The best part? It can probably do it on your end too, adjusting the volume to cancel out noise not just on the other guy's end but on yours. To my (limited) knowledge, the only way they could possibly do that is by installing a little finger that extends out of the phone to plug your ear, but who knows.

And the most important information? The thing is now in the LG Cyon and the Sharp SH705iII.
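
For what it's worth, nobody outside Audience knows what's inside the fast cochlear transform, and the sketch below is emphatically not it. It's just a toy version of the conventional FFT-style approach the chip is said to replace: chop the audio into frequency bins (with a library such as cuFFT) and squash any bin that never climbs much above a running noise-floor estimate. All of the names and thresholds are invented for illustration.

```cuda
#include <cuda_runtime.h>
#include <cuComplex.h>

// Conventional spectral gating, NOT Audience's fast cochlear transform.
// Assumes the audio frame has already been transformed into complex
// frequency bins (e.g., with cuFFT). Bins whose magnitude sits near the
// estimated noise floor are treated as noise and squashed; the rest pass.
__global__ void spectralGate(cuFloatComplex *bins,
                             const float *noiseFloor,  // per-bin noise estimate
                             int nBins,
                             float threshold)          // e.g. 2.0 = ~6 dB over the floor
{
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= nBins) return;

    float mag = cuCabsf(bins[k]);
    if (mag < threshold * noiseFloor[k]) {
        // A real system would apply a smoother gain curve here to avoid
        // the "musical noise" that hard gating produces.
        bins[k] = make_cuFloatComplex(0.0f, 0.0f);
    }
}

int main()
{
    const int nBins = 512;
    cuFloatComplex *bins;
    float *noise;
    cudaMalloc(&bins, nBins * sizeof(cuFloatComplex));
    cudaMalloc(&noise, nBins * sizeof(float));
    cudaMemset(bins, 0, nBins * sizeof(cuFloatComplex));
    cudaMemset(noise, 0, nBins * sizeof(float));

    // In a real pipeline the bins would come from an FFT of a windowed
    // audio frame; here we just launch on zeroed dummy buffers.
    spectralGate<<<(nBins + 255) / 256, 256>>>(bins, noise, nBins, 2.0f);
    cudaDeviceSynchronize();

    cudaFree(bins);
    cudaFree(noise);
    return 0;
}
```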

2. Faint blurring around objects in HD video, called "halos," is apparently a bigger problem than colon cancer, judging by the concentrated brainpower going into solving it. Witness three separate chips, rolled out by AMD (which bought ATI to integrate graphics and CPUs), NXP Semiconductors, and Toshiba, each taking on this life-threatening situation so that AV nerds need never again struggle with faint halos around the helicopters populating their video games.

You see, HD right now is "fudged." Film's 24 frames per second have to be translated to the 60-Hz refresh rate that an HDTV is capable of, which means that most of the 60 frames the set displays each second have to be interpolated by image-processing software. The result is a weird kind of visual time lag that the presenter showed in a frame-by-frame analysis of a helicopter flying past a building with many windows in the background. Let's say the helicopter is in front of the first row of windows in one frame and in front of the second row in the next. Because the TV has pushed through only a fraction of the information the processor needs to visually interpret the three-dimensional location of the helicopter relative to the building, the chip just starts making things up. So between the first and second row of windows in the background, instead of a smooth wall, you see a schizophrenic pattern of "new" windows that the computer threw in there as its best guess for what we should be looking at. (A bare-bones sketch of the naive frame blending that produces this sort of artifact follows at the end of this item.)

Anyway, AMD's mediaDSP solves that problem via a whole system of flow charts that I never want to see again. This seems like a pretty boring payoff for buying ATI.
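
None of the three vendors showed source code, so the sketch below is only my own bare-bones illustration of where the trouble starts: synthesizing an in-between frame by blending the two real frames around it. Per-pixel blending like this smears anything that moves, which is exactly how you end up with ghosts and halos; the fix is the motion-compensated interpolation (and the flow charts) the vendors were presenting. All names and parameters below are mine.

```cuda
#include <cuda_runtime.h>

// Naive temporal interpolation: build an in-between frame as a weighted
// blend of the two surrounding real frames. This is the zero-effort
// baseline; because it ignores motion, a fast-moving helicopter gets
// averaged with the windows behind it, which is where ghosting and
// "halo" artifacts come from. Motion-compensated interpolation instead
// estimates per-block motion vectors and samples along them.
__global__ void blendFrames(const unsigned char *prev,
                            const unsigned char *next,
                            unsigned char *out,
                            int nPixels,
                            float t)   // 0..1, position between the two real frames
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < nPixels)
        out[i] = (unsigned char)((1.0f - t) * prev[i] + t * next[i] + 0.5f);
}

int main()
{
    // 1280x720 single-channel frames, just to have something to launch on.
    const int nPixels = 1280 * 720;
    unsigned char *prev, *next, *mid;
    cudaMalloc(&prev, nPixels);
    cudaMalloc(&next, nPixels);
    cudaMalloc(&mid, nPixels);
    cudaMemset(prev, 0, nPixels);
    cudaMemset(next, 255, nPixels);

    // Synthesize the frame 40 percent of the way from prev to next.
    blendFrames<<<(nPixels + 255) / 256, 256>>>(prev, next, mid, nPixels, 0.4f);
    cudaDeviceSynchronize();

    cudaFree(prev); cudaFree(next); cudaFree(mid);
    return 0;
}
```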

3. The really big reveal of the day was the architecture of ANTON, a specialized chip that is optimized for a molecular dynamics simulation engine. Yeah, yeah, bear with me.

It's a supercomputer from New York-based D. E. Shaw Research, following in the footsteps of everyone who is turning to GPUs to take over from CPUs. D. E. Shaw's constellation of polymaths includes biologists, computational chemists, and electrical engineers, among other scientists, all working on machines that can simulate molecular dynamics. Here's why you want a molecular dynamics simulation engine: drug design.

Right now, drug design averages five years before clinical trials can even get started. You have to start with 10 000 petri dishes and test 10 000 interactions. Then you have to pay lots of people to physically examine the results. Then, of those 10 000 interactions, you pick the best ones. You whittle them down until you have something you can test in mice. Then you test it in monkeys. Then you can start clinical trials with human test subjects. That's ten years for one drug. But if you could model that entire first part, you'd get rid of your 10 000 petri dishes; your countless man-hours wasted on people checking the interactions manually; your lather, your rinse, your repeat. With the right supercomputer, you could shorten drug development cycles by five years.

"Think of it as a CAD tool for drug design," said presenter Martin Deneroff. The reason no one has been able to do that is the enormity of parallel processing power required-- more than you can find in a Roadrunner, a Cray and a Blue Gene L all put together. "The designers of these supercomputers couldn't build a general purpose computer that goes faster," said Deneroff. "They're not useful for drug design." On any of these, just one simulation will take a year to complete. Anton was designed to get this number to under one month. The solution? Throw out all general processing functions: you'd have a computer that literally does molecular dynamics and nothing else.

I'm pretty confident that ANTON was the biggest deal at Hot Chips. But, on a note of caution, and as was made clear in the panel session on Monday night that looked back at all the bad predictions and misfires over the past 20 years, my opinion may be subject to change.
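
Anton's internals weren't presented as code, and the sketch below is not D. E. Shaw's design, just textbook molecular dynamics: every atom computes a Lennard-Jones force against every other atom, then takes one tiny time step. The all-pairs loop is the point. At the tens of thousands of atoms and trillions of femtosecond-scale time steps a useful protein simulation needs, it's easy to see why a general-purpose machine chokes, and why a chip that does only this can win. All parameters here are arbitrary.

```cuda
#include <cuda_runtime.h>

// Textbook O(N^2) molecular dynamics, NOT Anton's algorithm: every atom
// computes a Lennard-Jones force against every other atom.
__global__ void computeForces(const float3 *pos, float3 *force, int n,
                              float epsilon, float sigma)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float3 f = make_float3(0.0f, 0.0f, 0.0f);
    float3 pi = pos[i];

    for (int j = 0; j < n; ++j) {              // all pairs: the N^2 bottleneck
        if (j == i) continue;
        float dx = pi.x - pos[j].x;
        float dy = pi.y - pos[j].y;
        float dz = pi.z - pos[j].z;
        float r2 = dx * dx + dy * dy + dz * dz;
        float s2 = (sigma * sigma) / r2;
        float s6 = s2 * s2 * s2;
        float fr = 24.0f * epsilon * (2.0f * s6 * s6 - s6) / r2;  // LJ force / r
        f.x += fr * dx;  f.y += fr * dy;  f.z += fr * dz;
    }
    force[i] = f;
}

// Crude Euler update with unit masses (real codes use velocity Verlet).
__global__ void integrate(float3 *pos, float3 *vel, const float3 *force,
                          int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    vel[i].x += force[i].x * dt;  vel[i].y += force[i].y * dt;  vel[i].z += force[i].z * dt;
    pos[i].x += vel[i].x * dt;    pos[i].y += vel[i].y * dt;    pos[i].z += vel[i].z * dt;
}

int main()
{
    const int n = 4096;                        // toy system; real ones are far bigger
    float3 *hPos = new float3[n];
    for (int i = 0; i < n; ++i)                // atoms on a simple 16x16x16 lattice
        hPos[i] = make_float3(float(i % 16), float((i / 16) % 16), float(i / 256));

    float3 *pos, *vel, *force;
    cudaMalloc(&pos, n * sizeof(float3));
    cudaMalloc(&vel, n * sizeof(float3));
    cudaMalloc(&force, n * sizeof(float3));
    cudaMemcpy(pos, hPos, n * sizeof(float3), cudaMemcpyHostToDevice);
    cudaMemset(vel, 0, n * sizeof(float3));

    // One femtosecond-scale step; a millisecond of protein motion needs ~1e12 of these.
    computeForces<<<(n + 255) / 256, 256>>>(pos, force, n, 1.0f, 1.0f);
    integrate<<<(n + 255) / 256, 256>>>(pos, vel, force, n, 1e-3f);
    cudaDeviceSynchronize();

    cudaFree(pos); cudaFree(vel); cudaFree(force);
    delete[] hPos;
    return 0;
}
```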

4. Godson-III is a next-generation multicore microprocessor (4 cores, then eventually 8) from the Chinese Academy of Sciences, which, if all goes well, should eventually compete with the rest of the world's great-great-granddaddy generation of microprocessors. A time machine to the year 1997!

That performance lag belies the INCREDIBLE fact that China has been putting resources into microprocessor R&D for, um, three years now, as opposed to the 35 to 40 years that's been the norm among the lock-step IC-manufacturing frenemies around the globe. I am not making that up: "20 years ago in China, the decision was made not to support R&D in microprocessors," explained the presenter. "Consequently, our microprocessor R&D started only recently." The chips will be fabbed by STMicroelectronics.

More than a few noses were out of joint about China's porous relationship with intellectual property, specifically MIPS, a chip architecture that has been used in microprocessors since the mid-1980s (it was invented by a company spun out of Stanford University). Earlier generations of Godson were known for their unfettered use of the "MIPS-like" architecture. To its credit, the Chinese government later paid for a MIPS license, so Godson-III's use of MIPS is legitimate. But there are still a lot of sore feelings about how quickly the Godson chip series has progressed, given its short R&D timeline. The implications are obvious.

China, perhaps deservedly, didn't seem to get much respect around these parts. Top comments overheard during and after the presentation:

Anonymous: [snickers and leans over to neighbor] "Think they fab 'em in Taiwan?"

Anonymous big deal: "I wonder how many patents they violated just taping these things out?"

Yet a third publicly considered the irony and absurdity of China making money on intellectual property.

But Real World Technologies wunderkind David Kanter puts it best: "Even if they were more or less 'copies' from other designs, it's still an impressive and significant feat."
