Tech Talk iconTech Talk

Ray tracing, Parallel Computing and a Bugatti Veyron

At last week's Hot Chips symposium, Nvidia founder and CEO Jen-Hsun Huang delivered the first keynote about the GPU computing revolution.

The keynote was definitely the highlight of the conference, but before I get all swoony over the incredible directional flame sprites and the finger-licking Bugatti Veyron their GPUs can render, first I need to pick on Nvidia a little.

That’s because the company was selling $200 3-D glasses at their booth. Or, they were trying to. I didn’t see anyone buy them, and if anyone did, they didn’t tell me about it.

The glasses were supposed to augment a very engrossing 3-D Batman game Nvidia had nakedly set up to lure passers-by. Apparently they created a deeper z-space by giving each lens a different refresh rate. Something like that. I put on the glasses and played for a while. It says something either about my unsophistication with games or about how unimpressive these glasses were that I failed to notice you had to actually turn them on. When someone pointed out my mistake and I flipped the on switch, the only difference I noticed was a pretty blue LED light.

But enough: let’s make with the swooning.

First, Huang took the audience back to February of 1993, when he'd just finished his master’s in electrical engineering at Stanford, and Nvidia was just a gleam in a venture capitalist's eye. For perspective, 1993 is so long ago that there was no need to have a PC on your desktop even if you were trying to get people to invest in your computer company. “If we had told our investors at the time that we’d be using the same hardware to play games and try to cure cancer," he said, "I am sure we would not have been funded."

“The GPU will likely be the parallel processor of the future,” he told the crowd. Computers are being driven to parallel computing because people can do magical things with them.

Nvidia’s teraflop-capable GPUs can, in fact, do some things that would have appeared to be magic to a person in 1993: the augmented reality of Monday Night Football, where the 3-D rendered line of scrimmage is projected onto the field so that the football players appear to stand on top of it. The flags rendered under the ice at Olympic hockey games; Ann Curry’s set during the 2008 election coverage. But you know all this stuff.

The point is this: The GPU has evolved faster than any other tech component, its complexity increasing from a few million transistors in 1994 to billions in 2009. That’s a thousand-fold increase in only 15 years.

What did they do with all that complexity? Shaders. Shaders and programmable pipelines made it possible for computer game designers to be artists. Let’s take an extreme example. Pacman and his attendant ghosts are lovable, clunkety and pixelated.


Let's leave aside the fact that these were animated with pixels instead of polygons, and that GPUs barely existed when Pacman was born. With the obscene amount of processing power GPUs now command, a programmer can create a specific mood for his or her game by automatically shading all scenes and objects with a hypercolor style, a sepia tint, you name it. The result can be anything from the eye-poppingly surreal textures of Super Mario Galaxy... to the otherworldly, overexposed dreamscapes of Riven or Myst.


Shading is great, but Nvidia wanted to take it to the next level: articulate not just the surfaces, but also the physics underlying what you see on the surface. Now you’re getting into computational visualization.

This is where ray tracing comes in. With ray tracing, an image is generated by tracing a path through each pixel in a virtual screen, and calculating the color of the object visible through it. Huang showed us what exactly ray tracing can do by way of a Bugatti Veyron, rendered with 2 million polygons worth of luscious, mouth-watering detail.
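That per-pixel loop is simple enough to sketch. Below is a minimal toy illustration of the idea (my own example, nothing to do with Nvidia's renderer): one ray per pixel, cast through a virtual screen and tested against a single sphere, with hits drawn as '#':

```python
import math

def trace(origin, direction, sphere_center, radius):
    """Return the distance to the first ray-sphere hit, or None for a miss."""
    # Solve |o + t*d - c|^2 = r^2 for t, a quadratic in t.
    oc = [o - c for o, c in zip(origin, sphere_center)]
    b = 2 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * c  # direction is normalized, so the quadratic's a = 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def render(width, height):
    """Shoot one ray per pixel through a virtual screen at z = 1."""
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            # Map the pixel to a point on the screen in [-1, 1] x [-1, 1].
            x = 2 * (i + 0.5) / width - 1
            y = 1 - 2 * (j + 0.5) / height
            norm = math.sqrt(x * x + y * y + 1)
            d = (x / norm, y / norm, 1 / norm)
            hit = trace((0, 0, 0), d, (0, 0, 5), 1.0)
            row.append('#' if hit else '.')
        image.append(''.join(row))
    return image
```

A real ray tracer recurses from each hit point toward the lights and other surfaces, which is what makes the Bugatti's interior lighting possible; this sketch stops at the first intersection.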

[This image was from the 2008 SIGGRAPH conference-- the image from Hot Chips isn't online yet but it's even prettier!]

Because ray tracing constructs the entire image from the computed trajectories of light rays bouncing from surface to surface, you can light the scene, place your object into it, and then do a “walk-through,” panning inside the car. There is no independent lighting inside the car; every detail you see is lit exclusively by ambient rays reflecting and refracting off the environment and streaming in through the windows. The lighting was so complex and subtle that you begin to understand how the GPU could harness physics simulations as impossibly complex as molecular dynamics.

This animation was running on three GeForce GPUs, each with almost 1 Tflop of processing horsepower. That’s about 2.7 Tflops to sustain animation that was very close to photorealistic. (1,500 to 2,000 instructions per component, all in HD; 100 shader instructions per component, 4 components per pixel [R, G, B, alpha], 1.5 flops per instruction on average, 60 frames per second, and so on: that adds up to 500 shader Gflops. And if this sentence makes you want to die, read "Data Monster," the tutorial on GPUs and graphics processing in the September issue of Spectrum.) But that only represents, Huang said, about 10 percent of the total math capability of a GPU.
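For the morbidly curious, the structure of that back-of-envelope estimate is easy to reproduce (a sketch with illustrative inputs, not Nvidia's actual accounting; note that at 1080p the 100-instructions-per-component figure works out closer to 75 Gflops, so the 500-Gflop total presumably assumes several hundred instructions per component):

```python
def shader_gflops(width, height, instr_per_component,
                  components=4, flops_per_instr=1.5, fps=60):
    """Sustained shader throughput implied by a per-pixel instruction budget."""
    pixels = width * height
    flops = pixels * components * instr_per_component * flops_per_instr * fps
    return flops / 1e9

# 1080p at 100 shader instructions per component:
low = shader_gflops(1920, 1080, 100)    # ~75 Gflops
# Roughly 670 instructions per component lands near the 500-Gflop figure:
high = shader_gflops(1920, 1080, 670)   # ~500 Gflops
```

The takeaway is less the exact number than how fast the product of resolution, per-pixel work, and frame rate blows up.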

Meanwhile, let’s do a little side-by-side comparison. Intel's vaunted Nehalem CPU, trotted out earlier that day: 3 GHz, 4 cores, and a bunch of other stuff, for a theoretical peak performance of 96 Gflops. That's great for general-purpose computing, but two orders of magnitude short of the 5 Tflops needed to run the Bugatti animation in real time. Nehalem, and the CPU in general, is designed for general-purpose computing, not for graphics.

Animators will be making increasingly photorealistic art for games: water, fire, clouds, smoke. Anything that obeys the laws of physics can be rendered to look real, provided you have the right algorithms and a monster amount of GPU muscle. To prove that point, he showed a nice video of water gently rippling in a sunlit breeze. It was more than photorealistic. But to do all that, you’re using a 3-D fluid solver that renders about 262,000 individual particles in agonizing detail to generate fluid motion. Each particle has its own shadow and motion blur, not to mention color, alpha, and so on.

But ray tracing has a way to go, Huang said. While it's great for photorealism, it’s not good for real-time rendering. The Bugatti, for example, was super-impressive in still frame, but when you moved around it, it got grainy and monochrome. Not for long: as soon as you stopped, the image filled in remarkably fast. If you're just making a movie, you can pre-bake the animation for as long as you want. For games, that's obviously a nonstarter.

To illustrate the true power of GPU computing, Huang showed us the directional flames Industrial Light & Magic did for the Harry Potter movie, which are apparently just unthinkable without monster processing power. Fire is amazingly complex because it’s alive: dynamic, moving, turbulent. Normally, to do fire special effects, animators use and sculpt sprites of real flames, but you can’t animate flame sprites directionally. The ILM fire simulator runs on top of CUDA, and the realistic flames shooting out of Dumbledore's hands are as good as any real-life flamethrower.

In addition, there are some things you can’t pre-animate, because you don’t know how they will play out at game time. For example, a really awful tackle in a football video game. Animators combine physics simulations with motion capture and morph between them, because while motion capture is convincing up to a point, a brutal tackle would be really painful to motion-capture.

When a program is written to take full advantage of the GPU, obscene improvements are the norm, and not just for graphics. A certain unnamed quantum chemistry program, for example, saw a 130X speedup when it was run properly on a GPU. It’s totally doable when an application is inherently parallelizable.
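"Inherently parallelizable" is doing a lot of work in that sentence. Amdahl's law makes the point concrete: whatever fraction of a program stays serial caps the speedup, no matter how wide the GPU. A quick sketch (the fractions and core count below are invented for illustration, not measurements of the chemistry code):

```python
def amdahl_speedup(parallel_fraction, workers):
    """Overall speedup when only parallel_fraction of the work
    runs across `workers` units and the rest stays serial (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# Even with 240 GPU cores, a program that is 99 percent parallel
# tops out around 71X; a 130X speedup needs roughly 99.65 percent.
print(amdahl_speedup(0.99, 240))    # ~70.8
print(amdahl_speedup(0.9965, 240))  # ~131
```

That's why the big wins go to applications whose serial bottlenecks are negligible.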

The point is this: Moore’s law as applied to Intel’s CPUs can reap performance improvements of, on average, 20 percent per year.

By contrast, over the next six years, Huang predicted, a co-processing architecture (ganging together a CPU and one or more GPUs) would enable a performance improvement of 570X. Understandably, later blog posts that referenced this figure had people's heads exploding. But keep in mind, this is for specialized applications: graphics, oil and gas exploration, seismic analysis, molecular dynamics, quantum chemistry.
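It's worth translating those two growth rates into the same units. Compounding 20 percent a year for six years gets you about 3X; hitting 570X over the same six years implies nearly tripling every single year (simple arithmetic on the figures above, nothing more):

```python
def annual_factor(total_gain, years):
    """Constant per-year multiplier that compounds to total_gain over `years`."""
    return total_gain ** (1.0 / years)

cpu_six_years = 1.20 ** 6             # Moore's-law CPU path: ~3X after 6 years
gpu_per_year = annual_factor(570, 6)  # ~2.88X every single year
```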

I assume ray tracing lends itself to parallel computing, and also that with a 570X performance improvement, this Bugatti will look photorealistic in real time by 2015.  But I think the real issue is whether that 570X speedup will help humanoid characters be truly photorealistic by 2015.

Huang wrapped up the talk by wowing us with all manner of Star Trek daydreams—the real-time universal translator, the smartphone app that can tell you what you’re looking at if you just snap a picture of it (WANT).

But even with all those goodies, I’m still stuck on the Uncanny Valley problem. I wonder how far we’ll have to go into physics simulations before we break humanoid characters out of the Uncanny Valley. Even the most advanced animations—Beowulf and Digital Emily—are convincing until they start talking. There’s something impossible to render accurately about teeth, I think. Digital Emily was perfect until she showed her teeth, and the sad thing is, when I mentioned this to Paul Debevec, he looked crestfallen and explained that they had modeled the teeth exactly.

The upshot is this: I don’t think we’re going to get out of the Uncanny Valley until we can do essentially molecular dynamics on every part of the human face, and that includes building the teeth from the ground up.

The good news is, if Huang’s prediction proves true, and GPU performance increases 570-fold over the next six years, that’s not a crazy thing to aspire to. Whether it’s worthwhile, that’s another story.



Beckton: 8 cores, uncore and hardcore

On Monday at the Hot Chips conference, Intel shared the first details of the Beckton processor, which is now called Nehalem EX. (Gotta love those crazy Intel naming conventions—Lynnfield, Beckton, Clarksfield. They sound like gated communities in hell.)

The 8-core, 2.3-billion-transistor Nehalem EX debuted in a pretty white dress and 18-button white gloves. Charlie Demerjian (formerly of the bright-red website The Inquirer, who has recently started up his own SemiAccurate) has the roundup of what makes Beckton interesting: “On the surface, Becton looks like a simple mashing together of two 4-core Nehalems. The specs are 8 cores, 16 threads, 4 DDR3 memory channels, 4 QPI links and 24MB of L3 cache all stuffed into a mere 2.3 billion transistors.”

“With 4 QPI links, 8 memory channels, 8 cores, 8 cache slices, 2 memory controllers, 2 cache agents, 2 home agents and a pony, this chip is getting quite complex… To make it all work, the center of the chip has a block called the router. It is a crossbar switch that connects all internal and external channels, up to eight at a time.

With that many available inputs and outputs, you start to understand why the focus of Becton was on the uncore, and how things get moved around the die and the system in general. Without all the effort put in, just doubling up a Bloomfield or Lynnfield wouldn't scale at all, much less to the 2,000-plus cores Intel is claiming Becton will hit.”

Leaving aside for the moment the hair-raising projection of 2000 cores—let’s discuss the uncore.

Because, with the terms multicore and manycore shoved firmly down marketers’ throats, it’s time for a new buzzword: Uncore. That’s right, uncore: a term so new, its Wikipedia page doesn’t even have references. (disclaimer for anyone who is going to burst into tears of didactic rage telling me the term is not new: that was dry, sardonic humor.)

The uncore is just everything on a microprocessor that is not the core, where the core means the machinery that executes instructions: the arithmetic logic unit, the floating-point unit, and the L1 and L2 caches. That leaves the uncore with things like the shared L3 cache, the integrated memory controller, and the QPI links.

For reasons that make me sleepy, the uncore must run at twice the memory frequency. But the rest of the Hot Chips conference was much more multicore than uncore.


Medical Alerts in 140 Characters or Less

"@trialx CT looking for diabetes trials in new york for 55 yr old male"

This message was posted today on the TrialX Twitter feed. TrialX, which started as an online matchmaker between clinical trial organizers and participants, has expedited this courtship even further by opening up its forum to Twitter, and it's making a nice profit while doing it (the company won an award at this year's New York City Entrepreneur Week business plan competition). Clinical trial investigators pay $99 per month to post experiments they're running. Patients, on the other hand, can search the feed or post their own medical details and wait for a response from the program.

In an article published yesterday in the journal Telemedicine and e-health (available here), Mark Terry catalogues the ways that TrialX and others in the medical field are using Twitter.

Clinical trials are only one of the areas getting a boost from Twitter, according to Terry. Many doctors have begun using it in their private practices and are offering advice to Twitter novices. A few medical Twitter pioneers out there are worth noting: Phil Baumann and Michael Lara have both blogged about how the community should be using Twitter to do things like update physicians on conference highlights and keep up to date with fluctuations in the blood glucose levels of diabetic patients.

Of course, the most beneficial aspect of Twitter is that it communicates in real-time. This will lend itself very well to enhancing disaster alerts. The CDC has jumped on the bandwagon with three different feeds, one dedicated to emergency notifications, one specifically for information about the flu, and one that more generally redirects traffic to the CDC site.

Gadgets to fix problems I didn't know I had

I get a lot of press releases touting new gadgets. Most are minor improvements on existing technologies—Sonos has a new touch screen controller, Griffin has new iPhone cases. After all, how many truly different household or handheld gadgets can there be?

Turns out there are at least two more than I thought. In the past week I’ve heard from two companies offering gadgets that solve horrible household problems I confess I’d never before worried about—flooding from toilet overflows and death from killer icicles.

A company called AquaOne Technologies would like to stop my toilet from overflowing, wasting water and ruining my floors in the process, with the H2Orb, a $130 gizmo that installs between the water line and the toilet. The device uses a low-power microcontroller from Texas Instruments and two wireless sensors: one in the tank that detects a slow leak, the other in the bowl that detects imminent overflow.
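That division of labor suggests pretty simple control logic for the microcontroller. Here's a guess at the decision it has to make (my own sketch of the behavior described, not AquaOne's actual firmware; the sensor names and threshold are invented):

```python
def valve_command(tank_leak_detected, bowl_level, overflow_threshold):
    """Decide whether the inline water valve should stay open.

    tank_leak_detected: bool from the tank sensor (slow leak)
    bowl_level: water-level reading from the bowl sensor
    overflow_threshold: level at which overflow is imminent
    """
    if bowl_level >= overflow_threshold:
        return "shut"  # overflow imminent: cut the supply immediately
    if tank_leak_detected:
        return "shut"  # slow leak: stop wasting water
    return "open"
```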

And Gutterglove would like me to install their heated gutter guard, the Gutterglove IceBreaker, that melts ice and prevents icicles from forming, pointing out that falling icicles can kill people. The Icebreaker uses a single self-regulating cable to generate heat. Pricing is available from local dealers.

How have I lived so long without these?

Photos: top left: H2Orb; bottom right: Gutterglove IceBreaker

Will Cooler Heads Prevail in Nanotoxicology Issue?

Earlier this month, in a blog entry bemoaning the rather polemical tone of the debate on nanotechnology and its environmental, health, and safety (EHS) issues, I referenced an article whose author was getting a little fed up with all this talk about the threat nanotechnology poses to our health when there was not one example of anyone being harmed by nanotech.

Unfortunately, no sooner had he spoken than the first example of such harm was chronicled in a new study. The European study linked seven cases of lung disease in China to working with nanoparticles; two of the cases resulted in death.

Andrew Maynard on his 2020 Science blog has done a thorough job of analyzing this report and what its ramifications may be for nanotech, so I haven’t much to add.

But I would like to highlight a point Maynard makes that may be missed by some alarmists and shouldn’t be:

At the end of the day, the study says little about the potential hazards of nanoparticles in general, and next to nothing about the possible dangers of nanotechnology.  If the sad deaths of the two workers and the lung disease of their five colleagues were used to press home a preordained nanotechnology agenda, it would amount to little more than a cynical misuse of the data—not a move that is likely to encourage evidence-based decisions on either workplace safety or safe nanotechnology.

As well as his balancing thoughts:

Yet to dismiss the study as flawed and irrelevant would be equally foolish.  The reality is that two workers died and nanoparticles were implicated, at a time when increasing numbers of nanoparticle-containing products are entering the market.  As the details of the study become known, people are going to want to know what the findings mean for them—whether there are risks associated with emerging nanotechnologies, and what government and industry are doing about it.  If nanotech-promoters downplay or even discredit the work, the move is more likely to engender suspicion than allay fears in many quarters.  And once again, evidence-based decision-making will be in danger of being sacrificed in favor of maintaining a set agenda.

I am not so sure either side of the EHS/nanotechnology debate will be as balanced in its approach to this issue as Andrew Maynard, but I am hoping they will.

Hot Town

Appropriately enough, the three weeks that the song Light My Fire spent as the #1 song on the Billboard 100 in 1967 were the dog days of that summer. That phrase, by the way, refers to

the ancient Romans, who noticed that Sirius rose with the sun from July 3 to Aug. 11. As the major star of the "Big Dog" constellation, Sirius is often called the "dog star." It's the brightest star in the nighttime sky. The Romans assumed that the two stars were acting in league to create the "days of great heat."

I was reminded of Light My Fire by an email from frequent Spectrum contributor Kieron Murphy. Another contributor, Brian Santo (author of our popular May 2009 feature, "25 Microchips That Shook the World" and its hilarious backstory sidebar, "Where in the World Wide Web Is Al Phillips?"), who also received the email, responded:

Even though I was still in elementary school in 1970, I had an intuitive grasp of what was going on with The Beatles, and The Stones, and Hendrix and Joplin and many of the other great artists I heard on the radio. But I never really got The Doors. More specifically, I never really got Morrison. I understand Krieger and Manzarek showed sparks of brilliance. I understand that Morrison's Lizard King schtick was dangerous/sexy. But IMHO, the man was a diffident poet/writer at best, and if he hadn't died (let's assume, for now, that he's really in the grave in Paris that I have actually visited) he would have been off the charts for years and doing "This is not your father's Oldsmobile" commercials with a couple of kids he'd been legally forced to adopt.

It's revealing that Brian would refer to the radio, and it's funny that he would picture the song in, of all things, a car commercial. According to Wikipedia,

when Buick wanted to buy the piece for use in a 1968 TV commercial ("Come on, Buick, light my fire") and Morrison, who had been out of town, learned that other group members agreed, Morrison called Buick and threatened to have a Buick smashed with a sledgehammer on a TV show should the (presumably ready) commercial be aired.

I'm a little older than Brian - not a lot, but perhaps just enough to feel very differently. I was sitting with a friend in a pizza place on 37th Road in Jackson Heights, Queens, my first week of 7th grade, when I first heard "Light My Fire" on the radio. The pizzeria was, literally, ovenlike, the pizza was thin, blistering, and delicious; the time was one of those proverbial fry-an-egg-on-a-New-York-sidewalk afternoons; the song was just as fiery hot and yet slow and lyrical; it was clearly about sex, something that, as a twelve-year-old, I was coming to understand the importance of, if I didn't quite understand it itself; the lyrics were kind of silly but the melody was big and ballad-like and beautiful and it went on forever — I had never heard a seven-minute-long song on the radio, and as the keyboard solo gave way to the guitar solo it seemed impossible to believe it was still the same song playing. It was, I now realize, opulent and yet not in the least self-indulgent. In the pizzeria, my friend and I both stopped talking somewhere during the guitar solo and just listened.

In a 13-minute radio story in 2000, NPR reporter Guy Raz said that Light My Fire "broke the mold of the conventional hit pop song" when it went to the top of the charts.

Light My Fire clocked in at just over seven minutes. No one in the music industry believed it could work at that length.

John Densmore, the Doors drummer, told Raz, "In those days, if you wanted to be on AM radio, you had to be at three minutes." Raz says the band cut out the solos and "whittled it down to three minutes. But fans who owned the album swamped radio stations with requests for the full seven-minute version."

I wasn't one of those album owners — not yet. Sixteen months later, I turned 13, and of the three birthday albums I got from my friends, "The Doors" was the only one I had requested. The other two were Cream's "Wheels of Fire" and the Beatles' White Album — by then the psychedelic movement was in full sway, led by The Doors' eponymous album and the Beatles' Sgt. Pepper's Lonely Hearts Club Band. Sure, Sgt. Pepper was the first rock album to win Album of the Year at the Grammy Awards and Rolling Stone magazine has named it the greatest album of all time. But Light My Fire was the song that changed radio forever. Guy Raz again:

No one had ever heard a song like it - seven minutes, free-form, psychedelic, Light My Fire was dark and brooding, haunting and romantic, at the same time. The song is a demarcation point in rock 'n' roll history. It shattered the acceptable boundaries of popular music. Themes of love, mortality, intoxication, and recklessness. All offer a glimpse into the turbulent era that was to come soon after its release.

With satellite radio, digital radio, and podcasts, radio is metamorphosing today, as it did in the late 1960s. The changes today are technological, though, while back then, as AM gave way to FM, radio — and music itself — became both more personal and more political.

What's different between now and then is how important radio was - more important, to music at least, than television or any other medium. (When the Doors or the Beatles appeared on Ed Sullivan, it certified a popularity that had been created by radio.) In 10 or 15 years, surely all radio programming will be delivered by the Internet, which will be given the AM and FM frequencies. It seems odd to think that once, the term "wireless" was synonymous with AM radio, and that the two leading communications technologies at the time of the seminal U.S. Communications Act of 1934, radio and telephony, will be digital afterthoughts, little more than a small fraction of the packets riding the TCP/IP radio waves.

For me and my friends of forty years ago, our favorite DJs defined more than our musical tastes, they helped us think about drugs and sex, philosophy and fashion, war and patriotism. They sometimes set the very calendar we lived by. I remember how, well into the 1970s, each year my friends and I would wait impatiently for the first hot late-spring day. The radio would be set to 102.7. WNEW-FM's afternoon DJ Dennis Elsas would come on the air and play The Lovin' Spoonful's Summer in the City, and so would begin the dog days of summer.


Hot Chips for Games

IMAGE CREDIT: Wikimedia Commons

I'll be covering the 21st annual Hot Chips conference for the next couple of days.

Hot Chips is an industry nerd-off that brings together designers and architects of high-performance hardware and software amid the Spanish colonial architecture and rarefied air of Stanford University every August. The logic-heavy PowerPoints are interspersed with a few keynotes to remind everyone what’s at stake in all these mind-numbing comparisons of SIMD vs. MIMD.

One of the big ideas this year appears to be the future of gaming. On Tuesday, Intel’s Pradeep Dubey will chair a keynote presented by Electronic Arts chief creative officer Rich Hilleman.

When I first saw the title of the keynote, “Let's Get Small: How Computers are Making a Big Difference in the Games Business,” I pinched my arm because I thought for a second I was experiencing some horrible Life on Mars style delusion/time travel. Computers making a big difference in the games business? Oh, you think so, doctor!

But it turns out to be more complicated than the title indicates. As usual, it all goes back to Moore’s Law.

Moore’s law says that as transistor size keeps shrinking, more of them can be squeezed onto a given area of silicon, and as long as the price of that silicon remains the same, those transistors will just get cheaper as they get smaller. That means the chips will get cheaper too. That’s why you can get so much processing power for ever-decreasing amounts of outlay.

These days you can get, in your mother’s basement, the kind of processing power that just 10 years ago was reserved for the Crays and Blue Genes and other monstrosities available only to government research facilities.

However--the cost of developing a PC or console game increases exponentially alongside Moore's Law. David Kanter, my go-to guru at Real World Technologies explained it thus:

Moore's law says transistors double in density roughly every 18 months. Graphics performance scales right along with Moore’s law, which means graphics performance, too, roughly doubles every 18 months.

And when graphics performance doubles, you need higher-resolution artwork to render in a game. At that point, frame rates over 60 FPS aren’t helpful; what you really want is more detail and new effects to wow your gamers.

And if you want that higher-resolution artwork, you need to hire more artists. That rule also tracks with Moore’s law—more and more artists are necessary for each generation of chip.

The upshot is that the cost of developing artwork scales with transistor counts for GPUs, which are themselves driven by Moore's Law. This means that the cost of big-name games—like Grand Theft Auto, Quake, Doom, and the like—increases exponentially. That's a big problem for developers whose pockets are shallower than EA’s.
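Kanter's argument is easy to put numbers on. If content cost tracks graphics performance, and graphics performance doubles every 18 months, then over a six-year console generation the art budget for a cutting-edge title grows sixteen-fold (the cost-tracks-performance assumption is his; the arithmetic below is just an illustration):

```python
def cost_multiplier(years, doubling_period_months=18):
    """Growth factor if art/content cost doubles with graphics
    performance, which doubles every `doubling_period_months`."""
    return 2.0 ** (years * 12.0 / doubling_period_months)

print(cost_multiplier(6))  # one six-year console generation: 16.0
```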

And that is one reason the games market for phones is exploding. For these little rinky-dink displays (iPhone: 480 x 320 pixels; my phone: 220 x 176), development costs are so low compared to a PC or console that anyone can make a game, maybe even in their spare time.

Back in March, at the 2009 Game Developers Conference, ex-EA developer Neil Young (founder and CEO of ngmoco) delivered a keynote called “Why the iPhone just changed everything.”

He said that the iPhone--and the class of devices it represents--is a game changer on the order of the Atari 2600, Gameboy, PlayStation One, or Wii. He predicted that the iPhone will “emerge a gaming device as compelling as existing dedicated handheld game devices.”

Kanter suspects that at Tuesday's keynote, EA and Intel may discuss how ray tracing could make developing artwork easier and less expensive. Thoughts? Comments? Predictions? Do you buy the idea that gaming is splitting the world into empire of AAA game development vs. rebellion of mobile phone developers?


Business Articles on Nanotechnology Take on a Familiar Formula

I am simultaneously amazed and concerned when I see mainstream publications tackle the issue of business and nanotechnology. Last month we had the NY Times informing us that things were looking up for the commercialization of nanotech because industry and academic research centers were, it seemed, beginning to team up. What a novel idea.

Of course, this penetrating analysis followed the Grey Lady’s previous prediction over 18 months earlier that nanotech was going to finally experience its long-awaited boom with a series of IPOs…that never came.

But the latest bit of business journalism I’ve read on nanotech comes from one of the NY Times’ subsidiary publications, the Boston Globe. What is fascinating about this one is how it reads like a “Mad Lib” built from nanotech articles written in 2001.

We get insights like: the nanotechnology market in 2015 will be worth (fill in blank with number) trillion. They even find ways to throw in all the favored terms of 2001, like “nanobots,” and size definitions such as “a nanometer, equal to one-billionth of a meter.”

So, why am I amazed and concerned at the same time? I am amazed that we can read articles that manage to repeat articles written nearly a decade before, or that have such a flimsy grasp of the mechanisms of commercializing emerging technologies that they believe industry/lab partnerships are actually an innovative idea. And I am concerned because I also read business articles from publications like this on topics that I know far less about than nanotech. Should I be worried about those? I think maybe, yes.

DNA Scaffolding Technique Promises Sub-22 nm Lithography

In a paper to be published in next month’s Nature Nanotechnology, researchers at IBM’s Almaden Research Center and the California Institute of Technology describe a way to use DNA origami structures as quasi circuit boards, or scaffolds, for precisely assembling components at resolutions as small as 6 nm.

The attraction of the process is that it works with lithography techniques already in use. Spike Narayan, manager of Science & Technology at the IBM Almaden Research Center, is quoted in the IBM press release:

“The cost involved in shrinking features to improve performance is a limiting factor in keeping pace with Moore’s Law and a concern across the semiconductor industry,” he says. “The combination of this directed self-assembly with today’s fabrication technology eventually could lead to substantial savings in the most expensive and challenging part of the chip-making process.”

The BBC’s coverage of the same story followed Narayan’s quote above with the rather sobering reality that it could take as long as 10 years to see this technology integrated into the semiconductor industry.

Whenever you see the figure “ten years” used in future projections, you could just as easily add another zero to that number. It’s sort of like saying, “Who knows?”

Give Social Networking the Finger

Fingerprint authentication isn't just for security anymore. AuthenTec makes fingerprint sensors for enterprise computers, and until recently its main clients have been the military and companies that really need to keep their laptops secure.

In my last post about AuthenTec, I swooned about how they go the extra mile to protect you from finger-truncating impersonators and eyeball-gouging identity thieves. (The company doesn’t simply use a picture of the top layer of the skin; it uses radio frequencies to measure the valleys and ridges of the fingerprint beneath the outer layer of skin, within the live layer. Because it measures these RF fields within the live layer, a finger that has been separated from its owner won’t set up an RF field when it contacts the sensor. Without the attached owner, there's no pattern, and the finger is no good.)

Today, Authentec announced that they’re putting those military-grade fingerprint sensors into netbooks. Nothing says top secret like a fluffy little netbook, right? It’s the king of consumer-only applications, a cross between a lightweight laptop and a big-screen iPhone.

Here’s where the fingerprint sensor goes to work for consumer netbooks. Instead of protecting your identity a la The Bourne Identity or Angels & Demons, in your netbook the sensors take on a completely different role: putting your fingerprints to work on more mundane tasks.

It’s not just the one fingerprint that distinguishes you; the sensor easily differentiates among all ten of your fingerprints. Their software (called TrueSuite) lets you assign different fingers to different functions, including accessing facebook or twitter accounts, or your email. The program can even condense processes that would normally take multiple steps into the swipe of a single finger.

For example, say you want to log into your facebook account. Normally, you wake up your sleeping, locked laptop, type in your OS's password, open your browser, navigate to facebook, and type in your username and password.

With the fingerprint sensor, you skip 4 of those 5 steps. Instead of doing any of the above, you swipe your designated finger. The software reads your finger, and takes care of the rest. You set how it reacts to the swipe of your ten fingers: open gmail, facebook, twitter, flickr, picasa--all you have to remember is what job you gave which finger.
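Under the hood, this kind of finger-to-action assignment boils down to a lookup table. Here’s a minimal sketch in Python; the finger names and actions are hypothetical, since Authentec hasn’t published TrueSuite’s actual API, and in the real software the mapping is set up through a GUI:

```python
# Hypothetical finger-to-action table: each enrolled finger is bound
# to one task. This is an illustration, not an actual TrueSuite API.
FINGER_ACTIONS = {
    "left_index":  "open facebook and log in",
    "right_index": "open twitter and log in",
    "right_thumb": "open gmail and log in",
}

def handle_swipe(finger_id):
    """Return the action assigned to the swiped finger, if any.

    A real system would then launch the browser and fill in stored
    credentials; here we just report what would happen.
    """
    return FINGER_ACTIONS.get(finger_id, "unrecognized finger")
```

The point of the table is that the user only has to remember the finger-to-job assignment; everything else (passwords, navigation) is stored alongside the mapping.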


Authentec is also working on the LED lights that surround the sensor, which glow when giving you a notification. Normally these would have limited use: if you swipe the wrong finger or you’re the wrong person, you get a red blinking light; if you do it right, you get a green light. But the Authentec people devised a few new uses for these LEDs. You can set your own colors the same way you set the actions for your fingers.

Say you’re taking some time out of your busy schedule for an important episode of Walker: Texas Ranger. Your laptop has long since gone to sleep and locked itself. To find out if you have mail, you’d normally have to stand up (all the way!), walk across the room (nooo...) and wake up and unlock your computer. That could take up to 10 seconds! But, with this app, you can glance across the room and see that you have a red flashing LED, which means there is a message waiting from your boss, or a blue flashing light indicating a note from your mother. Granted, you still have to move the muscles that control eyeball directionality, but there’s no such thing as a free lunch.

This is probably the best thing that could have happened to fingerprints. Taking fingerprint authentication out of highly secure environments and repurposing it for more mundane applications makes sense, because within 20 years fingerprints will be almost meaningless as a security measure.

The two main things that will undermine fingerprint security at every turn:
1) Poor administration. Read this Slashdot post to understand why: biometrics are just databases, and databases need to be securely and competently administered.

It's too difficult to manage a 2000 or even 200 member authentication database. The simplest administration is just not done because it is tedious or takes too much time. ... You have the human being that lets everyone into the building, security guards that think you work there because they've seen you before, meeting rooms filled with all-open network connections and a bunch of people that write down their password on a sticky note, even if it's as simple as their husband's name, brand of monitor or keyboard or something else.

2) Time. The younger you start, the less secure your fingerprints inherently become: "Many people are trying to regard biometrics as secret but they aren't. Our faces and irises are visible and our voices are being recorded. Fingerprints and DNA are left everywhere we go and it's been proved that these are real threats." Slashdotter Kadin2048 commented that

The fact that you can't change your fingerprints is a real problem if they start to use biometric systems for authentication. Particularly since there are biometric-ID systems used by children: in my area, they're currently testing and preparing to roll out a school-lunch system that uses fingerprints (it's a debit system -- no more stolen lunch money, and no way to tell who's on the subsidized lunch program or not). When you start using biometrics that young, you have a long time for them to possibly get compromised and spoofed.

The fingerprints you have, you own for life, so any system has to be built on the assumption that they will be compromised. In particular, future systems should be designed knowing that some users will arrive with all 10 fingerprints already compromised. The solution isn't to come up with more biometric identifiers to use as secrets; the solution is to stop using them as secrets at all.

Biometric identification can be used for convenience or for security, but it's probably best not to try both.

