Technology Winners of 2011

IEEE Spectrum chose these tech projects, all new and relatively untested, as having the greatest chances of success in the coming year

A model wearing Laster's smart spectacles. Photo: Laster Technologies

Every year the staff of IEEE Spectrum searches the globe for technology projects that are practical, reasonably well financed, and close enough to market to offer a fair target—before they’re actually commercialized. That way we force ourselves to go out on a limb, thus avoiding the tech journalist’s standard temptation to shoot fish in a barrel. Here is this year’s catch—five projects we think will do well.

We’ll present awards to the two winners chosen by our reader panel on 3 May at the ACE awards ceremony, conducted jointly with EE Times.

Laster Technologies’ Smart Spectacles
These glasses let users dispense with a computer monitor altogether

A decade ago, tech sages predicted that many portable consumer devices would soon be integrated into the mobile phone. For once, the pundits were right: Today, of course, elements of the old PDA, MP3 player, digital camera, portable e-mail, Web browser, and camcorder now live on our smartphones.

Next in line, says a French company, are eyeglasses. When they get smart, we’ll be able to cut our computers loose from monitors, so that the entire package becomes really portable. We’ll also be able to project graphical information onto the world by augmenting reality, as it were.

Later this year, Laster Technologies—based in the Paris suburb of Gif-sur-Yvette—says it will release its first-generation SmartVision spectacles, which will project VGA-resolution (or better) color images on the inside of the glasses. With a smartphone or tablet computer powering it (and connected via Wi-Fi or Bluetooth), SmartVision could, in essence, begin to put a menu bar on reality. Imagine, for instance, doing Minority Report–like information surfing without needing any fancy holographic displays or a head-up GPS interface that sits at the edge of your field of view.

The Laster prototype now on the market, called Pro Mobile, offers a first glimpse of the technology behind Laster’s consumer-market SmartVision eyewear. Reviewed by IEEE Spectrum last year, Pro Mobile is now aimed at industrial and medical applications.

A mechanic performing maintenance or an engineer working on complex wiring can use Pro Mobile to overlay a schematic of the engine or motherboard that stays fixed in space as he moves his head. A mini camera, placed between the eyes, follows his motion; the miniaturized equivalent of a late-1990s PC in the glasses’ thick arm powers the display.
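For a feel of how such a world-locked overlay works, here is a minimal Python sketch (not Laster’s code). It assumes a hypothetical head tracker that reports yaw and pitch in degrees and a display whose field of view maps linearly to pixels:

# Toy sketch: keep an AR overlay fixed in space as the head turns.
# Tracker fields and display figures are assumptions, not Laster specs.
DISPLAY_W, DISPLAY_H = 800, 600      # VGA-class microdisplay
FOV_H_DEG, FOV_V_DEG = 40.0, 30.0    # assumed fields of view

def overlay_position(anchor_yaw, anchor_pitch, head_yaw, head_pitch):
    """Pixel (x, y) at which a world-anchored overlay should be drawn."""
    px_per_deg_x = DISPLAY_W / FOV_H_DEG
    px_per_deg_y = DISPLAY_H / FOV_V_DEG
    # Shift the overlay opposite to the head motion so it appears pinned.
    x = DISPLAY_W / 2 + (anchor_yaw - head_yaw) * px_per_deg_x
    y = DISPLAY_H / 2 + (head_pitch - anchor_pitch) * px_per_deg_y  # screen y grows downward
    return x, y

# A schematic anchored straight ahead drifts left on screen as the head turns right.
print(overlay_position(0.0, 0.0, head_yaw=5.0, head_pitch=0.0))  # (300.0, 300.0)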

According to Zile Liu, Laster’s CEO and cofounder, this setup costs several thousand euros. He adds that future versions of Pro Mobile and SmartVision will also track a wearer’s movements via accelerometers and GPS technology.

The images projected onto the clear glasses appear to the viewer to be sitting in space a few meters away. The device exploits an optical illusion known as a virtual image, the trick behind mirages and those wok-shaped parabolic-mirror toys sold at science-supply stores. And Liu’s company, which he says has invested more than a million euros in its technology since 2005, has also developed a gesture-based interface—like the Xbox 360’s Kinect.
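The geometry behind that floating image is textbook optics. A back-of-the-envelope calculation with the thin-lens equation (the focal length and display position below are assumptions chosen for illustration, not Laster’s specifications) shows how placing a microdisplay just inside an optic’s focal length produces a virtual image roughly 2 meters away:

# Thin-lens equation: 1/f = 1/d_o + 1/d_i.  A negative image distance
# means a virtual image.  Numbers are illustrative, not Laster's.
def image_distance(f_m, d_object_m):
    return 1.0 / (1.0 / f_m - 1.0 / d_object_m)

f = 0.025      # 25 mm focal length (assumed)
d_o = 0.0247   # display sits just inside the focal point (assumed)
d_i = image_distance(f, d_o)
print(f"virtual image about {abs(d_i):.1f} m away")  # about 2.1 m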

Liu expects all of these capabilities to be available in SmartVision by the end of the year, in the 300 to 500 euro price range.

"Our product will change the paradigm of the computer," says Liu. "Today the size of a PC is due to the size of the screen." Tomorrow, Liu says, a wearable visual interface like SmartVision could eliminate computers’ footprints altogether.

Just how big the "augmented reality" marketplace will get depends on whom you consult. ABI Research expects it to hit US $360 million within three years, and Juniper Research says it’ll get to $1.5 billion within four.

Augmented reality (AR) software companies, like France’s Theoris and Total Immersion, say Laster is making their work realizable.

"Industries are eager for AR," says software developer Laurent Chabin, from Total Immersion. "We’ve been hearing of projects since the 1980s."

"We need hardware manufacturers like Laster," says AR manager Fabrice Malaingre of Theoris. "The capabilities of [Laster’s] AR glasses...are one of the triggering factors that encourage the industry to jump today."

Seabed Rig Robotic Oil Driller
Robots do it better, down where it’s wetter

According to the U.S. Geological Survey, 40 percent of the world’s untapped oil and natural gas reserves are in the Arctic or in very deep water. Last year’s Deepwater Horizon spill cast in stark relief how technologies developed for drilling through 1000 feet or less of sea can perform poorly and unpredictably in deeper waters.

A Norwegian company says it’s developed a smarter and safer deep-water option. Instead of tethering a surface drilling platform to an undersea site a mile or more below, Røyneberg, Norway–based Seabed Rig is developing robotic technology to put the platform, pipe, and drill right on the seafloor itself. That way, the system can pump the oil and natural gas to ships waiting on the surface. What’s more, there’s no need to fix the apparatus to a particular spot on the ocean floor. The seven-employee start-up used its US $9 million development budget to make a prototype rig, unveiled last year.

CTO Kenneth Mikalsen came from a career in industrial robotics. He moved to oil and gas drilling in 2007, at Seabed Rig’s founding, when he discovered that this conservative industry needed the same upgrade that factories underwent in the 1980s and 1990s.

"You can find more automation in a modern farm today than you can on a drilling rig," Mikalsen says. As a result, he says, Seabed Rig doesn’t so much require brave new tech as much as it simply needs to get an antiquated industry to catch up to the 21st century: "We believe most of the technology we need to fully automate this drilling operation is out there. We just need to go out and get the right technologies and put them together." Those technologies include robotic arms with semiautonomous robotic control software developed for NASA deep-space and planetary missions.

Seabed Rig is now preparing for its rig’s phase three: undersea testing. Mikalsen says the general approach is something like that of a Mars rover. The rig itself can perform basic tasks, like driving its drill through layers of sediment. This spares humans the drudgery and danger, he says. (Accidents and spills are often the result of human error brought on by excessive fatigue, he notes.) And rather than spending months crammed into a tiny rig in the middle of the ocean, controllers can instead monitor the rig from an onshore site. The operators need to take the rig’s remote controls only when selecting the right drill joint, picking up a fallen pipe, or handling any other problem today’s robots can’t manage.
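That division of labor follows a straightforward supervisory-control pattern: the rig executes routine steps on its own and escalates exceptions to a human. The Python sketch below caricatures the idea; the task names and handlers are invented for illustration and are not Seabed Rig’s or Energid’s software:

# Toy supervisory-control loop: routine work is automated, anything
# unusual is handed to the onshore operator.  All names are hypothetical.
ROUTINE_TASKS = {"drill_section", "circulate_mud", "log_sensors"}

def run_shift(tasks, execute, escalate):
    for name in tasks:
        if name in ROUTINE_TASKS:
            execute(name)      # rig proceeds autonomously
        else:
            escalate(name)     # human takes the remote controls

run_shift(
    ["drill_section", "circulate_mud", "pick_up_dropped_pipe"],
    execute=lambda t: print("rig handles:", t),
    escalate=lambda t: print("operator takes over:", t),
)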

Neil Tardella of the Cambridge, Mass.–based Energid Technologies Corp.—developer of the rig’s software, which has been tested on NASA and U.S. Department of Defense robots for more than a decade—says it takes a small and nimble team like Seabed Rig to shoulder the kind of R&D risks needed to shift a world of sea drilling that today relies on decades-old technology and manual labor.

"Once this technology is proved, it’ll revolutionize the industry," Tardella says.

ClariPhy’s Digital Processor
To a video-saturated world, it brings 100 gigabits per second of power

Internet usage is a monster that just keeps growing. Last year it rose 62 percent, while the bandwidth to carry that traffic increased by just 55 percent, according to the Internet tracking firm TeleGeography.

One clever communications chip company wants to give the Net a little breathing room. Aiming for the bandwidth trifecta—fast, cheap, and low-power—ClariPhy Communications, based in Irvine, Calif., is now preparing a chip that will squeeze more Netflixes, Mozys, and Carbonites through Internet pipes without anyone’s having to lay another inch of fiber.

The new chip will cram 100 gigabits per second through an optical fiber. That’s enough data headroom to transmit the entire printed contents of the U.S. Library of Congress in 15 minutes.
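That comparison roughly checks out if you take a commonly cited estimate of about 10 terabytes for the Library’s printed text (an assumption used here, not ClariPhy’s figure):

# Rough sanity check on the Library of Congress comparison.
line_rate_bps = 100e9                   # 100 Gb/s
seconds = 15 * 60                       # 15 minutes
terabytes = line_rate_bps * seconds / 8 / 1e12
print(f"{terabytes:.1f} TB moved")      # about 11 TB, versus ~10 TB of printed text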

Paul Voois and Norman Swenson, the company’s CEO and CTO, respectively, saw this day coming when they founded ClariPhy in 2004. Then, as now, fiber-optic trunks carried the bulk of Internet traffic. Optical communications, however, were still being done with simple electronics: A laser pulse meant 1; no pulse meant 0. But as the number of users of YouTube, Hulu, and Facebook mushroomed, the Internet needed more than simple electronics in its fiber backbones.

Today many Internet-bearing fiber cables put 4 bits on each laser pulse—one on each perpendicular polarization state and one on each hump of the wave’s phase (that is, on the sine and cosine parts). That way, today’s best digital signal processing (DSP) chips can pack up to 40 gigabits onto 10 billion laser pulses per second. But as even some 40-Gb/s networks begin to split at the seams, imperfections in the optical fibers themselves have begun to show.

Reason: Each optical fiber treats different polarizations differently, leading to varying travel times for the bits. The problem is particularly troublesome when the optical repeaters are widely separated; some will soon be up to 3000 kilometers apart. How, then, to retrieve those pristine 0s and 1s when one optical signal starts to overlap another?

"We’re the cleanup crew," says Swenson. ClariPhy’s 40-Gb/s DSP chip mathematically analyzes the electronic signals produced from the optics—using standard silicon complementary metal-oxide-semiconductor designs—to extract the same stream of bits that entered the fiber-optic line thousands of kilometers away.

To ramp the bandwidth up to 100 Gb/s, ClariPhy maintains the 4-bit-per-laser-pulse standard but increases the data traffic speed 2.5 times. The vast increase in throughput raises some complications. "When you get to 100 [Gb/s], you’re really pressing the limits of the signal-to-noise ratio," Swenson says.
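In round numbers, the scaling works out as follows (the symbol rates are inferred from the figures in this article, not taken from ClariPhy’s data sheets):

# Bits per pulse x pulses per second = line rate.
bits_per_pulse = 2 * 2                     # 2 polarizations x 2 phase quadratures
pulses_per_s_40g = 10e9                    # 10 billion laser pulses per second
pulses_per_s_100g = 2.5 * pulses_per_s_40g

print(bits_per_pulse * pulses_per_s_40g / 1e9, "Gb/s")    # 40.0
print(bits_per_pulse * pulses_per_s_100g / 1e9, "Gb/s")   # 100.0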

That’s where ClariPhy introduces some clever tricks. After every 4 data bits, on average, the sending ClariPhy DSP chip encodes a single parity bit that specifies whether the sum of the latest train of 0s and 1s is an odd or even number. That means that 20 percent of the jumbled mess of waveforms consists of error-correcting information. And while the error correction is mixed in with the actual data, sophisticated statistics in ClariPhy’s on-chip algorithms can extract all the needed information. True, this method begins by shaving 20 percent off the raw bandwidth, but it ends by enabling a 2.5-fold speed increase. Who’s complaining when suddenly the world just got a lot faster?
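The overhead arithmetic is easy to verify: one parity bit for every four data bits means one bit in five on the fiber, or 20 percent, is error correction. The sketch below uses a plain even-parity bit as a stand-in for ClariPhy’s far more sophisticated coding:

# One parity bit after every 4 data bits (simple even parity as a
# stand-in for ClariPhy's actual error-correcting code).
def add_parity(bits, block=4):
    out = []
    for i in range(0, len(bits), block):
        chunk = bits[i:i + block]
        out += chunk + [sum(chunk) % 2]   # parity bit makes each block's sum even
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0]
coded = add_parity(data)
print(coded)                       # 10 line bits carry 8 data bits
print(1 - len(data) / len(coded))  # 0.2 -> 20 percent of the line is parity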

Video: IBM

IBM’s Watson
It’s harder to get a computer to win at "Jeopardy!" than at chess—and more useful, too

When IBM first engaged in a man-machine competition, it chose what might have seemed the hardest imaginable goal: beating Garry Kasparov at chess. Indeed, when its Deep Blue chess machine won the match, in 1997, pundits spoke of a watershed event in the development of artificial intelligence.

In fact, IBM showed far greater moxie in February when it went after (and convincingly beat) champions of the TV game show "Jeopardy!" Here the object was to take a given answer and infer what must have been the question; such divination comes closer than chess cogitation to what humans do well—and what computers do badly. It’s also closer to the things that companies want artificial intelligence programs to do.

Here’s an example of a particularly devilish "Jeopardy!" clue. "Secretary Chase just submitted this to me for the third time. Guess what, pal. This time I’m accepting it." The clue’s punning category—"Lincoln Blogs"—already puts the humorless computer at a disadvantage. No algorithmic cleverness could possibly figure out that the clue represents a tongue-in-cheek, fictionalized blog entry by America’s 16th president.

According to IBM’s Eric Brown, an information retrieval expert, blunt computing force must in such cases substitute for human understanding.

"This clue is hard, right from the start," Brown says. "It doesn’t even identify the kind of thing being looked for." To unravel such intentionally obscure language, Watson, with its million lines of code and 200 million searchable pages of information, first breaks down each clue into its noun and verb phrases.

When IBM designed Deep Blue, it aimed for sheer performance. This time around, though, the company’s main goal wasn’t to make its machine ultrafast and ultraparallel (although its 2880-core, 80-teraflop system running Watson isn’t exactly a tortoise either). Instead, the Watson team wanted their machine to think smarter.

In 2007, the company brought together experts in software, machine learning, and linguistics at IBM’s Thomas J. Watson Research Center, in Yorktown Heights, N.Y. They spent a lot of time working out algorithms capable of handling slippery "Jeopardy!" clues.

"Tokenizing" the Lincoln clue, Brown says, helps Watson discover that the clue’s focus must be something that can be submitted and accepted. A database query of Watson’s preindexed corpus of reference texts might bring up related but still inconclusive information: The person was Abraham Lincoln’s Secretary of the Treasury Salmon P. Chase. But the words "third time" constitute a linchpin designating Secretary Chase’s action in question. Answer: "What is a resignation?"

Equally important, Watson weighs its confidence in every answer; during the match, it often chose not to buzz in after Alex Trebek finished reading the "Jeopardy!" clue aloud. (In "Jeopardy!", wrong answers cost players the same amount of money as right answers win them.)
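The buzz-or-pass choice is, at bottom, an expected-value calculation. With symmetric payoffs, buzzing pays only when the machine’s confidence clears 50 percent; Watson’s real thresholds also weigh the state of the game, so the sketch below is only the simplest case:

# Buzz only when expected winnings are positive.  With symmetric payoffs
# (win +V, lose -V), expected value = V * (2 * confidence - 1).
def should_buzz(confidence, clue_value):
    return clue_value * (2 * confidence - 1) > 0

print(should_buzz(0.87, 800))   # True  -> buzz in
print(should_buzz(0.42, 800))   # False -> stay silent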

Watson and its DeepQA engine, Brown says, could someday find applications helping professionals puzzle out everything from tech-support problems to medical diagnoses.

Eduard Hovy, research associate professor of computer science at the University of Southern California, says Watson’s innovations could jump-start a host of computerized "deep reading" applications. "Over the next decade we will see systems increasingly able to read texts and extract more and more content," Hovy says. "In the shorter term we might see things like more accurate Web searches—giving you the answer rather than 500 documents."

Video: Willow Garage

Willow Garage Personal Robotics
Open source means giving away most of the store—to make a profit

In May, to the pomp and circumstance of Styx’s prototechno song "Mr. Roboto," 11 small-refrigerator-size robots graduated from the Menlo Park, Calif., headquarters of their maker, Willow Garage.

Heading into a world that expects to see, according to one estimate, a US $19 billion personal robotics marketplace by 2017, these dual-armed automatons (named PR2 to denote the second-generation personal robot produced by the company) wheeled into the welcoming arms of representatives from their new host labs. Research institutions like MIT; Stanford; University of California, Berkeley; the University of Tokyo; and Katholieke Universiteit Leuven, in Belgium, all brought home their own PR2s, which today sell for $400 000 per pan/tilt head.

Like the 1973 Xerox Alto computer that seeded the PC revolution, the PR2 is an ambitious research platform. It runs an open-source operating system that the company calls ROS, for Robot Operating System.

Willow Garage CEO Steve Cousins says the Alto serves as an instructive example for his 40-person company. Many of the Alto’s core innovations—windows, icons, folders, a mouse, and pointer—ended up making companies other than Xerox rich. But Xerox is primarily a document company. And the same famed Palo Alto Research Center that spawned the Alto also developed the laser printer, generating profits for Xerox that by themselves, Cousins says, probably more than covered PARC’s costs. Not a bad business model, in other words. Giving away most of its technology to other innovators for free while retaining (and licensing) a few key pieces, Cousins says, is what Willow Garage is all about.

Encouraging spin-offs in particular is an important part of Willow Garage’s blueprint. Cousins cites a Harvard Business School study of PARC spin-off companies, which found that the more control Xerox kept over a spin-off, the less well it performed in the marketplace. "We’re going to keep just a small percentage [of spin-offs]," Cousins says.

On the other hand, anyone outside the company can build businesses around Willow Garage technology, too. "If you want to start Joe’s Robots, you just start it," Cousins says. "You can take [ROS]. But the hardware is not open source."

Robotics researcher Joshua Smith, at the University of Washington, was Willow Garage’s first paying customer. His team is developing add-ons to ROS that will enable PR2, among other things, to play Ping-Pong—and by doing so improve its quickness and coordination. And with each towel-folding and obstacle-navigating addition to ROS, PR2 gets smarter and better.

"The philosophy of the PR2 seems to be, let’s not worry about the cost right now—let’s build something expensive and really good," Smith says. "Will people develop memorable applications on the PR2? That will determine whether it’s the Alto of robotics."

About the Author

Mark Anderson is an author and science writer based in Northampton, Mass. In January 2011 he reported on a power-saving technique for flash memory.
