Robotic Vision Gets Sharper

New image sensor technology will let robots see in shadow

4 March 2005--For a robot monitoring the periphery of an airport or a chemical plant, it's crucial that it be able to see intruders lurking in shadows. While human eyes are exceptionally good at deciphering such hidden details, robots don't have that edge--yet. Researchers at Intrigue Technologies Inc., in Pittsburgh, hope to level the playing field with a new image sensor that works more like the retina of a human eye than current sensors do, letting robots see better in natural lighting.

In the controlled environment of the factory floor, where robots are most often found, light falls uniformly on objects, so their image sensors don't have to capture a wide range of light intensities. Outside in natural light, however, an imaging device must contend with shadows and sunlight; conventional sensors, such as those in digital cameras, can't capture pictures well under these conditions. Areas in bright light get washed out, whereas areas falling in shadow become too dark to show details. "A camera could capture bright images if the shutter speed was faster [for less exposure]," says Vladimir Brajovic, the president and CEO of Intrigue, which grew out of his work at The Robotics Institute at Carnegie Mellon University, in Pittsburgh. "Similarly, it could capture shadows if it was exposed longer. But a conventional sensor cannot simultaneously capture both."

When image sensors, usually made of silicon-based charge-coupled devices (CCDs) or complementary metal-oxide semiconductor (CMOS) circuits, take a picture, light falls on a geometric grid of millions of photodetectors on the surface of these devices. (Each detector corresponds to a pixel in the resulting image.) Light creates an electric charge in each detector in proportion to its intensity. After the detectors are exposed to the light for a time, circuitry reads the charges from the detectors and converts them into digital data that a machine can interpret.
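
To see why a single exposure can't serve both sunlit and shadowed regions, consider this minimal numerical sketch; the scene values, full-well limit, and exposure times are illustrative assumptions, not measurements of any real sensor:

```python
import numpy as np

# Toy scene: a sunlit region next to a shadowed region,
# in arbitrary photons-per-second units (illustrative values).
scene = np.array([[1000.0, 1000.0, 2.0, 2.0],
                  [1000.0, 1000.0, 2.0, 2.0]])

FULL_WELL = 255.0  # charge at which a detector saturates (washes out)

def expose(scene, exposure_time):
    """Charge accumulates in proportion to intensity and time,
    then clips at the detector's full-well capacity."""
    return np.clip(scene * exposure_time, 0.0, FULL_WELL)

print(expose(scene, 0.1))   # short exposure: highlights fine, shadows read ~0.2 (detail lost)
print(expose(scene, 50.0))  # long exposure: shadows read ~100, highlights clip at 255
```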

In the human eye, more than 100 million neurons work similarly to the photodetectors on a chip, but they have more sophisticated processing capabilities. For example, each neuron can continuously adjust to the intensity of the light falling on it.

Researchers working in the field called neuromorphic engineering try to recreate the workings of the eye and other neurobiological sensing systems in silicon chips. The term was coined in the mid-1980s by Carver Mead at the California Institute of Technology, in Pasadena. Neuromorphic vision chips, such as those being developed in the neuroengineering lab at the University of Pennsylvania, in Philadelphia, and by the Analog VLSI group at Johns Hopkins University, in Baltimore, have analog circuits at each detector. These circuits strive to mimic the retina's processing powers by, for example, using the intensity of the light falling on a photodetector to regulate its own sensitivity.

But adaptation based on a single photodetector's input isn't enough, Brajovic explains. To really work like an eye, imaging chips need each photodetector to adjust its sensitivity relative to the intensity of light on the surrounding pixels, too. In his design, analog circuits at each photodetector communicate in an intelligent way with those of neighboring photodetectors, performing a complicated algorithm that provides feedback to the detectors, telling them how best to adapt their sensitivity to the incoming light. As a result, the sensor can capture good images even in poor natural lighting conditions and uncover details other chips would miss.
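
Intrigue hasn't published the algorithm itself, but a classic software approximation of this kind of neighborhood-based adaptation is Retinex-style local normalization, sketched below. The window size, smoothing filter, and rescaling are assumptions chosen for illustration, not a description of Intrigue's circuits:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def locally_adaptive(image, window=15, eps=1e-3):
    """Divide each pixel by a blurred local average so its effective
    sensitivity adapts to the surrounding illumination, brightening
    shadows without washing out sunlit areas."""
    img = image.astype(float)
    illumination = uniform_filter(img, size=window)  # neighborhood mean
    reflectance = img / (illumination + eps)         # per-pixel adaptive gain
    return reflectance / reflectance.max()           # rescale for display
```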

"This is novel," says Charles Higgins, who does biologically inspired engineering research at the University of Arizona, Tucson. "[Brajovic] has studied something about vision and something about optics and has come up with a way of doing something that people before have not been able to do so well and it's really quite dramatic."

Right now, Intrigue's eye exists only in software, which is available on the Web and as an Adobe Photoshop plug-in. The software, Shadow Illuminator, takes a picture as an input, applies the smart pixel technology to each pixel in the picture, and produces a better output image. So far, it has processed sample images to show individuals concealed in shadows and revealed unclear features in medical X-rays. (The Web site lets you upload your own photos for enhancement.) But if the concept is to be used in robotic surveillance and security systems, autonomous vehicles, unmanned combat vehicles, and even in biometric recognition systems as Intrigue engineers hope, it must be made into a microchip. Intrigue has started the chip design and expects a prototype in 2006.
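
As a rough idea of how such per-pixel enhancement might be driven from a script, the hypothetical snippet below applies the locally_adaptive sketch from above to a grayscale photo. The file names are placeholders, and this is not Shadow Illuminator's actual interface:

```python
import numpy as np
from PIL import Image

# "patrol_photo.jpg" is a placeholder name, not a real sample image.
img = np.asarray(Image.open("patrol_photo.jpg").convert("L"))
enhanced = locally_adaptive(img)  # the sketch defined earlier
Image.fromarray((enhanced * 255).astype(np.uint8)).save("enhanced.png")
```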

While Shadow Illuminator can simulate up to 25 million pixels, the chip will hold far fewer than that because of the space taken up by the adaptive circuits. "[Our] pixel is about three times bigger than conventional pixels," says Brajovic. Still, he is confident that the prototype chip in the works now, with only a 320- by 240-pixel grid, should suffice for vision applications where present-day sensors fail, such as sending a robot out on patrol in cloudy weather.
