Robotic Vision Gets Sharper

New image sensor technology will let robots see in shadow


4 March 2005--For a robot monitoring the periphery of an airport or a chemical plant, it's crucial that it be able to see intruders lurking in shadows. While human eyes are exceptionally good at deciphering such hidden details, robots don't have that edge--yet. Researchers at Intrigue Technologies Inc., in Pittsburgh, are hoping to level the playing field by developing a new image sensor that works more like the retina of the human eye than current sensors do, allowing robots to see better in natural lighting.

In the controlled environment of the factory floor, where robots are most often found, light falls uniformly on objects, so their image sensors don't have to capture a wide range of light intensities. Outside in natural light, however, an imaging device must contend with both shadows and sunlight; conventional sensors, such as those in digital cameras, can't capture pictures well under these conditions. Areas in bright light get washed out, whereas areas falling in shadow become too dark to show detail. "A camera could capture bright images if the shutter speed was faster [for less exposure]," says Vladimir Brajovic, the president and CEO of Intrigue, which grew out of his work at The Robotics Institute at Carnegie Mellon University, in Pittsburgh. "Similarly, it could capture shadows if it was exposed longer. But a conventional sensor cannot simultaneously capture both."
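The article does not detail how Intrigue's retina-like sensor processes light, but the trade-off Brajovic describes can be illustrated in software. The sketch below is a generic retinex-style local adaptation, not Intrigue's actual design: each pixel is divided by a blurred estimate of the local illumination, so texture survives in both sunlit and shadowed regions where any single global exposure would lose one or the other. The function name and parameters here are illustrative.

```python
# A generic retinex-style local adaptation, sketched to illustrate
# the dynamic-range problem Brajovic describes. This is NOT
# Intrigue's actual circuit, which the article does not detail.
import numpy as np
from scipy.ndimage import gaussian_filter

def retina_like_tonemap(image, sigma=15.0, eps=1e-3):
    """Compress a wide-dynamic-range image by local adaptation.

    image: 2-D float array of linear light intensities (any range).
    sigma: spatial scale, in pixels, of the illumination estimate.
    """
    image = image.astype(np.float64)
    # Estimate the slowly varying illumination with a wide Gaussian
    # blur, loosely analogous to lateral averaging in the retina.
    illumination = gaussian_filter(image, sigma=sigma) + eps
    # Dividing out the illumination leaves a reflectance-like ratio,
    # so texture survives in both sunlit and shadowed regions.
    adapted = image / illumination
    return adapted / adapted.max()  # normalize to [0, 1] for display

# Synthetic scene: the same fine texture everywhere, but the left
# half is lit roughly 1000 times more brightly than the right.
rng = np.random.default_rng(0)
texture = rng.uniform(0.8, 1.2, size=(128, 128))
scene = texture.copy()
scene[:, :64] *= 1000.0
result = retina_like_tonemap(scene)
# Comparable contrast remains in both halves after adaptation.
print(result[:, :64].std(), result[:, 64:].std())
```

With a single global exposure, either the shadowed half of this synthetic scene quantizes to near-black or the sunlit half saturates; dividing out the local illumination keeps comparable contrast in both halves at once.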

[Illustration: An astronaut performing mechanical repairs to a satellite using two extra mechanical arms that project from a backpack. Extra limbs, controlled by wearable electrode patches that read and interpret neural signals from the user, could have innumerable uses, such as assisting on spacewalk missions to repair satellites. Illustration by Chris Philpot.]

What could you do with an extra limb? Consider a surgeon performing a delicate operation, one that needs her expertise and steady hands, all three of them. As her two biological hands manipulate surgical instruments, a third robotic limb that's attached to her torso plays a supporting role. Or picture a construction worker who is thankful for his extra robotic hand as it braces the heavy beam he's fastening into place with his other two hands. Imagine wearing an exoskeleton that would let you handle multiple objects simultaneously, like Spider-Man's Doctor Octopus. Or contemplate the out-there music a composer could write for a pianist who has 12 fingers to spread across the keyboard.

Such scenarios may seem like science fiction, but recent progress in robotics and neuroscience makes extra robotic limbs conceivable with today’s technology. Our research groups at Imperial College London and the University of Freiburg, in Germany, together with partners in the European project NIMA, are now working to figure out whether such augmentation can be realized in practice to extend human abilities. The main questions we’re tackling involve both neuroscience and neurotechnology: Is the human brain capable of controlling additional body parts as effectively as it controls biological parts? And if so, what neural signals can be used for this control?
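The article leaves open which neural signals could drive an extra limb, but one commonly studied candidate is muscle activity picked up by surface electrodes. The sketch below is a deliberately simplified, hypothetical pipeline, not the NIMA project's method: it rectifies and smooths an electrode signal into an envelope and maps that envelope onto a grip-closure command. All names and thresholds are invented for illustration.

```python
# A simplified, hypothetical control pipeline (not the NIMA
# project's method): rectify and smooth a surface-electrode signal
# into an envelope, then map it to a grip-closure command.
import numpy as np

def emg_envelope(raw, window=50):
    """Moving-average envelope of a rectified electrode signal."""
    rectified = np.abs(raw - raw.mean())  # remove DC offset, rectify
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def grip_command(envelope, rest_level, max_level):
    """Map the envelope linearly onto a [0, 1] grip closure."""
    cmd = (envelope - rest_level) / (max_level - rest_level)
    return np.clip(cmd, 0.0, 1.0)

# Simulated signal: sensor noise at rest, then a burst of muscle
# activity between samples 1000 and 1500.
rng = np.random.default_rng(1)
signal = rng.normal(0.0, 0.05, 2000)
signal[1000:1500] += rng.normal(0.0, 0.5, 500)
env = emg_envelope(signal)
print(grip_command(env, rest_level=0.05, max_level=0.4)[::250])
```

A real system would face the harder problem the researchers raise: separating the signals intended for the extra limb from those already controlling the user's biological limbs.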
