Dynamic Vision Sensors Enable High-Speed Maneuvers With Robots

An innovative camera system reacts in microseconds to track fast-moving robots


We love watching quadrotors pull off amazing high-speed, high-precision acrobatics as much as anyone. But we’re also the first to point out that almost without exception, stuff like this takes place inside a controlled motion-capture environment, and that the quadrotors themselves are blind bots, controlled entirely by an offboard computer that views the entire scene at a very high framerate and from all angles through an expensive camera setup.

It’s going to take something new and innovative for robots to be able to perform high-speed maneuvers outside of a lab. Something like a special kind of camera called a Dynamic Vision Sensor (DVS) that solves the problems that conventional vision systems face when dealing with rapid motion.

Conventional video cameras are bad at motion because of the way that they capture data. They’re basically just still cameras that pump out a whole bunch of pictures (frames) every second. Each one of these frames contains data integrated over the entire period that the camera shutter was open, which is fine, except that you have the same problem that still cameras have: if something in the frame moves appreciably while the shutter is open, it blurs itself.
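To make the integration point concrete, here is a toy sketch (my own illustration, not from the article) of why a frame-based camera blurs motion: the frame averages the scene over the whole exposure, so a bright point that moves during the exposure gets smeared across every pixel it visited.

```python
import numpy as np

def capture_frame(width=16, speed_px_per_step=1, exposure_steps=8):
    """Average a 1-D scene over the exposure window, like an open shutter."""
    frame = np.zeros(width)
    for t in range(exposure_steps):
        scene = np.zeros(width)
        scene[(t * speed_px_per_step) % width] = 1.0  # a single moving bright dot
        frame += scene
    return frame / exposure_steps

blurred = capture_frame()
# The one-pixel dot is now spread over 8 pixels, each at 1/8 brightness.
print(np.count_nonzero(blurred), blurred.max())
```

The faster the dot moves (or the longer the exposure), the more pixels it smears across and the dimmer each one gets, which is exactly the smear a fast-moving robot sees.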

Most of the time, this isn’t an issue for robots (or people), because we’re not attempting (or observing) high-speed maneuvers. But flying robots moving at very high speeds need a better solution to keep track of where they are, since it’s hard to make sense of your environment when your camera is telling you that everything around you is one giant smear of pixels.

DVS is a special type of camera that says, “okay, if we’re going to be moving really fast, we don’t care about anything except for the relative motion of things around us.” Instead of sending back frames, a DVS transmits data on a per-pixel basis, and only if it detects that the pixel has changed.

In other words, it’ll send back an outline of everything that’s changing at a very high temporal resolution (microsecond), taking care of both latency and motion blur. Here it is in action in a 2008 implementation, balancing a pencil:
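The per-pixel event principle described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not the sensor’s actual interface: each pixel independently emits a timestamped event only when its log-brightness changes by more than a contrast threshold (the function name and threshold value are my assumptions).

```python
import math

def dvs_events(frames, timestamps, threshold=0.2):
    """Yield (t, x, y, polarity) events from a sequence of 2-D intensity frames."""
    # Each pixel remembers the log-brightness at which it last fired.
    ref = [[math.log(v + 1e-6) for v in row] for row in frames[0]]
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                logv = math.log(v + 1e-6)
                diff = logv - ref[y][x]
                if abs(diff) >= threshold:
                    # Polarity: +1 for brightening (ON), -1 for dimming (OFF).
                    events.append((t, x, y, 1 if diff > 0 else -1))
                    ref[y][x] = logv  # only the firing pixel updates its reference
    return events

# Two 2x2 frames: only the top-left pixel brightens, so only it fires.
frames = [[[0.5, 0.5], [0.5, 0.5]],
          [[0.9, 0.5], [0.5, 0.5]]]
print(dvs_events(frames, timestamps=[0, 5]))  # a single ON event at pixel (0, 0)
```

Because unchanged pixels stay silent, the output is a sparse, microsecond-timestamped stream of edges in motion rather than full frames, which is what eliminates both the latency and the blur.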

And here’s a description of how it works, in the context of being used to estimate the pose of a quadcopter that’s doing flips at up to 1,200 degrees per second:

The spatial resolution of the camera used on the robot (a modified AR Drone, if you’re wondering) is only 128x128 pixels, but its temporal resolution is in single-digit microseconds. The OptiTrack cameras you see up on the walls were just used for recording ground truth data. Over 25 trials, the DVS and control system accurately tracked the robot 24 times, for an overall accuracy of 96 percent. Not bad.

At this point, the agility of independent aerial robots is limited almost entirely by the latency of onboard sensing systems, and from the sound of things, using a DVS solves that problem, at least for vision. Future work from these researchers at the University of Zurich will focus on increasing the resolution of the DVS, teaching it to work in arbitrary environments, and implementing closed-loop control.

“Event-based, 6-DOF Pose Tracking for High-Speed Maneuvers,” by Elias Mueggler, Basil Huber, and Davide Scaramuzza from the University of Zurich, was presented last month at IROS 2014 in Chicago.

[ Robots and Perception Group ]
