Dynamic Vision Sensors Enable High-Speed Maneuvers With Robots

An innovative camera system reacts in microseconds to track fast-moving robots

We love watching quadrotors pull off amazing high-speed, high-precision acrobatics as much as anyone. But we’re also the first to point out that almost without exception, stuff like this takes place inside a controlled motion-capture environment, and that the quadrotors themselves are blind bots being controlled entirely by a computer somewhere that’s viewing the entire scene at a crazy framerate and from all angles through an expensive camera setup.

It’s going to take something new and innovative for robots to be able to perform high-speed maneuvers outside of a lab. Something like a special kind of camera called a Dynamic Vision Sensor (DVS) that solves the problems that conventional vision systems face when dealing with rapid motion.

Conventional video cameras are bad at motion because of the way that they capture data. They’re basically just still cameras that pump out a whole bunch of pictures (frames) every second. Each one of these frames contains data that’s integrated over the entire period of time that the camera shutter was open, which is fine, except that you have the same problem that still cameras have: if there’s something in the frame that’s moving appreciably while the shutter is open, it shows up as a blur.
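To see why, here’s a toy simulation of that integration step (purely illustrative: the sensor size, exposure steps, and dot speed are made-up numbers, not from any real camera). A bright dot sweeps across a 1-D sensor while the shutter is open, and every position it occupied gets averaged into the same frame:

```python
import numpy as np

SENSOR_WIDTH = 32     # pixels (illustrative)
EXPOSURE_STEPS = 8    # sub-intervals integrated into one frame
DOT_SPEED = 2         # pixels the dot moves per sub-interval

frame = np.zeros(SENSOR_WIDTH)
for step in range(EXPOSURE_STEPS):
    instantaneous = np.zeros(SENSOR_WIDTH)
    instantaneous[step * DOT_SPEED] = 1.0    # where the dot is right now
    frame += instantaneous / EXPOSURE_STEPS  # integrate over the exposure

# The fast-moving dot smears across EXPOSURE_STEPS * DOT_SPEED pixels:
print(np.nonzero(frame)[0])  # -> [ 0  2  4  6  8 10 12 14]
```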

Most of the time, this isn’t an issue for robots (or people), because we’re not attempting (or observing) high-speed maneuvers. But flying robots moving at very high speeds need a better way to figure out where they are, since it’s hard to keep track of your environment when your camera is telling you that everything around you is one giant smear of pixels.

DVS is a special type of camera that says, “okay, if we’re going to be moving really fast, we don’t care about anything except for the relative motion of things around us.” Instead of sending back frames, a DVS transmits data on a per-pixel basis, and only when it detects that a pixel has changed.
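Here’s a minimal sketch of that per-pixel idea, assuming a simple contrast-threshold model (the threshold value, frame shapes, and function name are illustrative assumptions, not taken from the sensor’s actual design): each pixel remembers a reference brightness and fires an ON or OFF event only when the change since its last event crosses the threshold.

```python
import numpy as np

THRESHOLD = 0.15  # illustrative contrast threshold (log-intensity units)

def dvs_events(ref_log, new_frame, t_us):
    """Compare a new intensity frame against each pixel's reference level and
    return (x, y, timestamp_us, polarity) events only for pixels that changed."""
    new_log = np.log(new_frame + 1e-6)  # DVS pixels respond to log intensity
    diff = new_log - ref_log
    events = []
    ys, xs = np.nonzero(np.abs(diff) > THRESHOLD)
    for y, x in zip(ys, xs):
        polarity = 1 if diff[y, x] > 0 else -1  # brighter -> ON, darker -> OFF
        events.append((int(x), int(y), t_us, polarity))
        ref_log[y, x] = new_log[y, x]           # reset this pixel's reference
    return events

# Example: one pixel of a 4x4 scene brightens; only that pixel fires an event.
frame0 = np.full((4, 4), 0.5)
frame1 = frame0.copy()
frame1[2, 1] = 0.9
ref = np.log(frame0 + 1e-6)
print(dvs_events(ref, frame1, t_us=42))  # -> [(1, 2, 42, 1)]
```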

In other words, it’ll send back an outline of everything that’s changing at a very high temporal resolution (on the order of microseconds), taking care of both latency and motion blur; a toy sketch of that outline idea follows below.
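Continuing the illustrative sketch above (again an assumption-laden toy, not the researchers’ actual pipeline): accumulating events over a short window yields an image in which only moving edges appear, because pixels watching a static background never fire.

```python
import numpy as np

def event_image(events, width, height):
    """Accumulate (x, y, t_us, polarity) events into a 2-D histogram.
    Pixels that never changed stay at zero, so the result is an outline
    of whatever moved during the accumulation window."""
    img = np.zeros((height, width), dtype=np.int32)
    for x, y, _t_us, polarity in events:
        img[y, x] += polarity
    return img
```

Here it is in action in a 2008 implementation, balancing a pencil: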

And here’s a description of how it works, in the context of being used to estimate the pose of a quadcopter that’s doing flips at up to 1,200 degrees per second:

The spatial resolution of the camera used on the robot (a modified AR Drone, if you’re wondering) is only 128x128 pixels, but its temporal resolution is in single-digit microseconds. The OptiTrack cameras you see up on the walls were just used for recording ground truth data. Over 25 trials, the DVS and control system accurately tracked the robot 24 times, for an overall accuracy of 96 percent. Not bad.

At this point, the agility of autonomous aerial robots is limited almost entirely by the latency of their onboard sensing systems, and from the sound of things, using a DVS solves that problem, at least for vision. Future work from these researchers at the University of Zurich will focus on increasing the resolution of the DVS, teaching it to work in arbitrary environments, and implementing closed-loop control.

“Event-based, 6-DOF Pose Tracking for High-Speed Maneuvers,” by Elias Mueggler, Basil Huber, and Davide Scaramuzza from the University of Zurich, was presented last month at IROS 2014 in Chicago.

[ Robotics and Perception Group ]
