Yesterday, we posted about some dirt cheap micro air vehicles on Kickstarter. Cheap hardware is great, but to make it do cool stuff, you usually need expensive (or at least, very clever) software. Researchers at Cornell have come up with a way to enable robotic aircraft to navigate around outdoor obstacles using just a single camera and hardware that mimics neuron architecture.
So, why is perceiving obstacles extremely important for aerial robots, and why are current methods based on stereo vision fundamentally limited? Here's what the researchers have to say:
Perceiving obstacles is extremely important for an aerial robot in order to avoid collisions. Methods based on stereo vision are fundamentally limited by the finite baseline between the stereo pairs, and fail in textureless regions and in presence of specular reflections. Active range-finding devices are either designed for indoor low-light environments (e.g., the Kinect), or are too heavy for aerial applications. More importantly, they demand more onboard power, which is at a premium for aerial vehicles.
Nicely put. This is especially relevant for small and cheap robots, because all you need is a single camera, minimal processing power, and even more minimal battery power.
The new algorithm works by taking a single still frame from a camera stream and classifying the image into areas that are safe for the robot to pass through and areas that aren't. To do this quickly and efficiently, the researchers run their algorithm on a neuromorphic hardware platform based on the collective firing of a network of artificial neurons. Obstacles are separated from the background using a series of learned visual cues (like the fact that parallel lines appear to converge in the distance, or the apparent size of familiar objects), and the final platform will be able to process several frames per second using less than one watt of power. The system works quite well in practice: in 53 autonomous flights in obstacle-rich environments, the robot reached its objective without crashing into anything (or killing anyone) in all but the final two flights, which were messed up by gusts of wind (the researchers are working on compensating for that). Here's some video:
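To give a rough sense of the idea, here's a toy sketch of single-image obstacle classification. This is not the researchers' method: their system uses learned monocular depth cues running on neuromorphic hardware, while this sketch just divides a frame into a grid and uses raw local intensity variation as a crude stand-in for "something is here." The function name, grid size, and threshold are all made up for illustration.

```python
import numpy as np

def classify_cells(image, grid=(8, 8), threshold=15.0):
    """Toy single-image obstacle classifier (illustrative only).

    Splits the frame into a grid and labels each cell as obstacle
    (True) or free space (False) based on local intensity variation,
    a crude proxy for the learned monocular cues in the real system.
    """
    h, w = image.shape
    gh, gw = grid
    labels = np.zeros(grid, dtype=bool)
    for i in range(gh):
        for j in range(gw):
            cell = image[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            # High local variation is treated as a possible obstacle.
            labels[i, j] = cell.std() > threshold
    return labels

# Synthetic 64x64 frame: flat "sky" with a high-contrast band
# (a stand-in for a tree trunk) covering pixel columns 24-39.
frame = np.full((64, 64), 128.0)
rng = np.random.default_rng(0)
frame[:, 24:40] += rng.normal(0, 50, size=(64, 16))

labels = classify_cells(frame, grid=(8, 8))
print(labels.any(axis=0))  # which grid columns contain obstacle cells
```

A planner would then steer the vehicle toward grid columns with no flagged cells. The real contribution of the paper is doing the classification step with learned cues, in parallel, at under a watt.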
It works indoors on an AR Drone, too, as shown in some earlier research:
And here's some much earlier research doing 3D obstacle avoidance from monocular images on a robot car, which looks like fun:
So what's next?
Saxena plans to improve the robot's ability to respond to environment variations such as winds, and enable it to detect and avoid moving objects, like real birds; for testing purposes, he suggests having people throw tennis balls at the flying vehicle.
Heck yeah! SIGN ME UP!
"Low-Power Parallel Algorithms for Single Image based Obstacle Avoidance in Aerial Robots," by Ian Lenz, Mevlana Gemici, and Ashutosh Saxena from Cornell University, was presented last month at IROS 2012 in Portugal.
[ Cornell MAV ] via [ Cornell ]
Evan Ackerman is a senior editor at IEEE Spectrum. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.