In just about every video featuring drones making aggressive maneuvers around obstacles there’s some amount of “cheating” going on. By that we mean the drones are typically relying on an external motion-capture system, as well as beefy offboard computers and a rock-solid wireless link. For doing research on aggressive maneuvers and other drone capabilities, it’s totally fine to “cheat” like that. But at some point you’ll want your drones to be able to fly anywhere and not just inside the controlled environment of a very expensive robotics lab.
With that goal in mind—and just US $1700 in hardware—MIT PhD student Andrew Barry has managed to fire a fixed-wing drone at some trees and not hit them, using only two cellphones worth of onboard computing hardware and real-time image processing.
Real-time image processing using hardware small enough, and efficient enough, to be used on a drone has always been a challenge. Cameras that have the necessary frame rate and resolution to enable you to clearly see obstacles in the first place pour out a humongous number of pixels, each one of which needs to be analyzed to determine whether the drone has to worry about it. Barry’s insight was that a fast-flying drone doesn’t have to care about almost everything that it can see: it only needs to be concerned with a relatively small volume at a fixed distance in front of it.
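That fixed-distance insight is what makes the computation tractable: instead of running full stereo matching across every possible depth, you only test whether each pixel matches at the *one* disparity corresponding to your depth of interest. Here's a toy Python sketch of that idea — not Barry's actual implementation; the block size, threshold value, and sum-of-absolute-differences matching are placeholder choices for illustration:

```python
import numpy as np

def single_disparity_check(left, right, d, block=5, thresh=500):
    """Flag pixels whose left/right image patches agree at ONE fixed
    disparity d -- i.e., points lying near the single depth plane the
    drone cares about. Full stereo would search many disparities;
    checking just one is what keeps this cheap enough to run onboard.

    left, right: grayscale images as 2-D float arrays.
    d: the disparity (in pixels) corresponding to the chosen depth.
    """
    h, w = left.shape
    hits = np.zeros((h, w), dtype=bool)
    r = block // 2
    for y in range(r, h - r):
        for x in range(r + d, w - r):
            patch_l = left[y - r:y + r + 1, x - r:x + r + 1]
            patch_r = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
            # Sum of absolute differences: a low cost means the two
            # patches line up at this disparity, so the point sits
            # near the fixed-depth detection plane.
            if np.abs(patch_l - patch_r).sum() < thresh:
                hits[y, x] = True
    return hits
```

A real system would vectorize this and add validation to reject low-texture false matches, but the essential saving is visible here: one disparity tested per pixel instead of dozens.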
To put this in context, consider what you’re doing when you run on rough ground. If you’re anything like me (and I flatter myself that I’m “normal”), you’re perpetually looking at an area of ground a few meters in front of where you currently are as you run. The area closer to your feet you’ve already seen and analyzed, and you know how to run over it without faceplanting. The area farther away isn’t important because you’re not there yet. All you really need to pay attention to is this narrow range in front of you, and you’ll be able to run indefinitely as long as there are no abrupt environmental changes.
MIT’s drone flies pretty much exactly like you run. Using stereo imagery from a pair of 376-by-240-pixel, 120-frame-per-second cameras spaced 34 centimeters apart, the drone focuses its attention (for robots, this equates to obstacle avoidance algorithms) on pixels that are about 10 meters away and nothing else. It saves these pixels in memory, and the next image (taken 8.3 cm later if the drone is flying at 10 meters per second) adds more pixels beyond the previous set. In this way, the drone can very efficiently build up a 3D map of what’s directly in front of it, and take action based on that map. This technique is called “pushbroom stereo detection,” because the detection area is like a three-dimensional broom that’s being pushed forward.
The drone only remembers pixels for a second or two, so it’s not building a map of the area that it’s flying through (although it certainly could). The obstacle avoidance itself is dynamic, reactive, and computed entirely on the drone, which searches through an existing library of trajectories that it knows to be stable and chooses the best one (you can see this happening in the video). Because the detection horizon for obstacles is so short, the drone might not have enough time to take an effective evasive maneuver if (say) it approaches a building, but for trees and other relatively small and discrete obstacles, it seems like it should be able to continue avoiding things indefinitely. As the researchers point out, the detection horizon is primarily constrained by computer processing power, so as that improves, they’ll be able to scan multiple depths to plan more complex paths around multiple obstacles at varying distances.
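The trajectory-library idea described above can be sketched very simply: every candidate path is precomputed and known to be flyable, so at runtime the planner only has to score each one against the detected obstacle points and pick the safest. This is an illustrative Python reduction to 2D, with a made-up clearance metric — not the actual MIT planner:

```python
import numpy as np

def pick_trajectory(library, obstacles, safety_radius=1.0):
    """Choose the best precomputed trajectory given detected obstacles.

    library: list of (N, 2) arrays of x, y waypoints in the body frame,
             each known in advance to be dynamically stable to fly.
    obstacles: (M, 2) array of obstacle points from the depth detection.
    Returns the index of the trajectory whose closest approach to any
    obstacle is largest, or None if every option comes within the
    safety radius of an obstacle.
    """
    best_idx, best_clearance = None, safety_radius
    for i, traj in enumerate(library):
        # Minimum distance from any waypoint to any obstacle point.
        diffs = traj[:, None, :] - obstacles[None, :, :]
        clearance = np.sqrt((diffs ** 2).sum(axis=-1)).min()
        if clearance > best_clearance:
            best_idx, best_clearance = i, clearance
    return best_idx
```

Because the expensive part (verifying that each trajectory is stable for the aircraft’s dynamics) happens offline, the onboard computer only pays for this cheap lookup-and-score step at every replanning cycle.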
This work combines two related lines of research: high-speed maneuvering around obstacles in flight, and pushbroom stereo obstacle detection. We’ve covered both in the past (in 2012 and 2014, respectively), but here are the videos again, because it’s very cool to see the individual pieces that resulted in this new capability:
Incidentally, this is exactly what makes us think that delivery drones aren’t going to be ready for a while: getting stuff like this to work properly takes lots of incremental steps, and most of them aren’t easy. It’s possible that companies are throwing enough people and money at the problem to make substantial progress, but we’re just not seeing anything like this from the people who promise drone delivery.
So it’s exciting to see real progress from MIT, and we’d love to be wrong about the near future of drone delivery. If that’s going to happen, it’s going to take more research like this.
[ MIT ]
Evan Ackerman is the senior writer for IEEE Spectrum's award-winning robotics blog, Automaton. Since 2007, he has written over 6,000 articles on robotics and emerging technology, covering conferences and events on every single continent except Antarctica (although he remains optimistic). In addition to Spectrum, Evan's work has appeared in a variety of other online publications including Gizmodo and Slate, and you may have heard him on NPR's Science Friday or the BBC World Service if you were listening at just the right time. Evan has an undergraduate degree in Martian geology, which he almost never gets to use, and still wants to be an astronaut when he grows up. In his spare time, he enjoys scuba diving, rehabilitating injured raptors, and playing bagpipes excellently.