Quadrotors are capable of doing some incredible stunts, like flying through narrow windows and thrown hoops. Usually, when we talk about quadrotors doing stuff like this, we have to point out that there are lots of very complicated and expensive sensors and computers positioned around the room doing all of the hard work, and the quadrotor itself is just following orders.
Vijay Kumar’s lab at the University of Pennsylvania is often responsible for some of the most spectacular quadrotor stunts, but their latest research is some of the most amazing yet: They’ve managed to get quadrotors flying through windows using only onboard sensing and computing, meaning that no window is safe from a quadrotor incursion. None. Anywhere. You’ve been warned.
When you watch quadrotors flying indoors, if you look closely, you’ll almost always see a motion-capture system in the background: Arrays of external cameras mounted on the walls that work together to collect very precise positional information hundreds of times every second. In this screenshot, you can see part of the camera array, with active IR emitters glowing a scary red to illuminate reflective markers mounted on the quadrotor:
A Vicon motion-capture system for external localization of a quadrotor. Image: Hizook
With the data that a system like this provides, a computer has no problem issuing very precise commands to a quadrotor flying under remote control to get it to do just about whatever you want. The quadrotor itself is “dumb”: it has no idea where it is or what it’s doing; it’s just following orders, completely reliant on all of that external infrastructure to tell it what to do. This new video is different. Featuring research by Giuseppe Loianno from Professor Kumar’s lab, along with Gary McGrath and Chris Brunner from Qualcomm, it shows a (very small) quadrotor that’s smart enough to use its own sensors and onboard computation for localization, state estimation, and path planning, allowing it to perform stunts without any help:
On one hand, you’ve seen quadrotors perform nearly identical tricks years ago. What’s critical to understand, though, is how much work it’s taken to get all of the sensing and processing required to do this kind of thing down from an entire room to the hardware and software on board a 250-gram quadrotor (it carries an IMU, a Qualcomm Snapdragon processor, and a Hexagon DSP). It’s taken six years of work, in fact, and it’s an enormous accomplishment, because it means that quadrotors will be able to do things like this in useful environments, as opposed to just inside carefully controlled and expensively outfitted robotics research labs.
There is one caveat here, which is that the location of the obstacles relative to the starting position of the quadrotor is provided to the robot in advance. We asked Vijay Kumar and lead author Giuseppe Loianno to clarify this for us:
The vehicle has a front-facing stereo camera that does dense mapping. Right now this is not integrated into the real-time planning and control framework (this will happen soon). In the paper, [we] solve the problem of state estimation, control and planning: how to take information from a single camera and IMU and process this information to estimate the state of the vehicle, and how to [plan and control] using the onboard processors.
The hardest part of all of this, Vijay explained, is planning and control, which required a “particularly creative approach”:
When [the quadrotor] goes through the window, you will see it gain momentum and then flip its attitude. This is all done autonomously in one smooth motion (as opposed to the old video, in which the maneuver was decomposed into three primitives that were then “stitched” together). The other challenge is getting position and velocity estimates from a camera and IMU and closing the feedback loop at 500 Hz. This builds on our previous work with Qualcomm, which you have seen at CES, with a nonlinear controller. This is also distinct from the video.
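To see why the maneuver has to be one smooth motion, remember that a quadrotor can only thrust along its body z-axis, so its tilt is dictated by the acceleration it wants to produce. Here’s a toy 2-D (y-z plane) sketch of that idea, with illustrative numbers of my own — not the authors’ actual planner — showing why a 90-degree roll through a vertical slit implies a moment of vertical free fall, which is why the vehicle gains momentum first:

```python
import numpy as np

G = 9.81  # gravity, m/s^2

def roll_from_accel(a_des):
    """Return (roll in degrees, thrust in g's) for a desired 2-D acceleration.

    Toy version of the differential-flatness idea: the body z-axis must
    align with (a_des + g*e3), so the desired lateral acceleration ay and
    vertical acceleration az fix the roll angle. Illustrative only.
    """
    ay, az = a_des
    thrust = np.array([ay, az + G])     # specific force the rotors must supply
    roll = np.arctan2(ay, az + G)       # tilt needed to produce ay
    return np.degrees(roll), np.linalg.norm(thrust) / G

# Hover: no lateral acceleration means no roll, thrust of exactly 1 g.
print(roll_from_accel((0.0, 0.0)))

# To be rolled a full 90 degrees, the vertical thrust component must be
# zero (az = -g): the vehicle is momentarily in vertical free fall, so it
# needs to build up momentum before the flip.
print(roll_from_accel((9.81, -G)))
```

The second case is the punchline: at 90 degrees of roll, all of the thrust points sideways, so the whole traversal has to be planned as one continuous trajectory rather than stitched-together hover-to-hover primitives.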
That last bit, about the quadrotor estimating its location and pose in 3D space 500 times a second using just a downward-facing VGA camera and IMU, is one of the most difficult parts of the autonomy problem, since the size of the gap you can squeeze through depends on knowing exactly where you are with a very high degree of certainty. It also depends on knowing exactly where the gap is, but that tends to be a much easier problem since the gap isn’t moving. The quadrotor, on the other hand, is moving quite aggressively at speeds of 4.5 m/s, accelerations of over 1.5 g, and roll and pitch angles of up to 90 degrees.
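To get a feel for why the 500 Hz estimate matters, here’s a highly simplified 1-D predict/correct sketch — with made-up noise figures, not the authors’ actual filter — in which a noisy IMU dead-reckons the state at 500 Hz and much slower camera fixes (assumed here to arrive at a typical 30 Hz frame rate) pull the accumulated drift back toward the truth:

```python
import numpy as np

# Toy complementary-filter-style loop: IMU prediction at 500 Hz,
# camera corrections at ~30 Hz. All noise values are illustrative.
IMU_RATE = 500.0           # Hz, matches the feedback rate in the article
CAM_RATE = 30.0            # Hz, assumed camera frame rate
DT = 1.0 / IMU_RATE

def run_filter(true_accel, imu_noise=0.5, cam_noise=0.02, gain=0.3, steps=500):
    """Estimate 1-D position from a noisy accelerometer plus sparse camera fixes."""
    rng = np.random.default_rng(0)
    pos_true = vel_true = 0.0
    pos_est = vel_est = 0.0
    cam_every = int(IMU_RATE / CAM_RATE)   # ~17 IMU ticks per camera frame
    for k in range(steps):
        # Ground truth integrates the commanded acceleration.
        vel_true += true_accel * DT
        pos_true += vel_true * DT
        # Predict: dead-reckon with the noisy IMU at 500 Hz.
        accel_meas = true_accel + rng.normal(0.0, imu_noise)
        vel_est += accel_meas * DT
        pos_est += vel_est * DT
        # Correct: blend in a noisy camera position fix at the camera rate.
        if k % cam_every == 0:
            pos_cam = pos_true + rng.normal(0.0, cam_noise)
            pos_est += gain * (pos_cam - pos_est)
    return pos_true, pos_est

# One second of flight at ~1.5 g, the acceleration quoted in the article.
true_pos, est_pos = run_filter(true_accel=1.5 * 9.81)
print(f"true: {true_pos:.3f} m, estimated: {est_pos:.3f} m")
```

Even in this toy, cutting the camera corrections makes the dead-reckoned position drift away within a second — which is exactly why fusing the two sensors fast enough to close the loop at 500 Hz is the hard part.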
Obviously, we’d love to see all of this stuff working outside of a lab. I mean, I’ve got all of these half-open windows in my house, and without quadrotors flying through them, they just seem sort of useless, you know? The researchers are currently working on real-time mapping strategies for obstacle detection and dynamic planning, which they’ve already demonstrated (on a larger platform) as part of DARPA’s FLA program. And since all of this stuff is running on smartphone-grade hardware, the chances of it doing something useful in the near future seem very, very good.
“Estimation, Control, and Planning for Aggressive Flight With a Small Quadrotor With a Single Camera and IMU,” by Giuseppe Loianno, Chris Brunner, Gary McGrath, and Vijay Kumar from the University of Pennsylvania and Qualcomm, has been submitted to Robotics and Automation Letters and ICRA 2017.
[ Vijay Kumar Lab ]
Evan Ackerman is a senior editor at IEEE Spectrum. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.