Modeling Uncertainty Helps MIT's Drone Zip Around Obstacles

This drone keeps track of what it doesn't know to quickly plan aggressive maneuvers

MIT's NanoMap drone
Catch that drone!
Photo: Jonathan How/MIT CSAIL

It’s not too hard to make a drone that can fly very fast, and it’s not too hard to make a drone that can avoid obstacles. Making a drone that can do both at once is much more difficult, but it’s necessary if drones are going to be useful in the real world.

At MIT CSAIL, Pete Florence (in Russ Tedrake’s lab) has developed a new motion planning framework called NanoMap, which uses a sequence of 3D snapshots to allow fast-moving (10 m/s) drones to safely navigate around obstacles even if they’re not entirely sure where they are.

Here’s a video of MIT’s drone in action. Don’t worry if you don’t catch all the details, as we’ll take a crack at explaining what’s going on afterwards:

I don’t mind telling you, this is one of those papers which reminds me that I have a degree in geology rather than robotics. Oof. So, let’s start with the key idea of NanoMap, which the paper helpfully makes explicit right there in the abstract:

The key idea of NanoMap is to store a history of noisy relative pose transforms and search over a corresponding set of depth sensor measurements for the minimum-uncertainty view of a queried point in space.

If that doesn’t completely make sense, I’m right there with you, but this figure should help a bit with the basic idea:

MIT's NanoMap. This figure shows how NanoMap evaluates motion plans (blue line), given a series of depth sensor measurements over time (gray triangles). For each sample point (red dot), the history of measurements is searched until a view is found (orange triangle) that contains the sample point. Image: MIT CSAIL

As the drone moves forward, it takes a continuous sequence of depth sensor snapshots (at something like 30 Hz, depending on the sensor), represented by the gray triangles* above. See that little curvy blue line? Let’s say that’s the trajectory that you want the drone to fly along next. To get to the first point in that trajectory (the red dot in the second diagram from left), the drone has a good enough view of what’s going on from right where it is. But to plan farther ahead, the drone needs information about areas outside of the current field of view of its depth sensor. NanoMap then starts looking backwards through its collection of snapshots, until it finds one that shows the area it needs to plan into. If it can’t find a good snapshot, then it’ll have to slow down and look around a bit, but if it does find one, it has the information it needs to move much more aggressively, since it already knows where obstacles might be.
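
If you like thinking in code, here’s a minimal sketch of that backward search, just to make the idea concrete. This is not the actual NanoMap implementation (which is open source and written in C++); the class, the function names, and the cone approximation of the sensor’s field of view are all illustrative assumptions:

```python
# Illustrative sketch only: keep a short history of depth-sensor snapshots,
# each with the (noisy, odometry-derived) transform back to the current body
# frame, and answer "what did we see at point p?" with the most recent
# snapshot whose field of view contains p.

from dataclasses import dataclass
import numpy as np


@dataclass
class Snapshot:
    # Rotation/translation mapping a point from the *current* body frame
    # into this snapshot's camera frame (accumulated odometry, hence noisy).
    R: np.ndarray          # 3x3 rotation
    t: np.ndarray          # 3-vector translation
    pos_stddev: float      # grows the further back in time the snapshot is;
                           # carried along so the planner can widen its margins
    half_fov_rad: float    # half field-of-view of the depth sensor
    max_range: float       # sensor range


def contains(snap: Snapshot, p_current_frame: np.ndarray) -> bool:
    """True if the query point falls inside this snapshot's sensing volume
    (approximated here as a cone rather than a true frustum, for brevity)."""
    p_cam = snap.R @ p_current_frame + snap.t
    depth = p_cam[2]                       # camera looks along +z
    if depth <= 0.0 or depth > snap.max_range:
        return False
    angle = np.arctan2(np.linalg.norm(p_cam[:2]), depth)
    return angle <= snap.half_fov_rad


def query(history: list[Snapshot], p: np.ndarray):
    """Search newest-to-oldest. Assuming pose uncertainty only grows with age,
    the first snapshot that saw the point is also the minimum-uncertainty view."""
    for snap in history:                   # history[0] is the newest snapshot
        if contains(snap, p):
            return snap
    return None                            # never seen: plan conservatively
```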

The problem with this technique is that the farther back the drone has to look to find the right snapshot, the less certain it’s going to be about where exactly it was (relative to where it is now) when that snapshot was taken, and consequently where it needs to move to avoid any obstacles that the snapshot contains. This stems from the fact that autonomous robots tend to be lousy at independently keeping track of their own position. Because IMUs aren’t perfect, and cameras and lidar aren’t perfect, a robot that’s estimating where it is relative to where it’s been will grow steadily less accurate.
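
To get a feel for why that matters, here’s a back-of-the-envelope calculation (my own simplification, not from the paper): if each frame-to-frame pose estimate carries a little independent noise, then chaining estimates further back through the snapshot history makes the total positional uncertainty grow roughly with the square root of the number of steps.

```python
# Illustration only: why looking further back means more positional
# uncertainty. Assume each relative pose transform between consecutive
# snapshots carries independent zero-mean noise with stddev sigma_step.

import numpy as np

sigma_step = 0.02   # assumed 2 cm of translation noise per frame-to-frame step

for n_steps_back in [1, 5, 10, 30, 60]:
    # Independent Gaussian steps: variances add, so stddev grows like sqrt(n).
    sigma_total = sigma_step * np.sqrt(n_steps_back)
    print(f"{n_steps_back:3d} frames back -> ~{100 * sigma_total:.1f} cm "
          f"of positional uncertainty")
```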

Many robots that localize themselves on maps that they build (via simultaneous localization and mapping, or SLAM) use what’s called loop closure to try and compensate for this. They wander around, building up a map that gradually gets less accurate as the robot’s sensors drift and it becomes less certain about its location. After the robot has wandered for a while, it loops back around, and once it recognizes where it’s started, it can compare the starting point it remembered with what it sees now and close the loop, calibrating (to some extent) the rest of the map. Here’s a video of this in practice—note how the map snaps into something much more right-angle-y (and accurate) as soon as the robot recognizes that it’s been in that big oval room before:
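
If you’d rather see the idea in numbers than in a video, here’s a deliberately oversimplified toy version of loop closure (real SLAM systems use pose-graph optimization, not this linear smearing of the error): odometry drifts as the robot circles a loop, and once it recognizes the start, the endpoint error is spread back along the trajectory to straighten out the map.

```python
# Toy loop-closure illustration (not NanoMap, which skips this step entirely).

import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a square loop of unit steps that returns to the origin.
true_steps = [(1, 0)] * 10 + [(0, 1)] * 10 + [(-1, 0)] * 10 + [(0, -1)] * 10
noisy_steps = [np.array(s, float) + rng.normal(0, 0.05, 2) for s in true_steps]

poses = [np.zeros(2)]
for step in noisy_steps:
    poses.append(poses[-1] + step)         # dead reckoning: drift accumulates

drift = poses[-1] - poses[0]               # loop detected: this *should* be zero
print("drift before closure:", np.round(drift, 2))

# Crude closure: distribute the endpoint error linearly along the loop.
n = len(poses) - 1
corrected = [p - drift * (i / n) for i, p in enumerate(poses)]
print("endpoint error after closure:", np.round(corrected[-1] - corrected[0], 6))
```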

This is the conventional way of doing autonomous navigation: Compensating for uncertainty through multiple views of the same landmarks—spaces or objects—that can be calibrated against one another, ending up with a nice big accurate map. NanoMap does away with this entirely. Instead, it just uses that sequence of snapshots and compensates for uncertainty by modeling what the uncertainty actually is for each snapshot, looking for the snapshot with the lowest uncertainty, and then incorporating the uncertainty into the motion plan. The higher the uncertainty, the farther away the drone will fly from obstacles, since the space in which each obstacle could potentially be gets bigger.
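
One simple way to picture “incorporating the uncertainty into the motion plan” is as an inflated safety margin: the less sure the drone is about where an obstacle is, the more clearance it demands. The sketch below is my own illustrative take on that rule; the radii, the three-sigma bound, and the function itself are assumptions, not the paper’s exact math.

```python
# Sketch of the "higher uncertainty -> stay further away" rule (assumptions,
# not the paper's formulation).

def required_clearance(drone_radius: float,
                       obstacle_radius: float,
                       pose_stddev: float,
                       n_sigmas: float = 3.0) -> float:
    """Minimum distance to keep between the planned trajectory and an obstacle
    seen in a snapshot whose relative pose is only known to within pose_stddev."""
    # Inflate the obstacle by a confidence bound on where it *could* be,
    # given how uncertain we are about the old snapshot's pose.
    return drone_radius + obstacle_radius + n_sigmas * pose_stddev

for stddev in [0.02, 0.10, 0.30]:   # fresh view vs. older, more drifted views
    print(f"pose stddev {stddev:.2f} m -> keep "
          f"{required_clearance(0.3, 0.5, stddev):.2f} m of clearance")
```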

With drones, most of the uncertainty comes from the accuracy of measuring acceleration with an IMU. Not getting an acceleration measurement quite right (which you never really do) leads to inaccuracies in estimations of velocity and position that get worse over time. This is called drift, and the more extreme the maneuvers your drone is attempting, the more severe the drift gets, so (not surprisingly) drones that move faster and more dynamically have a much harder time accurately estimating where they are.
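
Here’s the arithmetic behind that, stripped to its simplest form: a small constant error in measured acceleration integrates once into a velocity error that grows linearly with time, and twice into a position error that grows with the square of time. The bias value below is just an arbitrary example.

```python
# Back-of-the-envelope IMU drift: a constant bias in measured acceleration
# double-integrates into a position error that grows quadratically with time.

accel_bias = 0.05        # m/s^2 of unmodeled acceleration error (assumed)
for t in [0.5, 1.0, 2.0, 4.0]:
    velocity_error = accel_bias * t              # first integration
    position_error = 0.5 * accel_bias * t ** 2   # second integration
    print(f"after {t:3.1f} s: velocity off by {velocity_error:.2f} m/s, "
          f"position off by {position_error:.3f} m")
```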

MIT's NanoMap drone. The MIT drone used an Intel RealSense R200 (for outdoor environments) and an Asus Xtion (for indoor environments) as its depth camera. A Hokuyo 2D lidar sensor aided with obstacle detection but was used in a memoryless fashion. Photo: Jonathan How/MIT CSAIL

In testing, the researchers found that their uncertainty modeling really started to pay off when drift got much worse than 20 cm/s or so. Up to about 75 cm/s of drift, planning with NanoMap and incorporating uncertainty was able to keep the drone from crashing 97-98 percent of the time. With drift well over 1 m/s, the drone was only safe 10 percent of the time, but that was three times more robust than testing without uncertainty modeling. The press release sums it up overall:

If NanoMap wasn’t modeling uncertainty and the drone drifted just 5 percent away from where it was expected to be, the drone would crash more than once every four flights. Meanwhile, when it accounted for uncertainty, the crash rate reduced to 2 percent.

So that’s pretty good, right? MIT has already shown how well this works through flight experiments conducted as part of DARPA’s FLA program, and NanoMap is open source and available for you to play with on GitHub.

“NanoMap: Fast, Uncertainty-Aware Proximity Queries with Lazy Search over Local 3D Data,” by Peter R. Florence, John Carter, Jake Ware, and Russ Tedrake from MIT CSAIL, will be presented at ICRA 2018 in Brisbane, Australia.

* Technically, these are called “frustums,” or pyramids with the tops chopped off. Frustum is an awesome word that can refer to other decapitated shapes as well, like lampshades, which are frustum-ized cones. 
