
Autonomous Racing Drones Dodge Through Forests at 40 kph

Training in simulation gives these drones impressive flying skills

Animation of a colorful quadrotor drone flying around a tree in a snow-covered landscape

It seems inevitable that sooner or later, the performance of autonomous drones will surpass that of even the best human pilots. Usually, things in robotics that seem inevitable happen later rather than sooner, but drone technology seems to be the exception. We've seen an astonishing amount of progress over the past few years, to the point where sophisticated autonomy has made it into the hands of consumers at an affordable price.

The cutting edge of drone research right now is putting drones with relatively simple onboard sensing and computing in situations that require fast and highly aggressive maneuvers. In a paper published yesterday in Science Robotics, roboticists from Davide Scaramuzza's Robotics and Perception Group at the University of Zurich, along with partners at Intel, demonstrate a small, self-contained, fully autonomous drone that can aggressively fly through complex environments at speeds of up to 40 kph.


The trick here, to the extent that there's a trick, is that the drone performs a direct mapping of sensor input (from an Intel RealSense D435 stereo depth camera) to collision-free trajectories. Conventional obstacle avoidance involves first collecting sensor data, then building a map from that data, and finally making a plan based on that map. This approach works perfectly well as long as you're not concerned with getting all of that done quickly, but for a drone with limited onboard resources moving at high speed, it just takes too long. UZH's approach instead goes straight from sensor input to trajectory output, which is much faster and allows the speed of the drone to increase substantially.
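For a sense of what "straight from sensor input to trajectory output" can look like in practice, here's a minimal PyTorch sketch of such a network. The layer sizes, the 9-dimensional state vector, and the waypoint parameterization are illustrative assumptions on my part, not the architecture described in the paper.

```python
import torch
import torch.nn as nn

class SensorToTrajectory(nn.Module):
    """Maps a depth image and drone state straight to waypoints,
    with no intermediate map-building or planning step."""

    def __init__(self, num_waypoints: int = 10, state_dim: int = 9):
        super().__init__()
        self.num_waypoints = num_waypoints
        # Convolutional encoder for the stereo depth image.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse image features with the drone's state (e.g. velocity,
        # attitude, goal direction) and regress x/y/z waypoints directly.
        self.head = nn.Sequential(
            nn.Linear(64 + state_dim, 128), nn.ReLU(),
            nn.Linear(128, num_waypoints * 3),
        )

    def forward(self, depth: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        features = self.encoder(depth)
        out = self.head(torch.cat([features, state], dim=-1))
        return out.view(-1, self.num_waypoints, 3)

# One forward pass: depth image and state in, short trajectory out.
# No map is ever built, which is what makes the loop fast.
policy = SensorToTrajectory()
waypoints = policy(torch.rand(1, 1, 270, 480), torch.rand(1, 9))
print(waypoints.shape)  # torch.Size([1, 10, 3])
```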

The convolutional network that performs this sensor-to-trajectory mapping was trained entirely in simulation, which is cheaper and easier but (I would have to guess) less fun than letting actual drones hammer themselves against obstacles over and over until they figure things out. In simulation, an "expert" drone pilot with access to a complete 3D point cloud, perfect state estimation, and computation unconstrained by real-time requirements computes collision-free trajectories, privileges that are of course not achievable in real life. The end-to-end policy that will operate under real-life constraints then simply learns, in simulation, to match that expert as closely as possible, which is how you get expert-level performance in a form that can be taken out of simulation and transferred to a real drone without any adaptation or fine-tuning.
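The recipe in the paragraph above is a form of privileged imitation learning, and a toy version fits in a few lines. This sketch is my own construction, not the paper's code: a privileged "expert" that knows an obstacle's true position produces avoidance commands, and a student network that only ever sees noisy observations is trained to reproduce them.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def expert_command(true_obstacle: torch.Tensor) -> torch.Tensor:
    # Privileged expert: with perfect knowledge of the obstacle's
    # position, steer directly away from it.
    return -true_obstacle / true_obstacle.norm(dim=-1, keepdim=True)

# The student gets only a corrupted view of the world.
student = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(2000):
    true_obstacle = torch.randn(64, 2)                    # ground truth (simulation only)
    noisy_obs = true_obstacle + 0.1 * torch.randn(64, 2)  # what the drone perceives
    # Train the student to imitate the expert despite its noisy input.
    loss = ((student(noisy_obs) - expert_command(true_obstacle)) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the student never sees the privileged information during training, nothing about its inputs has to change when it moves from simulation to a real sensor.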

The other big part of this is making the sim-to-real transition, which can be problematic because simulation doesn't always do a great job of capturing everything in the real world that can screw with a robot. But this method turns out to be very robust against motion blur, sensor noise, and other perception artifacts. The drone has successfully navigated real-world environments including snowy terrain, derailed trains, ruins, thick vegetation, and collapsed buildings.
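One standard way to earn that kind of robustness is to corrupt the clean simulated sensor data during training, so the policy never learns to rely on perfect images. The augmentation below is a hedged sketch of that general idea; the noise level, dropout rate, and blur kernel are made-up values, not the authors' actual pipeline.

```python
import torch
import torch.nn.functional as F

def corrupt_depth(depth: torch.Tensor) -> torch.Tensor:
    """depth: (batch, 1, H, W) clean simulated depth, in meters."""
    noisy = depth + 0.05 * torch.randn_like(depth)   # additive sensor noise
    holes = torch.rand_like(depth) < 0.02            # random stereo dropouts
    noisy = noisy.masked_fill(holes, 0.0)            # invalid pixels read as 0
    kernel = torch.ones(1, 1, 1, 5) / 5.0            # 5-pixel horizontal average
    return F.conv2d(noisy, kernel, padding=(0, 2))   # crude motion blur

# Every training batch sees a differently corrupted version of the scene.
augmented = corrupt_depth(torch.rand(8, 1, 270, 480))
```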

"While humans require years to train, the AI, leveraging high-performance simulators, can reach comparable navigation abilities much faster, basically overnight." -Antonio Loquercio, UZH

This is not to say that the performance here is flawless—the system still has trouble with very low illumination conditions (because the cameras simply can't see), as well as similar vision challenges like dust, fog, glare, and transparent or reflective surfaces. The training also didn't include dynamic obstacles, although the researchers tell us that moving things shouldn't be a problem even now as long as their speed relative to the drone is negligible. Many of these problems could potentially be mitigated by using event cameras rather than traditional cameras, since faster sensors, especially ones tuned to detect motion, would be ideal for high speed drones.

A colorful quadrotor drone flies through a forest

The researchers tell us that their system does not (yet) surpass the performance of expert humans in these challenging environments:

Analyzing their performance indicates that humans have a very rich and detailed understanding of their surroundings and are capable of planning and executing plans that span far in the future (our approach plans only one second into the future). Both are capabilities that today's autonomous systems still lack. We see our work as a stepping stone towards faster autonomous flight that is enabled by directly predicting collision-free trajectories from high-dimensional (noisy) sensory input.

This is one of the things that is likely coming next, though—giving the drone the ability to learn and improve from real-world experience. Coupled with more capable sensors and ever-increasing computing power, pushing that flight envelope past 40 kph in complex environments seems not just possible, but inevitable.

