A version of this article was originally published on Medium. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

We here at Skydio have been developing and deploying machine learning systems for years due to their ability to scale and improve with data. However, to date our learning systems have only been used for interpreting information about the world; in this post, we present our first machine learning system for actually acting in the world.

Using a novel learning algorithm, the Skydio autonomy engine, and only 3 hours of “off-policy” logged data, we trained a deep neural network pilot that is capable of filming and tracking a subject while avoiding obstacles.

We approached the problem of training a deep neural network pilot through the lens of imitation learning, in which the goal is to train a model that imitates an expert. Imitation learning was an appealing approach for us because we have a huge trove of flight data with an excellent drone pilot—the motion planner inside the Skydio autonomy engine. However, we quickly found that standard imitation learning performed poorly when applied to our challenging, real-world problem domain.
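To make the baseline concrete: standard imitation learning (behavior cloning) reduces to regressing the expert's logged actions, penalizing every deviation equally. The sketch below is a minimal, hypothetical illustration; the function and data names are ours, not Skydio's, and real pilots regress from image and state inputs rather than toy vectors.

```python
import numpy as np

def behavior_cloning_loss(predicted_actions, expert_actions):
    """Mean squared error between the learner's actions and the expert's
    logged actions -- the standard imitation learning objective.
    Every deviation is penalized the same, whether or not it matters."""
    diff = np.asarray(predicted_actions) - np.asarray(expert_actions)
    return float(np.mean(diff ** 2))

# Toy example: two 3-D velocity commands compared against the expert's.
pred   = [[1.0, 0.0, 0.0], [0.5, 0.5, 0.0]]
expert = [[1.0, 0.0, 0.0], [0.0, 0.5, 0.0]]
loss = behavior_cloning_loss(pred, expert)
```

Note that the loss treats a half-meter-per-second error identically in every context, which is exactly the weakness discussed next.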

Standard imitation learning worked fine in easy scenarios, but did not generalize well to difficult ones. We believe this is because the expert’s trajectory alone is not a rich enough signal to learn from efficiently. In our domain of flying through the air, the exact choice of flight path is an especially weak signal, because many different obstacle-free paths can lead to equally cinematic video. Easy, average scenarios end up dominating the training signal.


How can we do better? Our insight is that we don’t have just any expert, we have a computational expert: the Skydio autonomy engine. Therefore, instead of only imitating what the expert does, we can learn what the expert cares about. We call this approach Computational Expert Imitation Learning, or CEILing.

Why is CEILing better than standard imitation learning? Let’s consider a didactic example in which a teacher is trying to teach a student how to do multiplication. The teacher is deciding between two possible lesson plans. The first lesson plan is to give the student a bunch of multiplication problems, along with the answer key, and leave the student alone to let them figure out how multiplication works. The second lesson plan is to let the student attempt to solve some multiplication problems, give the student feedback on the exact mistakes they made, and continue until the student has mastered the topic.

Which lesson plan should the teacher choose? The second lesson plan is likely to be more effective because the student not only learns the correct answer, but also learns why the answer is correct. This allows the student to be able to solve multiplication problems they have never encountered before.

Simulation trials of the deep neural pilot based on the Computational Expert Imitation Learning, or CEILing, approach developed by the authors. Gif: Skydio

This same insight applies to robot navigation: Some deviations from the expert should be penalized more heavily than others. For example, deviating from the expert is generally okay in open space, but a critical mistake if it is towards an obstacle or causes visual loss of the subject. CEILing lets us convey that information from the expert instead of blindly penalizing deviations from the expert’s trajectory. This is why CEILing trains a deep neural pilot that generalizes well with little data.
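One way to picture this is as a deviation loss weighted by how much the expert cares. In the hypothetical sketch below, the computational expert supplies a per-sample weight (the weighting scheme and all names are our illustration, not Skydio's published method): large near obstacles or when the subject might leave the frame, small in open space.

```python
import numpy as np

def ceiling_style_loss(predicted_actions, expert_actions, expert_weights):
    """Weighted deviation loss: the computational expert scores how much
    a deviation matters in each context, rather than penalizing all
    deviations from its trajectory equally."""
    diff = np.asarray(predicted_actions) - np.asarray(expert_actions)
    w = np.asarray(expert_weights)[:, None]  # broadcast over action dims
    return float(np.mean(w * diff ** 2))

# The same 0.5 m/s deviation in two different contexts:
pred   = np.array([[0.5, 0.0, 0.0], [0.5, 0.0, 0.0]])
expert = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
open_space    = ceiling_style_loss(pred, expert, [0.1, 0.1])   # benign drift
near_obstacle = ceiling_style_loss(pred, expert, [10.0, 10.0]) # critical drift
```

An identical deviation thus contributes far more to the loss when the expert flags it as dangerous, which is what lets the student learn *why* an action is wrong, not just that it differs.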

A natural question is: why use CEILing to train a deep neural pilot at all? Why not just let the computational expert be the pilot? The primary reason we are excited about CEILing is that it could train a pilot that is actually better than the computational expert.

How is this possible? Consider a scenario in which a drone needs to fly through a forest at high speed. This is a challenging environment because thin objects, such as tree branches, are difficult to see from far away. Although the current Skydio autonomy engine is able to perceive and avoid these thin branches, sometimes they can be detected only when the drone is already quite close, which forces the drone to execute an aggressive maneuver. In contrast, our deep neural pilot may be able to smoothly avoid these thin branches altogether, because it will have learned that flying towards trees, which are large and easily seen, is correlated with encountering thin branches. In short, CEILing can leverage acausal (future) data, which enables it to “see” farther into the future and therefore train an even smarter pilot.
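Because training happens on logged flights, labels can look backward from the future: if a hazard shows up later in the log, the earlier timesteps that led toward it can be marked as risky. The sketch below is a toy illustration of that acausal labeling idea; the function, the 0/1 event encoding, and the `horizon` parameter are all our assumptions, not Skydio's actual pipeline.

```python
def hindsight_risk_labels(events, horizon):
    """Replay a logged flight and mark each timestep as risky (1) if a
    hazard -- e.g., a late-detected branch -- appears within the next
    `horizon` timesteps. Only possible offline, where the future of
    each logged moment is already known."""
    n = len(events)
    labels = [0] * n
    for t in range(n):
        # Look ahead from t through t + horizon, inclusive.
        if any(events[t:t + horizon + 1]):
            labels[t] = 1
    return labels

# A branch is detected at timestep 4; with horizon=2, timesteps 2-4
# are labeled risky, so the learner can act before the detection.
log = [0, 0, 0, 0, 1, 0]
labels = hindsight_risk_labels(log, horizon=2)
```

A causal, online planner never sees these labels for the current moment; an offline learner trained on them can act earlier than the expert that generated the data.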

Although there is still much work to be done before the learned system will outperform our production system, we believe in pursuing leapfrog technologies. Deep reinforcement learning techniques promise to let us improve our entire system in a data-driven way, which will lead to an even smarter autonomous flying camera.

Gregory Kahn is a Ph.D. student at UC Berkeley advised by Sergey Levine and Pieter Abbeel, and was a research intern at Skydio in Spring 2019.

Abraham Bachrach is co-founder and CTO at Skydio.

Hayk Martiros leads the autonomy team at Skydio, whose work focuses on robust approaches to vision-based autonomous flight.
