Why You Should Be Glad That Quadrotors Have Learned to Dodge Swords

A Stanford roboticist (and fencer) discusses drones, swords, and why mixing them is such a great idea

Image: Stanford ASL

When that quadrotor fencing video showed up everywhere last month, we asked Ross Allen, the Stanford PhD candidate (and fencer) responsible for the research, if he’d be willing to talk to us about it. He said sure, except his thesis defense was that Friday, so would we mind waiting a bit? It’s been a bit, and after a successful defense, Dr. Allen is somehow not sick and tired of robots and answered a bunch of our questions about quadrotors, swords, and why mixing them is such a great idea.

Here’s the video that you (and a couple hundred thousand other people) probably saw a few weeks ago:

The swordplay is cool, but even cooler is the fact that this is the first demonstration of truly “real-time kinodynamic planning” on a quadrotor system navigating an obstructed environment. Or at least, that’s what this recent paper from Allen (along with his colleague Marco Pavone) at Stanford’s Autonomous Systems Laboratory says. We asked him to explain this to us in a way that we might have a shot at understanding, and he did such a fantastic job that we’re just going to quote him:

Think of a maze, like a pen-and-paper maze that you would solve as a kid. Solving this type of maze is solving a very basic path planning problem: navigating a point (the tip of your pen) through obstacles (the maze "walls") to an objective (the exit of the maze). You could make this problem more complicated by imagining a 3D maze. An example of such a problem would be moving a piano through a cluttered apartment without bumping it into other furniture or slamming it into walls. You could imagine coming up with a continuous path for the piano, involving translations and rotations, that could navigate the piano through the apartment.
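To make that concrete, here's a quick aside from us (our illustration, not Allen's code): the pen-and-paper maze version of the problem is simple enough that a few lines of breadth-first search solve it, precisely because there are no constraints on how the "pen tip" is allowed to move.

```python
from collections import deque

# Toy version of the pen-and-paper maze: path planning with no kinodynamic
# constraints. The "pen tip" may step to any free neighboring cell, so plain
# breadth-first search finds a shortest path.
MAZE = [
    "S.#.",
    ".#..",
    "...#",
    "#..G",
]

def solve(maze):
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if maze[r][c] == "S")
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if maze[r][c] == "G":
            return path  # list of cells from S to G
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # maze has no exit

print(solve(MAZE))
```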

Now let’s make the problem a bit harder. Imagine your planning problem is to drive a car through an urban environment with buildings, cars, and pedestrians and reach an objective in as short a time as possible. This isn’t really the same as moving a piano, because I can’t arbitrarily translate and rotate my car. The car has to follow its steering path; I can’t "slide" my car sideways. If I want to stop my car, I have to hit the brakes, and it takes time to slow down. If I take a corner too fast, I’ll skid out and lose control. You can see this is still a planning problem (navigating some object, or robot, to a goal while avoiding obstacles), but now there are additional constraints on HOW my agent can move. We term these constraints "kinodynamic" constraints: some are constraints on kinematics, such as the steering, and some are constraints based on the dynamics, such as the braking to slow down. When solving these types of problems, we are solving "kinodynamic planning problems."
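Here is a minimal toy sketch of those constraints (again ours, not code from the Stanford ASL paper): a kinematic bicycle model in which the car can only steer within a limit and accelerate or brake within limits, so it can neither slide sideways nor stop instantly. All parameter values are made up for the example.

```python
import math

# Kinematic bicycle model: a toy illustration of "kinodynamic" constraints.
MAX_STEER = math.radians(30)  # steering limit: a kinematic constraint
MAX_ACCEL = 3.0               # m/s^2 throttle limit: a dynamic constraint
MAX_BRAKE = -8.0              # m/s^2 braking limit: slowing down takes time
WHEELBASE = 2.7               # meters; all values here are assumptions

def step(state, steer, accel, dt=0.05):
    """Advance (x, y, heading, speed) one timestep under the constraints."""
    x, y, theta, v = state
    steer = max(-MAX_STEER, min(MAX_STEER, steer))  # clamp steering command
    accel = max(MAX_BRAKE, min(MAX_ACCEL, accel))   # clamp throttle/brake
    x += v * math.cos(theta) * dt   # motion is always along the heading:
    y += v * math.sin(theta) * dt   # no sideways "sliding" is possible
    theta += (v / WHEELBASE) * math.tan(steer) * dt  # turn rate tied to speed
    v = max(0.0, v + accel * dt)    # speed changes gradually, never instantly
    return (x, y, theta, v)
```

A planner for this car has to search over sequences of steer and accel commands rather than over freely chosen positions, which is what makes kinodynamic planning harder than the piano problem.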

What our research has done is to develop a framework for solving such kinodynamic problems that drastically reduces the computation that has to be executed during operation of the robot. We’ve demonstrated this framework on a quadrotor system navigating an indoor environment with dynamic, even adversarial obstacles. This is arguably the first demonstration of real-time planning for a quadrotor, which is a kinodynamic system.
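The key to making this fast enough is moving the expensive work offline. The sketch below is our rough paraphrase of that general offline/online pattern, not the authors' implementation (their paper builds on a kinodynamic variant of the FMT* sampling-based planner): precompute a roadmap of sampled states and local steering costs ahead of time, then at runtime prune the edges that currently collide and search the remaining graph.

```python
import heapq
import random

# Offline: sample many states and cache the cost of steering between nearby
# pairs, which is the expensive part. Online: skip edges that currently
# collide and run a fast graph search over what remains.

def steering_cost(a, b):
    # Stand-in for an expensive two-point boundary-value solve; here it is
    # just squared distance between 2D states.
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def build_roadmap(n=500, radius2=0.02, seed=0):
    """Offline precomputation: sampled states plus cached local edge costs."""
    rng = random.Random(seed)
    nodes = [(rng.random(), rng.random()) for _ in range(n)]
    edges = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            c = steering_cost(nodes[i], nodes[j])
            if c < radius2:  # only connect nearby states
                edges[i].append((j, c))
                edges[j].append((i, c))
    return nodes, edges

def query(nodes, edges, start, goal, in_collision):
    """Online query: Dijkstra over the cached edges, skipping collisions."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:  # reconstruct the path back to the start
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, c in edges[u]:
            if in_collision(nodes[u], nodes[v]):  # cheap runtime check
                continue
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, u
                heapq.heappush(pq, (d + c, v))
    return None  # goal currently unreachable
```

Because the expensive steering computations are cached ahead of time, the online query is cheap enough to rerun every time an obstacle (or a sword) moves.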

The video and GIF that got so much attention really don’t display the full capabilities of our research; they just give a flashy, easy-to-understand demonstration. The video below shows more of the full capabilities: the quadrotor tries to navigate around a wall, but I continue to block its path. It keeps recomputing paths until the optimal path shifts to the other side of the wall and it navigates away from me:
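That keep-recomputing-until-the-best-route-flips behavior is, in essence, a replanning loop. Here is a structural sketch of it; the plan, blocked, execute_step, and goal_reached callables are hypothetical stand-ins (plan could be a roadmap query like the one sketched above).

```python
import time

# Sketch of the replanning loop described above: keep flying the current
# best path, and recompute whenever a moving obstacle invalidates it.
def replan_loop(plan, blocked, execute_step, goal_reached, hz=20):
    path = plan()
    while not goal_reached():
        if path is None or blocked(path):
            path = plan()  # an obstacle moved: recompute the optimal path,
                           # which may now flip to the other side of a wall
        if path:
            path = execute_step(path)  # fly the next segment, keep the rest
        time.sleep(1.0 / hz)           # fixed-rate control loop
```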

The example here shows “kinodynamic planning” applied to a quadrotor, but the framework is general and can be applied to other systems. Allen suggests that shipping and distribution infrastructure are among the applications that could benefit:

You could imagine an autonomous cargo ship navigating through a busy port to an autonomous crane that safely unloads containers onto awaiting autonomous trucks for delivery. Each step in this infrastructure is faced with its own unique planning problem (the ship avoiding other marine craft, the crane moving large suspended loads around other containers, trucks driving on busy roadways), yet each planning problem can be approached with the same framework, just slightly tailored to the specific application (as it is for the quadrotor system).

And of course, you can add delivery drones and camera drones to the list of applications you can imagine, too: “I’m convinced that delivery drones are in the imminent future and that they will have an immense impact on consumer purchasing,” Allen tells us, with the qualifier that “the planning and control of such delivery drones are only a portion of the hurdles that must be overcome; sensing and policy being the other hurdles.” Those are some pretty Big Hurdles, I think, but it’s refreshing to hear an optimistic perspective on consumer delivery drones from someone involved in such cutting-edge research.

From here, Allen says he’ll be pursuing opportunities in “autonomous cars, drones, and rockets, along with research positions.” Our vote would be for rockets, because rockets. There’s still plenty more interesting stuff to be done with this research, though, like throwing a bunch of quadrotors into the mix all at once and looking for emergent cooperative behaviors, and trying to get everything running onboard the drone itself, rather than relying on motion capture systems.

Finally, we asked Allen if there was anything he wanted to clear up after approximately one bajillion people on the Internet watched him fence with a quadrotor without any context whatsoever:

For one, this is not a precursor to a Terminator robot. While we use machine learning in our research, it is for the rather mundane-sounding task of “reachability analysis.” We’re not using machine learning to train the quadrotor how to fight or anything like that. Our robot won’t get any smarter without us programming it with better algorithms. In other words, it’s autonomous but it is not learning how to be more autonomous.
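For the curious, here's a rough illustration of what machine learning for “reachability analysis” can mean in this setting (our sketch, with a made-up cost function and a generic off-the-shelf classifier, not the authors' implementation): learn to predict whether one state can reach another within a cost budget, so the planner runs the expensive exact computation only on promising pairs of states.

```python
import random
from sklearn.neighbors import KNeighborsClassifier

def true_cost(a, b):
    # Stand-in for an expensive optimal-steering solve between two states.
    return sum((x - y) ** 2 for x, y in zip(a, b))

BUDGET = 0.1
rng = random.Random(0)
pairs = [([rng.random() for _ in range(4)], [rng.random() for _ in range(4)])
         for _ in range(2000)]
X = [a + b for a, b in pairs]                     # feature: the state pair
y = [true_cost(a, b) < BUDGET for a, b in pairs]  # label: reachable or not

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# At planning time, the fast prediction filters candidate neighbors; the
# exact solver runs only on pairs the classifier flags as reachable.
a, b = [0.10] * 4, [0.15] * 4
print(clf.predict([a + b])[0])  # expect True: these states are close
```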

Always good to hear that the very latest in robotic research has not resulted in the Terminator.

Yet.

[ Stanford ASL ]

Many thanks to Dr. Ross Allen for speaking with us.
