How to Fly a Drone With Your Face

Send your drone flying by making a ridiculous face at it


Image: Simon Fraser University

It’s nice that consumer drones are getting easier and easier to use, incorporating more safeguards and autonomy and stuff. Generally, though, piloting them does still require some practice and skill, along with free hands and a controller that’s probably more expensive than it should be. This is why we’ve been seeing more research on getting drones set up so that unaltered, uninstrumented, and almost entirely untrained users can still do useful things with them.

At Simon Fraser University, roboticists are seeing how far they can push this idea, and they’ve come up with a system for controlling a drone that doesn’t require experience, or a controller. Or even hands. Instead, you use your face, and it’s totally intuitive and natural. As long as it’s intuitive and natural for you to make funny faces at drones, anyway.

Here is how to control a drone with your face in Canada:

Neutral faces (above) and trigger faces (below). Image: Simon Fraser University

Ready: The user’s identity and facial expressions are learned and input is provided through touch-based interaction. Hold the drone at eye level, gaze deeply into its camera, and give it your best neutral look. Hold this neutral look until the drone is satisfied that you are consistently neutral. This should take less than a minute, unless you get the giggles. Next, rotate the drone so that it’s sideways, and make a “trigger” face that’s distinct from your neutral face. If you’re super boring, you can make a trigger face by just covering one eye, but come on, you’re better than that.
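The paper doesn’t come with code, but here’s a rough sketch of what that calibration might look like, using an off-the-shelf OpenCV face detector and a crude histogram distance as stand-ins for whatever face model the researchers actually used. The detector, features, and threshold below are all my own assumptions, not theirs:

```python
# Hypothetical sketch of the "Ready" step: learn a neutral face, then flag
# anything that strays far from it as a "trigger" face. The Haar cascade and
# histogram feature are placeholders for the paper's actual face model.
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_signature(frame):
    """Normalized grayscale histogram of the largest face: a crude expression feature."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detected face
    roi = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
    hist = cv2.calcHist([roi], [0], None, [32], [0, 256]).ravel()
    return hist / (hist.sum() + 1e-9)

def learn_neutral(camera, n_samples=60):
    """Average the signature over enough frames to pin down 'consistently neutral.'"""
    samples = []
    while len(samples) < n_samples:
        ok, frame = camera.read()
        sig = face_signature(frame) if ok else None
        if sig is not None:
            samples.append(sig)
    return np.mean(samples, axis=0)

def is_trigger(frame, neutral, threshold=0.25):
    """A face counts as a trigger once it deviates far enough from neutral."""
    sig = face_signature(frame)
    return sig is not None and np.linalg.norm(sig - neutral) > threshold
```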

Aim: The robot starts flying and keeps its user centered in its camera view, while the user lines up the trajectory and chooses its power by “drawing back,” analogous to firing a bow or slingshot. Place the drone on the ground in front of you, and it’ll take off and hover menacingly in front of you. Try to move from side to side to escape, and the drone will remorselessly yaw to keep you in view. Once you have it pointed exactly the wrong way, back away slowly and imagine that there’s a rubber band between you and the drone that’s getting stretched more and more.
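For the curious, here’s an equally hand-wavy sketch of that Aim loop, reusing the face bounding box from the snippet above. Keeping you centered is just a proportional controller on the face’s horizontal offset, and the slingshot “draw” grows as your face shrinks in the frame, which is to say, as you back away. The gain and the reference face width are invented for illustration:

```python
# One aiming step, given the (x, y, w, h) face box from the detector above.
# k_yaw and ref_face_width are made-up constants, not the paper's.
def aim_step(face_box, frame_width, ref_face_width=120.0, k_yaw=0.002):
    x, y, w, h = face_box
    face_center_x = x + w / 2.0
    # Yaw toward the user so they stay centered in the camera view.
    yaw_rate = k_yaw * (face_center_x - frame_width / 2.0)
    # The face looks smaller as the user backs away; treat the shrinkage
    # as the slingshot "draw" that will set the launch power.
    draw = max(0.0, ref_face_width / w - 1.0)
    return yaw_rate, draw
```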

Fly: The user signals to the robot to begin a preset parameterized trajectory. The robot executes the trajectory with parameters observed at the end of the Aim phase. When the drone is facing in the direction you don’t want it to go and you think you’re far enough away, make your trigger face, and the drone will fly off backwards (directly away from you) on a ballistic trajectory, the strength of which is set by how far away from the drone you are when you make the face, rather like a slingshot.
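One plausible reading of “ballistic” (and it’s only my reading; the paper’s actual parameterization isn’t spelled out here) is a 45-degree projectile-style arc whose launch speed scales with the draw from the Aim phase. In sketch form, with drone.send_velocity and drone.hover as placeholder methods rather than the Bebop’s real API:

```python
# Hypothetical "Fly" phase: launch away from the user on a projectile-style
# arc. The 45-degree launch and k_power gain are assumptions for illustration.
import math
import time

def fly_slingshot(drone, draw, k_power=3.0, g=9.81, dt=0.05):
    v0 = k_power * draw                # launch speed scales with the draw
    vx = vz0 = v0 / math.sqrt(2.0)     # 45-degree launch components
    t, t_end = 0.0, 2.0 * vz0 / g      # time to fall back to launch height
    while t < t_end:
        drone.send_velocity(forward=-vx,           # negative: directly away from the user
                            vertical=vz0 - g * t)  # gravity-shaped vertical speed
        time.sleep(dt)
        t += dt
    drone.hover()                      # settle once the arc is done
```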

Besides the kind of “slingshot” ballistic trajectory shown in the video, the drone can also be commanded to fly a “beam” trajectory, where it travels in a straight line, or a “boomerang” trajectory, where it flies out and makes a circle before (hopefully) coming back again. The drone used here was a Parrot Bebop, lightly modified with an LED strip to provide visual feedback. The vision processing was done offboard, but the researchers say there’s no reason it has to be, since it doesn’t require much computing power.
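Guessing again at how those might be parameterized (the shapes below just match the descriptions, on the same placeholder drone interface, now with a yaw-rate argument for the circle):

```python
# Hypothetical "beam" and "boomerang" trajectories. All constants are
# invented; a constant turn rate at constant speed traces out the circle.
import math
import time

def fly_beam(drone, draw, k_power=3.0, duration=4.0, dt=0.05):
    """'Beam': a straight line away from the user at a draw-scaled speed."""
    t = 0.0
    while t < duration:
        drone.send_velocity(forward=-k_power * draw, vertical=0.0, yaw=0.0)
        time.sleep(dt)
        t += dt
    drone.hover()

def fly_boomerang(drone, draw, k_radius=2.0, speed=2.0, dt=0.05):
    """'Boomerang': a full circle whose radius scales with the draw."""
    radius = max(0.5, k_radius * draw)  # avoid a degenerate zero-radius loop
    yaw_rate = speed / radius           # constant turn + constant speed = circle
    t, t_end = 0.0, 2.0 * math.pi * radius / speed
    while t < t_end:
        drone.send_velocity(forward=-speed, vertical=0.0, yaw=yaw_rate)
        time.sleep(dt)
        t += dt
    drone.hover()
```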

In tests, this technique for controlling a drone works surprisingly well. The researchers had users try to send the drone through a 0.8-meter-diameter hoop located 8 meters away. Most people managed to get the drone within about a meter of the hoop most of the time, although the researchers point out that “the robot did not fly perfectly straight due to the inevitable errors of real-world robotics.” Damn that pesky real world and all its realness!

And finally, here are some conclusions, in no particular order, and lifted straight from the paper because I can’t possibly improve on them:

While the demonstrations in the paper have sent the robot on flights of 45 meters outdoors, these interactions scale to hundreds of meters without modification. If the UAV was able to visually servo to a target of interest after reaching the peak of its trajectory (for example another person, as described in another paper under review) we might be able to “throw” the UAV from one person to another over a kilometer or more... The long term goal of this work is to enable people to interact with robots and AIs as we now interact with people and trained animals, just as long imagined in science fiction… Finally, and informally, we assert that using the robot in this way is fun, so this interaction could have applications in entertainment.

I love that last assertion, because it’s too often overlooked in research—no matter what you’re working on, don’t forget how much fun robots are.

“Ready—Aim—Fly! Hands-Free Face-Based HRI for 3D Trajectory Control of UAVs,” by Jake Bruce, Jacob Perron, and Richard Vaughan from Simon Fraser University in Canada, was presented at the IEEE Canadian Conference on Computer and Robot Vision.
