Lessons Learned from Observing 90 Untrained Participants Abusing a Flying Robot

No robots were harmed (much) during this research


Three participants controlling the drone (from left): crashing into a wall; subconsciously trying to steer the robot by leaning the upper body; and a successful landing.
Image: WSU/IRL Lab

We stole this headline from the title of a paper that was presented last month at the AI for Human-Robot Interaction symposium in Washington, D.C., because we couldn’t think of a better way to frame this research. To figure out how instructions given to humans can change the way that the humans perform a task, researchers from Washington State University’s Intelligent Robot Learning Laboratory gave people a drone, told them to fly it through an obstacle course, and then watched them do a terrible job.

The overall idea is this: the researchers wanted to see whether one set of instructions and information about how to perform a task would cause participants to perform that task better than another set of instructions might.

In this particular experiment, the WSU researchers, with collaborators from Elon University and the University of Maryland, Baltimore County, asked 90 people to fly a Parrot AR Drone through a simple obstacle course. Half of the people were told that the robot was “an expensive piece of lab equipment” that they should try to avoid damaging, while the other half were told that the robot was “an inexpensive toy robot” and that some damage was expected (both of these things are sort of true; the AR Drone sells for about US $300 and is used by both researchers and hobbyists). The hypothesis was that the group who thought the robot was expensive would complete the obstacle course more slowly, but also more accurately, while the group who thought the robot was a toy would fly faster and crash more.

The obstacle course itself looked like this:

[Image: the obstacle course]

Each participant was given a brief training session on the drone, which was configured to limit the maximum horizontal and vertical speed. After they got the controls figured out, they were told to complete the course as quickly as possible without hitting any of the obstacles, and got three attempts to do so. The researchers focused on the second attempt, figuring that the first run would mostly be a practice run and that most participants would be too comfortable by the third run to show much variation.

Somewhat surprisingly, the results showed no significant difference in completion time between participants who were told the drone was expensive and those who were told it was a toy: less than a second separated the two groups, at 29.8 seconds versus 30.7 seconds. In other words, knowing that the drone was expensive did not (statistically, and for this particular experiment) make it more likely that users would be careful and methodical when flying it. However, the researchers did notice some differences between the two groups:
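To see why a sub-second gap between group means fails to register as significant, here is a minimal sketch of the kind of comparison involved. The timing samples below are hypothetical numbers chosen to resemble the reported group averages (roughly 30 seconds each), not the study's actual data.

```python
# Hypothetical completion times (seconds) for the two instruction groups.
# These values are illustrative only, chosen to match the reported group
# means of about 29.8 s and 30.7 s -- not the study's real measurements.
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

expensive = [31.2, 28.5, 30.9, 29.4, 33.1, 27.8]  # told "expensive lab equipment"
toy       = [29.0, 31.5, 28.2, 30.3, 27.6, 32.1]  # told "inexpensive toy"

t = welch_t(expensive, toy)
# With means under a second apart and several seconds of spread within each
# group, |t| lands well below the ~2 threshold usually needed for
# significance, mirroring the study's null result.
```

In practice one would use a library routine such as SciPy's `ttest_ind` (with `equal_var=False` for the Welch variant) rather than computing the statistic by hand, but the arithmetic above shows why between-group spread swamps a 0.9-second difference in means.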

“Informal observation suggested that some participants aimed to complete the course as quickly as possible without worrying about damaging the robot, while others who were uncomfortable with flying the UAS focused more on not damaging it and worried less about time.

Another observation was that some participants did not pay attention to the instructions: they forgot the correct way to manipulate the interface for flying the UAS after practicing outside before the test. There was no correlation between instructions and perceived nervousness in the post-experiment survey. However, it should be noted that some participants gave explicit feedback to our instructions, saying that it was good to know that the UAS was a toy, or that they would try to avoid damaging it since it was expensive.”

Not very surprisingly, people who responded positively to a post-flight survey question about whether they were nervous took 25 percent longer to complete the course, and were more fun to watch while they did it:

“We observed that participants who visually appeared nervous tended to: 1) move the UAS forward a little and then hover it for a long time, 2) forget the correct way to control the UAS, and 3) vocalize louder (and more often) when hitting the obstacles.”

And least surprisingly of all, participants over the age of 30 took more than 10 seconds longer than participants under 30 (older subjects seemed more deliberate in trying to avoid making mistakes), and participants who said that they played more than 3 hours of video games a week were 9 seconds faster than those who did things like go outside for fun.

“One possible explanation could be that participants who played more video games treated the course as a game rather than a task, achieving higher performance through more excitement and a relaxed attitude. If true, this would motivate our future design of more comfortable and intuitive robot interfaces for helping people provide high quality demonstrations.”

The number of crashes, strangely, did not correlate with anything at all. It did, however, reinforce the fact that the AR Drone is a beast:

“We were also quite surprised that after 90 participants and multiple collisions and crashes, the single robot used for the experiments is still able to fly accurately—it is accurate to say that no robots were harmed (much) during these experiments.”

Good, I feel better now.

Next, the researchers are going to try altering the stress levels of the participants “by making disapproving vocalizations when the robot crashes, or saying calming phrases when the participant makes a mistake.” I volunteer to do the former. Also, they’d like to move away from self-reporting of stress, and instead use more direct measurements, like heart rate or galvanic skin response. Eventual practical applications could include a robot that can change its behavior depending on your stress, like by moving more slowly, or increasing its autonomy, if it senses its human pilot starting to get freaked out.

[ Paper | Poster ]
