In a crisis control center in Montelibretti, Italy, several teams of firefighters used laptops to guide a robotic ground vehicle into a smoke-filled highway tunnel. Inside, overturned motorcycles, errant cars, and spilled pallets impeded the robot’s progress. The rover, equipped with a video camera and autonomous navigation software, was capable of crawling through the wreckage unguided while humans monitored the video footage for accident victims. But most of the time the firefighters took manual control once the robot was a few meters into the tunnel.
Although the search was just an experiment, microphones recorded clear signs of stress during several tests of the scenario: The firefighter driving the rover spoke at a higher pitch, and members of some teams also interfered with one another’s radio transmissions. And while the human drivers may have improved the robot’s performance, they should have been more focused on the search for victims, says artificial-intelligence expert Geert-Jan Kruijff of the German Research Center for Artificial Intelligence, in Saarbrücken, who consulted on the experiment. The drivers were micromanaging their robots.
The same thing has already happened in the real world: After the Fukushima nuclear power station’s meltdown in 2011, a human driver refused to use a ground robot’s autonomous navigation and managed to get the rover tangled up in its own network cable. At a disaster scene with lives on the line, human rescuers need to learn to trust their robotic teammates or they’ll have their own meltdowns, says Kruijff.
“We’ve done a lot of work on autonomy,” Kruijff says, referring to robots’ ability to navigate, “but if the user doesn’t use it, what good is it?” He figures that rescue robots will need to better understand their human teammates and communicate in a more sophisticated way.
The first step is to gather data that can help predict when the robots’ human handlers are overwhelmed. That’s not easy, Kruijff and his colleagues say, and the team is considering different ways a robot can measure stress and attention in its handler. Just as the robots build three-dimensional maps of the physical environments they explore, their software must also build real-time maps of the psychological bottlenecks their human partners face during a rescue mission. Certain things, such as voice-pitch changes, are easy to measure unobtrusively. But strapping cuffs on firefighters to measure their blood pressure or using saliva swabs to measure their cortisol levels would be more trouble than it’s worth.
The next step is to have the robot decide what and how best to communicate. A driver straining to interpret video of a rubble field might be less likely to ignore an audible warning about a nearby victim than a pop-up message on the screen, Kruijff suggests. His team, part of an international consortium that focuses on improving human-robot cooperation in dynamic environments (NIFTi), is considering various configurations of alerts.
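The kind of alert routing Kruijff suggests could be sketched as a simple rule: when the operator’s eyes are occupied with the video feed, speak up instead of popping up a message. The load and priority scores and their cutoffs below are hypothetical inputs, not anything the consortium has published.

```python
# Hypothetical sketch of alert-channel selection for a robot operator.
# visual_load and priority are assumed 0.0-1.0 estimates supplied by
# other parts of the system (e.g., the stress/attention model).

def choose_alert_channel(visual_load, priority):
    """Pick how to deliver an alert to the operator."""
    if priority >= 0.8:
        return "audio"   # urgent (e.g., possible victim): always interrupt audibly
    if visual_load > 0.6:
        return "audio"   # operator straining at the video feed: avoid pop-ups
    return "popup"       # otherwise an on-screen message suffices
```

A real system would likely blend channels (tone plus icon) and adapt the thresholds to each operator, but even this crude policy captures the point: the robot, not the human, should decide when a pop-up would go unseen.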
Robots also need to better communicate their own abilities and intentions, says rescue robot researcher Robin Murphy of Texas A&M University, in College Station. At a chemical train wreck where her team tested an autonomous helicopter, a rescuer found that the robot bounced too much to take photographs when it first reached an assigned waypoint. The pilot’s attempts to compensate led to more bouncing. In their next iteration of the control software, she explains, they included an icon on the rescuer’s screen indicating when the helicopter’s autopilot was correcting for gusts of wind before settling into place.
Murphy says that one of the strengths of the NIFTi approach is that it has created a series of tests with working firefighters in Italy and Germany. “Too much of our work in robotics has been in the lab,” she says. “Rescue robotics...doesn’t lend itself to reductionism....You’ve got to be in the field with the users, see the robot as a joint cognitive system, and then find out what the scientific problems are.”