A vision of fully autonomous, self-driving cars that let their human owners nap or read en route seems to come straight from the future. But David Mindell, a historian and electrical engineer at MIT, argues that the idea of such fully autonomous vehicles roaming the streets represents a rigid vision left over from the last century. Mindell casts doubt on the current course of Google and other big tech companies racing to build self-driving cars that require no human supervision.
In his new book, released this month, titled “Our Robots, Ourselves: Robotics and the Myths of Autonomy” (Viking/Penguin), Mindell envisions a future in which humans are kept in the loop for (mostly) self-driving cars and other robotic technologies, rather than taken completely out of the equation. To back up his argument, he points to historical examples such as the U.S.-Soviet space race, remotely operated submersibles, autopilot systems in commercial aviation, and the rise of drones.
But Mindell is no armchair historian. He draws in part upon his own experiences in developing and piloting remotely operated and autonomous underwater vehicles used to explore the undersea sites of ancient Greek shipwrecks, the ill-fated passenger liner Lusitania that was sunk by a German U-boat during World War I, and the submerged graveyards of World War II battleships. He is also a licensed civil aviation pilot who has logged more than 1,000 hours of flying time. Among his many titles, he is an IEEE Senior Member.
This interview has been edited and condensed for clarity.
IEEE Spectrum: You say that today’s drive for fully autonomous vehicles represents a 20th century narrative. Why is that?
David Mindell: I argue that full autonomy is an old idea. The real frontier is collaboration, which includes autonomy, but different levels of autonomy at different moments, under the control of a human operator. There is no natural kind of progress; it’s up to us to choose the kinds of progress we want to make in technology. I think this [collaborative] progress is both more productive and more humane.
The robot is a 20th-century, labor automation idea. We have plenty of robots, but they’re not freestanding, fully autonomous workers. The U.S. Department of Defense put out a report a couple of years ago saying that there are no fully autonomous systems, just like there are no fully autonomous soldiers, sailors and Marines. Everything is embedded in relationships between humans and technology.
Spectrum: What do you think of all the tech companies and automakers that have invested heavily in the idea of fully autonomous self-driving cars?
Mindell: Most of the automobile companies are not pushing for full autonomy. Most of them are pretty realistic about building up automated features and still letting drivers manage them. It’s not an easy problem to solve, but it’s a worthwhile problem. Whereas going to sleep in the trunk [of a self-driving car] is maybe not the way to think about it. Why not use technology to engage people more deeply in the world rather than cocoon them?
Spectrum: There may be some people who like sitting in the cocoon rather than being more engaged as they drive. What about them?
Mindell: I think you’ll see fully autonomous vehicles in niche applications such as Disney World, college campuses, military bases, or senior citizen centers—places with well-controlled conditions and environments that are not changing much. But everything we know about dangerous machines under the control of complex software systems says we still want people there to mitigate the risks to human life.
In a sense, full autonomy is the idea that engineers have understood the [driving] task and environment completely before the trip begins. We know people on the front line see things in the environment that are difficult to foresee. Why not allow for their input?
Spectrum: How about the supposed safety benefits of fully autonomous, self-driving cars?
Mindell: Who has demonstrated a fully autonomous car that is safer? Multiply any tech system by the overall scale of automotive use in this country, or across the world, and even minute flaws get magnified into thousands of deaths. The whole reason to pursue new technology should be to keep making driving safer. There’s no evidence that taking human judgment out of the loop is going to make it safer.
Do people make mistakes? Yes. Stupid mistakes? Yes. But people are also making small corrections and improving the system by reacting to small failures and small uncertainties and unpredictable things, including other people. It would be crazy to get rid of the risk mitigation factors that people provide. Those are not dramatic examples because they don’t prevent accidents in an obvious way. But we’re a long way from a [fully autonomous] software system that can manage that.
Spectrum: What about historical examples of how autonomous systems performed compared with semi-autonomous systems?
Mindell: The Soviets, in the ’60s and ’70s, had spacecraft that were more automated than those operated by NASA—mainly because they had less-advanced [analog computer] technology. NASA’s Apollo moon program had digital, fly-by-wire computers and software. But those advanced technologies enabled NASA to have a better, more nuanced inclusion of astronauts in the loop rather than automating them out. You see time and again that the most advanced technology is the most flexible.
Spectrum: That’s very interesting. Many people tend to think that full autonomy represents the most advanced technology.
Mindell: This is part of the thesis of my book. We shouldn’t assume the most automated technology is the most advanced. Want to build an airplane that can take off, get around weather and land by itself? We solved that 20 years ago. But doing it in the social context of taking off into the same crowded airspace that others are using, flying over people’s heads and landing at a busy airport? We’ve barely begun to solve that problem. Time and time again, for most autonomous systems to be really valuable and useful and economically viable, they need to operate in close proximity to human systems.
Spectrum: What do you think of the current focus of Google and other tech companies pursuing self-driving cars?
Mindell: Overall, robotics is still focused on full autonomy as the ultimate goal. Researchers should be working on a “perfect five” with trusted, transparent, flexible collaboration between people and autonomous systems. (The “perfect five” refers to the middle of a scale for automation that ranges from very low at level 1 to fully autonomous at 10; the concept is based on the work of Tom Sheridan, professor of mechanical engineering at MIT.)
Such systems should have the ability to turn on autonomy when it can be helpful. Autonomy can reduce human workload and fatigue, but humans should still be present in the system. That’s an empirical argument based on everything we’ve seen in the last 40 years of autonomous systems. People are always thinking that full autonomy is just around the corner. But there are 30 to 40 examples in the book, and in every one, autonomy gets tempered by human judgment and experience.
Spectrum: You’ve said that the best way forward involves a mix of humans, remotely controlled systems, and autonomous robots. Do you think the future you’re hoping for is the one we’re likely to see?
Mindell: I’m hoping the likely future is the one I’m arguing for. There is a quote in the book from the chief of BMW saying, “People buy our cars because people like driving them; we’d be crazy to cut them out of the loop.” I think the world is ready for a more nuanced approach to robotics.