CMU Develops Autonomous Car Software That's Provably Safe
Autonomous cars behaving themselves during the DARPA Urban Challenge
It's one thing to ramble on (like we do) about how autonomous cars are way safer than human-driven cars, but it's another thing to prove it. Like, mathematically. A research group at Carnegie Mellon has created a distributed control system for autonomous highway driving and then verified that it's safe. In other words, the software itself provably cannot cause an accident.
To do this, the CMU group started with a simulation of just two cars (equipped with sensors and short-range inter-vehicle communications) in a single lane, and then proved that their software kept those cars from having an accident 100 percent of the time. With this as a base, they gradually expanded the simulation, adding layers like multiple cars and lane changes until they had an entire complex autonomous control system, each module of which is provably safe.
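To get a feel for what such a proof certifies, here's a toy Python sketch of a two-car safety invariant: even if the lead car brakes as hard as physically possible, the following car (after a short reaction delay) must be able to come to a stop before reaching the leader's final position. This is an illustrative simplification under assumed parameters (equal maximum braking, fixed reaction time), not CMU's actual verified model.

```python
def safe_to_follow(x_follower, v_follower, x_leader, v_leader,
                   max_brake=8.0, reaction=0.1):
    """Worst-case two-car check on a straight lane.

    x_*: positions along the lane in meters (follower behind leader)
    v_*: speeds in m/s
    max_brake: assumed maximum braking deceleration (m/s^2)
    reaction: assumed delay before the follower starts braking (s)

    Returns True if the follower's worst-case stopping point stays
    strictly behind the leader's worst-case stopping point.
    """
    # Follower coasts at constant speed during the reaction delay,
    # then brakes maximally: stopping distance = v^2 / (2*b).
    follower_stop = x_follower + v_follower * reaction \
        + v_follower**2 / (2 * max_brake)
    # Leader is assumed to brake maximally immediately.
    leader_stop = x_leader + v_leader**2 / (2 * max_brake)
    return follower_stop < leader_stop

# A 30 m gap at highway speed behind a slightly slower leader:
print(safe_to_follow(0.0, 30.0, 30.0, 25.0))  # → True
# The same follower right behind a stopped car:
print(safe_to_follow(0.0, 30.0, 5.0, 0.0))    # → False
```

The verified system proves that an invariant of this flavor is preserved by every control action the software can take; the sketch above only evaluates it for one snapshot in time.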
So far, the system can only deal with entering, exiting, speed changes, and lane changes on straight highways, so it's going to be of limited use unless you live in Kansas. It also depends on sensor technology that is only just starting to be introduced into vehicles, and I imagine that the "provably" bit starts to break down when dealing with unexpected situations, like a moose jumping off of an overpass onto the hood of your car. But it's a start, and a fundamental technique that can be built upon.
This approach also has the potential to streamline the introduction of autonomous cars from an insurance and legal standpoint, since it offers some degree of protection for manufacturers: if an accident occurs and the software provably cannot be at fault, that leaves either a sensor hardware failure or (more likely) a human simply pushing the wrong button.
Image credit: KWC