
Cars May Think, But Will They Achieve Artificial Stupidity?

Would we even want a machine that emulated the typical driver's strengths—and weaknesses?


The late Herbert Simon, a pioneer in artificial intelligence, once fretted that every time a machine showed a hint of brain, people redefined brainpower. “They move the goalposts,” he said.
 
The same thing may well be happening with cars that think. Some people maintain that while autonomous cars may be good at handling routine problems on the highway, like lane-keeping and front-collision avoidance, they are far from being able to negotiate their way around pedestrians, cyclists and small dogs. They say such cars will be great at routine tasks but not at those that require tact or judgment.
 
Simon would have had none of it.
 
“My grandparents would have called a machine that can play chess intelligent,” he told me back in 1998, the year after IBM’s Deep Blue beat Garry Kasparov at chess. “They wouldn’t have said it was a parlor trick, or that chess didn’t really involve thinking.” 
 
Then he raised a point that today’s robocar critics should consider: our own weaknesses. “If machines could criticize us, they’d start with our ‘typical human mistakes,’” he said.
 
He was alluding to what chess masters once sneered at as “typical computer mistakes.” For instance, a program would be playing quite decently and then, unaccountably, give up a man. The reason: after looking a fixed number of moves ahead and counting up the men, it had concluded that it was about to lose one, when in fact it could have righted the balance on the very next move, one move beyond its “search horizon.” So, regarding the man as lost anyway, the program threw it away in the best way it could find.
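
To make the mechanism concrete, here is a minimal sketch in Python. The hand-built game tree is a hypothetical toy, not any real chess engine: a fixed-depth minimax that evaluates positions by counting material will pay a pawn simply to push an apparently lost knight past its search horizon.

```python
# Toy illustration of the horizon effect (hypothetical, not real chess).
# Each node is (material balance from White's view, {move name: child node}).
# White's knight is trapped: after "quiet" Black takes it but White wins the
# material straight back; "pawn-sac" merely delays the capture past the horizon.
tree = (0, {
    "pawn-sac": (-1, {                      # give up a pawn with check...
        "forced-reply": (-1, {
            "shuffle": (-1, {
                "takes-knight": (-4, {}),   # ...but the knight falls anyway
            }),
        }),
    }),
    "quiet": (0, {
        "takes-knight": (-3, {              # Black grabs the knight...
            "takes-back": (0, {             # ...White rights the balance
                "shuffle": (0, {}),
            }),
        }),
    }),
})

def minimax(node, depth, white_to_move):
    """Look a fixed number of plies ahead, then just count up the men."""
    material, children = node
    if depth == 0 or not children:
        return material                     # static count at the horizon
    values = [minimax(child, depth - 1, not white_to_move)
              for child in children.values()]
    return max(values) if white_to_move else min(values)

def best_move(node, depth):
    _, children = node                      # White to move at the root
    return max(children, key=lambda m: minimax(children[m], depth - 1, False))

print(best_move(tree, 2))   # "pawn-sac": sees -1 vs -3, throws the pawn away
print(best_move(tree, 4))   # "quiet": deep enough to see the knight is safe
```

At depth 2 the program judges the knight lost after “quiet” and prefers to burn a pawn pushing the capture out of sight; at depth 4 it sees the recapture and keeps its material.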
 
In the 1970s I saw grandmaster William Lombardy, playing without his queen, still nearly beat Northwestern University's Chess 4.0 program by exploiting this horizon effect. The program jettisoned first a pawn and then a piece (I believe it was a knight) before pulling itself together and winning. (A pawn and a piece are a lot of material, but they’re no match for a queen.)
 
Programmers later found they could manage the horizon effect by having the machine keep looking along a particular line of analysis until all captures of men and checks to the king had been tried. It wasn’t a perfect solution, but it was good enough. Just ask Kasparov.
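
That remedy is essentially what engine programmers came to call quiescence search: at the nominal horizon, keep resolving captures and checks until the position is quiet before trusting the material count. A minimal sketch of the idea, reusing the hypothetical toy `tree` from above:

```python
# Quiescence step at the horizon: before counting material, keep resolving
# "noisy" moves (here, any toy move whose name marks a capture or check).
def quiesce(node, white_to_move):
    material, children = node
    noisy = {m: c for m, c in children.items()
             if "takes" in m or "check" in m}
    if not noisy:
        return material                     # position is quiet: safe to count
    stand_pat = material                    # the mover may decline every capture
    values = [quiesce(c, not white_to_move) for c in noisy.values()]
    values.append(stand_pat)
    return max(values) if white_to_move else min(values)

def search(node, depth, white_to_move):
    material, children = node
    if not children:
        return material
    if depth == 0:
        return quiesce(node, white_to_move)  # extend here, don't evaluate
    values = [search(child, depth - 1, not white_to_move)
              for child in children.values()]
    return max(values) if white_to_move else min(values)

def best_move_q(node, depth):
    _, children = node
    return max(children, key=lambda m: search(children[m], depth - 1, False))

print(best_move_q(tree, 2))   # "quiet": the recapture after takes-knight is
                              # now resolved, so the knight is not "lost" and
                              # the pawn is not thrown away
```

Real engines bound this with alpha-beta pruning and careful move ordering; in this finite toy, plain recursion is enough to dissolve the horizon trick even at depth 2.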
 
Simon recited a long list of typical human mistakes, of which I remember these few. We mold our reasoning to yield the conclusions we want; we put off until tomorrow what we ought to do today (itself a kind of horizon effect). We follow the herd. We are often tired, preoccupied, emotional or just stooopid.
 
Machines? Never. 
 
So, how does this apply to the debate over robotic cars? Only that by holding them to human standards we sometimes underrate them, and overrate ourselves. An automated car that perfectly emulated the typical human driver’s strengths and weaknesses would never get approval from the government, the insurance companies or the car-buying public.
 
We’re not good enough for us. 
 