There were two very interesting stories about human-computer interaction in the Washington Post over the past two days concerning the recent Washington, D.C., Metro and Air France crashes. The first, in yesterday's paper, titled "When Fail-safe Fails," was written by Charles B. Perrow, emeritus professor of sociology at Yale and the author of "Normal Accidents" and "The Next Catastrophe."
(You can read a review of "The Next Catastrophe" done for IEEE Spectrum here.)
Professor Perrow writes in his Post article:
"The ultimate question in these tragedies is: Can we really trust computers as much as we trust ourselves? For some things, perhaps not. But if we want to travel faster and in more comfort, we have to let ever more computerization into our lives. And that means that we have to focus more on the humans who interact with the computers."
Dovetailing with Professor Perrow's article, Shankar Vedantam, a staff writer for the Post, discusses this issue in some detail in an article in today's edition, titled "Metrorail Crash May Exemplify Automation Paradox." The story quotes John D. Lee, a professor of industrial and systems engineering at the University of Wisconsin at Madison, who describes the automation paradox this way:
"The better you make the automation, the more difficult it is to guard against these catastrophic failures in the future, because the automation becomes more and more powerful, and you rely on it more and more."
Such over-reliance is easy to fall into - in fact, probably all of us have been seduced by automation in one way or another over the past year.
Some of you may recall that I blogged this past January about the British Marine Accident Investigation Branch (MAIB) having to issue a safety warning about the misuse of computerized navigation systems by ships' officers. I have also written a couple of times in the Risk Factor about drivers following what their GPS systems were showing and telling them instead of their own eyes, to their detriment.
Both Washington Post stories are interesting, and if you have the time, you should read them.
Contributing Editor Robert N. Charette is an acknowledged international authority on information technology and systems risk management. A self-described “risk ecologist,” he is interested in the intersections of business, political, technological, and societal risks. Along with being editor for IEEE Spectrum’s Risk Factor blog, Charette is an award-winning author of multiple books and numerous articles on the subjects of risk management, project and program management, innovation, and entrepreneurship. A Life Senior Member of the IEEE, Charette was a recipient of the IEEE Computer Society’s Golden Core Award in 2008.