Human-Computer Relationships And System Safety

DC Metro, Air France Crashes Caused by Automation Paradox?

There were two very interesting stories about human-computer interaction in the Washington Post over the past two days concerning the recent Washington, DC Metro and Air France crashes. The first, called "When Fail-safe Fails," appeared in yesterday's paper and was written by Charles B. Perrow, emeritus professor of sociology at Yale and author of "Normal Accidents" and "The Next Catastrophe."

(You can read a review of "The Next Catastrophe" done for IEEE Spectrum here.)

Professor Perrow writes in his Post article:

"The ultimate question in these tragedies is: Can we really trust computers as much as we trust ourselves? For some things, perhaps not. But if we want to travel faster and in more comfort, we have to let ever more computerization into our lives. And that means that we have to focus more on the humans who interact with the computers."

Dovetailing with Professor Perrow's article, Shankar Vedantam, a staff writer for the Post, discusses this issue in some detail in an article in today's edition called "Metrorail Crash May Exemplify Automation Paradox." The story quotes John D. Lee, a professor of industrial and systems engineering at the University of Wisconsin at Madison, who describes the automation paradox this way:

"The better you make the automation, the more difficult it is to guard against these catastrophic failures in the future, because the automation becomes more and more powerful, and you rely on it more and more."

Relying on automation is easy to do; in fact, we probably all have been seduced by it in one way or another over the past year.

Some of you may recall that I blogged this past January about the British Marine Accident Investigation Branch (MAIB) having to issue a safety warning about the misuse of computerized navigation systems by ships' officers. I have also written a couple of times in the Risk Factor about drivers who, to their detriment, followed what their GPS systems were showing and telling them instead of trusting their own eyes.

Both Washington Post stories are interesting, and if you have the time, you should read them.
