The passengers and crew of Malaysia Airlines Flight 124 were just settling into their five-hour flight from Perth to Kuala Lumpur late on the afternoon of 1 August 2005. Approximately 18 minutes into the flight, as the Boeing 777-200 series aircraft was climbing through 36 000 feet on autopilot, the aircraft—suddenly and without warning—pitched nose up to 18 degrees and started to climb rapidly. As the plane passed 39 000 feet, the stall and overspeed warning indicators came on simultaneously—something that’s supposed to be impossible, and a situation the crew is not trained to handle.
At 41 000 feet, the command pilot disconnected the autopilot and lowered the airplane’s nose. The auto throttle then commanded an increase in thrust, and the craft plunged 4000 feet. The pilot countered by manually moving the throttles back to the idle position. The nose pitched up again, and the aircraft climbed 2000 feet before the pilot regained control.
The flight crew notified air-traffic control that they could not maintain altitude and requested to return to Perth. The crew and the 177 shaken but uninjured passengers safely returned to the ground.
The Australian Transport Safety Bureau investigation discovered that the air data inertial reference unit (ADIRU)—which provides air data and inertial reference data to several systems on the Boeing 777, including the primary flight control and autopilot flight director systems—had two faulty accelerometers. One had gone bad in 2001. The other failed as Flight 124 passed 36 571 feet.
The fault-tolerant ADIRU was designed to operate with a failed accelerometer (it has six). The redundant design of the ADIRU also meant that it wasn’t mandatory to replace the unit when an accelerometer failed.
However, when the second accelerometer failed, a latent software anomaly allowed inputs from the accelerometer that had failed in 2001 to be used again, feeding erroneous acceleration data into the flight control systems. The anomaly, which lay hidden for a decade, wasn’t found in testing because the ADIRU’s designers had never considered that such an event might occur.
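To make that kind of latent flaw concrete, here is a minimal, hypothetical sketch in Python of a sensor-selection routine that masks only failures detected during the current flight. It is not the ADIRU’s actual software or algorithm; every name, value, and detail below is an illustrative assumption.

```python
# Hypothetical sketch of a latent sensor-selection flaw; not the ADIRU's
# actual code, algorithm, or data. All names and values are illustrative.

def select_inputs(readings, failed_this_flight, failed_on_earlier_flights):
    """Choose which accelerometer outputs feed the acceleration computation."""
    excluded = set(failed_this_flight)
    # Latent flaw (illustrative): the long-standing failure record is never
    # consulted, so when a second unit fails in flight, re-selection quietly
    # re-admits the accelerometer that has been faulty for years.
    # excluded |= set(failed_on_earlier_flights)   # the missing step
    return {name: value for name, value in readings.items()
            if name not in excluded}

# Six accelerometers: one bad since 2001 (acc5), a second (acc6) failing now.
readings = {f"acc{i}": 0.0 for i in range(1, 7)}
readings["acc5"] = 9.8  # erroneous output from the long-failed unit
selected = select_inputs(readings,
                         failed_this_flight={"acc6"},
                         failed_on_earlier_flights={"acc5"})
# acc5's bogus reading is now part of the data fed downstream.
```

In this toy version, ordinary testing of single-failure cases would never exercise the flawed path; only the unanticipated combination of an old failure and a new one exposes it, which is the point the investigators made about the real anomaly.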
The Flight 124 crew had fallen prey to what psychologist Lisanne Bainbridge identified in the early 1980s as the ironies and paradoxes of automation. The irony, she said, is that the more advanced the automated system, the more crucial the contribution of the human operator becomes to its successful operation. The chief paradox is that the more reliable the automation, the less the human operator may be able to contribute to that success. Consequently, operators are increasingly left out of the loop, at least until something unexpected happens. Then the operators need to get involved quickly and flawlessly, says Raja Parasuraman, professor of psychology at George Mason University in Fairfax, Va., who has been studying how increasingly reliable automation affects human performance, and therefore overall system performance.
“There will always be a set of circumstances that was not expected, that the automation either was not designed to handle or other things that just cannot be predicted,” explains Parasuraman. So as system reliability approaches—but doesn’t quite reach—100 percent, “the more difficult it is to detect the error and recover from it,” he says.
And when the human operator can’t detect the system’s error, the consequences can be tragic.