Learning From Failure

When success becomes too much of a good thing

You're probably familiar with the 1940 collapse of the original Tacoma Narrows Bridge, in Washington state, even if you don't recognize the name: the black-and-white footage of this suspension bridge twisting and buckling dramatically before finally disintegrating has become an icon of engineering failure.

What you're probably less aware of is that the Tacoma Narrows Bridge incident was just the most photogenic of a sequence of significant bridge failures that have occurred at roughly 30-year intervals since 1847, when metal began replacing stone as the material of choice for crossing spans. And it's not just bridges that exhibit cycles consisting of long periods of success punctuated by disaster: spacecraft, nuclear power plants, and other highly engineered artifacts have followed a similar pattern.

In his latest engaging and readable book, Success Through Failure [see cover image], design guru Henry Petroski analyzes this cycle and other flaws in the things around us to show that the old truism "nothing succeeds like success" is in fact a recipe for doom. When a new technology arrives on the scene promising to solve an outstanding problem, it has not yet been sanctified by some spectacular triumph; instead, its supporters and critics vigorously debate the pros and cons of the unproven technology. As a consequence, Petroski argues, even when the new approach is ultimately adopted, its proponents are keenly aware of the limits, tradeoffs, and underlying assumptions that shaped it, and so they tend to design the first round of systems with those caveats in mind.

Once the innovation has proven itself, however, it's not long before designers, emboldened by a series of successes, begin to stretch the technology further and further. The people who were around for the initial debates--and who know where the bodies are buried, so to speak--retire or move on. Early designs featuring the new method can come to be perceived as overcautious or overengineered by a new crop of designers. The result is almost inevitable: the technology is pushed beyond its limits as some forgotten assumption is violated.

This is what happened with the Tacoma Narrows Bridge. By 1940, after the construction of a string of successful suspension bridges that reached across longer and longer spans, the heavy masonry towers, reinforcing cross-stays, and large, boxy trusses of the Brooklyn Bridge (a 19th-century exemplar of suspension bridge technology) were long gone. Instead, the Tacoma Narrows used an inexpensive, lightweight design that featured a shallow, two-lane road deck that ran for 1.8 kilometers (making it one of the longest bridges in the world at the time of its construction). The pared-down design was no match for steady winds, which set up a disastrous fluttering resonance with the ribbonlike deck, leading to the bridge's infamous collapse [see photo, "A Technology Too Far"].

A similar process occurred leading up to the Challenger and Columbia space shuttle disasters. As shuttles flew successful missions despite exhibiting behavior that fell outside the original design specifications, NASA slowly reclassified problems--burnt booster O-rings in Challenger's case, the shedding of external tank insulation in Columbia's--from urgent causes for concern to routine service issues, as the engineers who had helped design and build the shuttle moved into management or retired.

Success, it turns out, is a lousy teacher compared with failure. Petroski draws on a wealth of interesting case histories to show how failure motivates technological advancement. For example, the dismal success rate of the original surgical treatment for removing clots from leg arteries, which involved making incisions that could run the entire length of a patient's limb, prompted a young surgical technician to invent the balloon catheter. This device was a huge leap forward for surgery in general, yet if the original clot-removal technique hadn't had such a poor track record, it might never have been invented. Failure, Petroski argues, is essential to successful innovation.

But of course, we're interested in having as high a ratio of successes to failures as possible. And though it may be abstractly comforting to think of just how educational the disaster that has befallen your project may prove to be, it's of little help when you have to deal with angry employers, clients, customers, or, in the worst-case scenarios, random passersby who have experienced your project's failure firsthand.

What's needed is an active approach. Citing the software industry's experience in anticipating the Y2K bug, Petroski suggests keeping a few "old-timers" around to preserve institutional memory. But his main message is this: stay on top of your design assumptions. If your only reason to believe that extending a technology will work is that extending it worked before, then your project is at serious risk of becoming an instructive failure. I recommend keeping a copy of Petroski's book on hand and flipping through it the next time you're feeling seduced by success.