Last week, Virgin Galactic unveiled a new version of its SpaceShipTwo, which is designed to carry paying customers to the edge of space. This new vehicle makes its debut more than a year after a devastating accident that took the life of co-pilot Michael Alsbury.
If the recovery from past spaceflight disasters is any guide, this craft will be flown in a far less risky mode, with more safety features incorporated into the hardware and more safety awareness inculcated into the human minds controlling it. But the real question is what will happen when the next vehicle rolls off the line, and how safe the company’s flights will be 5 or 10 years from now.
The Federal Aviation Administration’s minimalistic approach to regulating the safety of the space tourism industry was called into question in the wake of the Virgin Galactic accident. But the bulwark against future disaster doesn’t rest in federal regulations, codified checklists, or safety gadgets. Instead it rests where it always must, in the hearts and minds of the people who make daily decisions in support of the fabrication, testing, preparation, and operation of such machinery. It is that culture, now understandably sharpened by the still-fresh loss of a human life, that will be the most effective barrier against future accidents.
Only a few weeks ago, NASA commemorated the Apollo 1 fire and the break-up of the space shuttles Challenger and Columbia, a trifecta of fatal spaceflight disasters whose anniversaries all fall within a week of one another. The most frightening thing about these events, beyond the grievous human toll, was the common cultural environment that enabled the disasters to occur.
When NASA’s safety culture works, it places the burden of proof on demonstrating that an accident won’t occur: in other words, when in doubt, assume the worst until proven otherwise. If you haven’t tested the flammability of cabin equipment in pure oxygen at 15 psi, assume it’s dangerous and don’t fly it. If you haven’t proven that O-rings will seal properly at low temperatures, assume they won’t. If you haven’t verified that the critical heat shielding on a spaceship’s belly is undamaged, assume it isn’t. Had Apollo 1, Challenger, and Columbia been operated within the boundaries of these traditional safety principles, it’s quite possible that none of those tragedies would have occurred.
Humans have an insidiously irrepressible habit of relaxing after successes, making convenient assumptions, and not rigorously validating every uncertainty. For SpaceShipTwo, the critical failure was in the assumption—perhaps unspoken—that no pilot would unlock the tail-feathering mechanism during powered flight. The feathered design tilts the aft stabilizers up 90 degrees from the horizontal to ensure reentry stability. But according to the National Transportation Safety Board’s post-accident assessment, inadequate attention was paid to how unexpected manual inputs to the tail-feathering function could have lethal consequences. This wasn’t “pilot error.” It was design and training error.
Age-old engineering wisdom says that “if it can go wrong, it will go wrong.” And in the end, it is human vigilance guided by this principle—not the safety locks, backup systems, and meticulous checklists—that is the only reliable defense against decisions that can kill people. That’s one big lesson of spaceflight: we know how to do it, but continued success depends on keeping complacency at bay.
The English poet Rudyard Kipling expressed this theme more than 80 years ago, in his 1935 “Hymn of Breaking Strain.” In failure, “the blame of loss, or murder, is laid upon the man—not on the stuff, the man.” But from “the shame of being broken,” he wrote, people “stand up, and build anew.” As we saw during the unveiling of SpaceShipTwo #2, we still do.