Thanks to an authoritative U.S.-Canada report, we now know that negligence by a utility in Ohio and lax oversight by a rookie grid-reliability coordinator precipitated the blackout that darkened much of the North American upper Midwest and Northeast a year ago. Paradoxically, however, when the same remarkable event is seen in a wider historical and statistical perspective, it is no less natural than a sizable earthquake in California. Major outages struck the western U.S. grid just eight years ago. And last fall, within weeks of the U.S. blackout, electric power systems collapsed in Denmark, Italy, and the United Kingdom.
The 14 August 2003 blackout may have been the largest in history, zapping more total wattage and affecting more customers than any before, but if history is any guide, it won't be the last. "These kinds of outages are consistent with historical statistics, and they'll keep happening," says John Doyle, professor of control and dynamical systems, electrical engineering, and bioengineering at the California Institute of Technology in Pasadena. "I would have said this one was overdue."
"We will have major failures," agrees IEEE Fellow Vijay Vittal, an electrical engineering professor at Iowa State University in Ames, who is an expert on power system dynamics and control. "There is no doubt about that."
The numbers on blackouts bear out this fatalism. Extrapolating from the small outages that occur frequently, one might expect a large power grid to collapse only once in, say, 5000 years. Yet between 1984 (when North American utilities began systematically reporting blackouts) and 2000, utilities logged 11 outages affecting more than 4000 megawatts--making the probability of any one such outage 325 times greater than the extrapolation would suggest. Thus, statistically speaking, the blackout on 14 August, which, according to the U.S. Department of Energy, cost between US $4 billion and $6 billion, was no anomaly [see graph, "Only Too Likely"].
Only Too Likely: Work at Carnegie Mellon University shows that the likelihood of large failures is greater than one would expect from extrapolations of small failures. The brown curve is fitted to actual outages that affected more than 500 megawatts; the blue curve is an exponential distribution fitted to failures smaller than 800 MW.
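The gap the figure describes can be made concrete with a minimal sketch in Python. It fits an exponential distribution to a sample of small outages, then compares the probability that fit assigns to a very large outage against what a heavy-tailed (power-law) fit assigns. The sample data, scale, and exponent below are invented for illustration; the actual Carnegie Mellon analysis was fitted to the North American utilities' disturbance records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small-outage sizes (megawatts lost), roughly exponential.
small_outages = rng.exponential(scale=300.0, size=1000)

# Exponential fit: the maximum-likelihood estimate is simply the mean.
mean_mw = small_outages.mean()

def p_exceed_exponential(x_mw):
    """P(outage > x) under the exponential fit to small failures."""
    return np.exp(-x_mw / mean_mw)

def p_exceed_power_law(x_mw, x_min=500.0, alpha=1.5):
    """P(outage > x) under an illustrative Pareto tail above x_min."""
    return (x_min / x_mw) ** alpha

x = 4000.0  # the scale of the biggest North American blackouts, in MW
print(f"Exponential fit: P(> {x:.0f} MW) = {p_exceed_exponential(x):.2e}")
print(f"Power-law tail:  P(> {x:.0f} MW) = {p_exceed_power_law(x):.2e}")
```

The exponential fit calls the 4000-MW event all but impossible; the heavy tail calls it merely rare. The historical record sides with the heavy tail.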
In the mid-1990s--well before FirstEnergy in Akron, Ohio, got sloppy with its tree-trimming and monitoring systems last summer--mathematicians, engineers, and physicists set out to explain the statistical overabundance of big blackouts. Two distinct models emerged, based on two general theories of systems failure.
One, an optimization model, championed by Caltech's Doyle, presumes that power engineers make conscious and rational choices to focus resources on preventing smaller and more common disturbances on the lines; large blackouts occur because the grid isn't forcefully engineered to prevent them. The competing explanation, hatched by a team connected with the Oak Ridge National Laboratory in Tennessee, views blackouts as a surprisingly constructive force in an unconscious feedback loop that operates over years or decades. Blackouts spur investments to strengthen overloaded power systems, periodically counterbalancing pressures to maximize return on investment and deliver electricity at the lowest possible cost.
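The feedback intuition is easiest to see in the sandpile models of self-organized criticality often invoked in this line of work. Below is a textbook Bak-Tang-Wiesenfeld sandpile in Python, offered purely as an analogy--the Oak Ridge team's actual model simulates a power network, and the grid size and run length here are arbitrary. Load (sand) is added one grain at a time; a cell holding four or more grains topples, shedding one grain to each neighbor and sometimes triggering further topplings.

```python
import random

N = 20                          # the pile lives on an N x N grid
grid = [[0] * N for _ in range(N)]
rng = random.Random(1)

def add_grain():
    """Drop one grain at a random cell and return the avalanche size."""
    r, c = rng.randrange(N), rng.randrange(N)
    grid[r][c] += 1
    toppled = 0
    unstable = [(r, c)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue                    # already relaxed; skip
        grid[i][j] -= 4
        toppled += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:  # grains fall off the edge
                grid[ni][nj] += 1
                unstable.append((ni, nj))
        if grid[i][j] >= 4:
            unstable.append((i, j))     # may need to topple again
    return toppled

sizes = [add_grain() for _ in range(20000)]
sizes = sizes[5000:]   # discard warm-up while the pile self-organizes
for s in (1, 10, 100):
    frac = sum(x >= s for x in sizes) / len(sizes)
    print(f"P(avalanche >= {s:3d} topplings) = {frac:.4f}")
```

Most avalanches are tiny, but a heavy tail of large ones never disappears, no matter how long the system runs--the pile keeps organizing itself back to the edge of failure, much as the feedback loop between blackouts and investment is said to do.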
Which of these models better explains the mechanism behind large blackouts is a matter of intense--sometimes even bitter--debate. But their proponents agree on one thing: the brave, can-do recommendations of the U.S.-Canada task force report won't eliminate large blackouts. If either conscious optimization or unconscious feedback sets up power systems to fail, then large cascading blackouts are natural facets of the power grid. Stopping them will require that engineers fundamentally change the way they operate the power system. "I don't think there are simple policy fixes," says Doyle.
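Doyle's optimization argument, for its part, reduces to a small worked example. In the Python sketch below--a toy only, with invented event probabilities, an assumed loss-versus-resources rule, and an arbitrary budget--protective resources are allocated to minimize expected loss, and the rare events end up carrying outsized damage.

```python
import numpy as np

n_events = 1000
rank = np.arange(1, n_events + 1)
p = rank ** -2.0                 # event probabilities, steeply decreasing
p /= p.sum()

# Minimize expected loss sum(p_i / r_i) subject to sum(r_i) = budget,
# assuming loss shrinks as 1/resources. Lagrange multipliers give an
# optimal allocation r_i proportional to sqrt(p_i).
budget = 100.0
r = np.sqrt(p)
r *= budget / r.sum()
loss = 1.0 / r                   # loss suffered if event i occurs

print(f"Loss from the most likely event: {loss[0]:8.1f}")
print(f"Loss from the rarest event:      {loss[-1]:8.1f}")
```

The optimized system handles common events gracefully, but the rare tail carries enormous losses--big blackouts as a side effect of rationally defending against small ones.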
Of course, the very idea of accepting the inevitability of blackouts is utterly rejected by utility officials and politicians. Certainly the mainstream view among power system engineers continues to be that the answer to reliability problems is to make the grids more robust physically, improve simulation techniques and computerized real-time controls, and improve regulation. What the systems theorists suggest is that even if all that is done and done well--as, of course, it should be--the really big outages will still happen far more often than standard risk calculations predict.
The suspicion that nasty surprises lurk in the inner workings of power grids began to take shape in the early 1980s with the growth of research into nonlinear systems, a field that became known as chaos theory. The term was a misnomer, for chaos experts were describing layers of order hidden in the apparent disorder of everything from turbulent fluids to celestial mechanics.