Two weeks after Hurricane Sandy ravaged the East Coast, a few hundred thousand of the millions initially hit are still without power. Those in the dark, told repeatedly that their power would be back in “a couple of days,” “the weekend,” or “Friday,” have been purchasing emergency generators and extra blankets; there hasn’t been much reason for them to believe that their utility’s latest reassurances are true.
Meanwhile, city and state governments have been blasting utilities for not recovering fast enough; the Long Island Power Authority, in particular, stands accused of failing to properly manage the recovery. But after a hurricane like Sandy, it’s hard to say what “fast enough” really means.
Utilities and governments, who would like to know how effective their recovery efforts have been and how to improve them, could be helped by something as simple as a change in the kind of data gathered and processed in the aftermath of disasters that cause widespread outages, says Chuanyi Ji, an associate professor in Georgia Tech’s School of Electrical and Computer Engineering.
Ji and then-Ph.D. student Supaporn Erjongmanee began looking at the impact of hurricanes on the communications infrastructure in 2005, after Hurricane Katrina. Ji and her students collected data on Internet outages in collaboration with colleagues from AT&T (at the time, BellSouth), looking at patterns of failure and recovery in an attempt to pinpoint the period of peak impact: the point at which failures were occurring at their fastest rate and recovery at its slowest.
The research, which Ji continued using data from 2008's Hurricane Ike, led to the conclusion that Internet service disruptions were typically caused by power failures. The group also determined that, on the whole, the Internet was fairly reliable, and its work would be more useful if applied to a network more susceptible to severe weather—the power network.
Ji and Ph.D. student Yun Wei discovered that one major parameter tracked in disasters is simply how many customers are without power. This might make for a nice graph (a steadily, one hopes, dropping number as repairs are made), but it doesn’t provide sufficient information or insight, Ji said. It doesn’t tell how severe outages are, whether a recovery is fast or slow, or how vulnerable or resilient a regional network is.
“If we can figure out what variables best characterize failure and recovery processes, we would know what to plot for customers and providers. This requires intelligent approaches and smart data processing. For example, we need to plot failure rates, not simply how many users are without power,” Ji says. “If we know how many new failures occur every half an hour, we have information that can tell us how severe outages are, and whether recovery is able to respond accordingly.”
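To make the distinction concrete, here is a toy sketch, with entirely made-up numbers, of the kind of calculation Ji describes: binning individual outage reports into half-hour windows so you can see the failure rate rather than only a cumulative customers-out count. (The data and variable names are hypothetical; they are not from Ji’s model.)

```python
from collections import Counter

# Hypothetical outage report times, in minutes since landfall.
# In practice these would come from a utility's outage-management system.
failure_times = [5, 12, 31, 33, 34, 70, 95, 96, 97, 98, 150]

BIN_MINUTES = 30  # half-hour windows, as Ji suggests

# Count NEW failures in each half-hour window: this is the failure
# rate, as opposed to the cumulative number of customers without power.
failure_rate = Counter(t // BIN_MINUTES for t in failure_times)

for window in sorted(failure_rate):
    start = window * BIN_MINUTES
    print(f"{start:>4}-{start + BIN_MINUTES:<4} min: "
          f"{failure_rate[window]} new failures")
```

A spike in one window flags the period of peak impact; the cumulative total, by contrast, only ever shows how deep the hole is, not how fast it is being dug.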
Ji and Wei also suggest that utilities put failure rate and recovery rate together and look at the interaction of the two; this way, they say, utilities can see how fast components are failing, how quickly or slowly recovery is progressing, and the disparity between the two. Outage numbers alone don’t provide that information.
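A rough illustration, again with invented per-window counts rather than real utility data: pairing the two rates shows both the window-by-window disparity (whether failures are outpacing repairs) and how the familiar customers-without-power curve is just the running difference of the two.

```python
import itertools

# Hypothetical counts per half-hour window: new failures and
# completed repairs. Real data would come from the utility.
new_failures   = [40, 120, 80, 30, 10, 5, 0, 0]
new_recoveries = [0, 10, 30, 50, 60, 40, 20, 5]

# Running totals of each rate; their difference is the usual
# "customers without power" number that utilities report.
cum_fail = list(itertools.accumulate(new_failures))
cum_rec = list(itertools.accumulate(new_recoveries))
backlog = [f - r for f, r in zip(cum_fail, cum_rec)]

# Per-window disparity: positive means failures are outpacing repairs.
disparity = [f - r for f, r in zip(new_failures, new_recoveries)]
peak_window = max(range(len(disparity)), key=disparity.__getitem__)
```

The `backlog` curve alone could look identical for a storm with slow failures and slow repairs or one with fast failures and fast repairs; only the two rates, viewed together, distinguish the cases.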
With the right kind of data and intelligent methods in hand, utilities can reassure customers that the rate of recovery is increasing, or, based on historical data, predict that recovery will pick up or slow down within some amount of time, letting customers better plan their coping strategies. Utilities can also compare current curves against those from historical events to verify that a recovery is going well, or to test the impact of new procedures or technologies on failure and recovery. And they would have a clear picture of their network’s resilience compared with that of other impacted regions.
In the long run, Ji said, the kinds of models developed using large-scale data and intelligent methods could lead to identifying potential failures ahead of time, enabling utilities to be proactive.
“Real data is pertinent to this research but often unavailable,” Ji said. “A utility provider gave us data from a past hurricane. That helped us greatly with this research.” But she’s scrambling to get data from utilities impacted by Hurricane Sandy to incorporate into her model. “With their data and our algorithms,” she says, “we can contribute to a smarter and more resilient grid.”
Photo: Power crews repair lines after Hurricane Charley in Florida. Source: FEMA
Fig: According to Chuanyi Ji’s model and available data, the typical period of rapid recovery of a power network affected by a hurricane is 12.65 hours; after that, recovery is slow and can take days. In this graph, the red curve represents a typical recovery rate; other curves represent overly optimistic or pessimistic projections.
Tekla S. Perry is a senior editor based in Palo Alto, Calif., where she’s been covering the people, companies, and technology that make Silicon Valley a special place for more than 30 years. Perry started reporting on California tech companies from IEEE Spectrum’s New York office in the early 1980s, before relocating to the Bay Area full time in 1986. She has the privilege of having a front-row seat as tech history is being made, including the early days of video games, the growth of the personal computer industry, the rise and fall of Xerox PARC, and the incredible startup boom in Silicon Valley today. She has conducted in-depth interviews with a host of tech pioneers, including Gordon Moore, Andy Grove, Robert Noyce, David Packard, Irwin Jacobs, Andrew Viterbi, Jim Clark, Ray Dolby, Alan Kay, Adam Osborne, Gene Amdahl, Gary Kildall, Gordon Bell, Steve Wozniak, Marissa Mayer, Elon Musk, and Nolan Bushnell.
Besides covering Silicon Valley and startups in print and in her blog, View From the Valley, Perry follows trends in consumer electronics technology around the world. An IEEE member, she holds a bachelor’s degree in journalism from Michigan State University.