The fall of the Twin Towers on 9/11 cut off Internet service in South Africa. A cyberattack in 2003 shut down a section of the Internet, halting the Davis-Besse nuclear power plant in Ohio. Later that year, an apparently minor fault at FirstEnergy, an electric utility in Ohio, plunged 50 million North Americans into darkness.
Each case posed a problem of central concern for homeland security: how can we guard critical infrastructure so vast and complex that we cannot afford to protect every part, or anticipate the ultimate effects of a disruption?
Modern societies depend on such infrastructure for power, communications, transportation, and public health, yet everywhere governments are still addressing vulnerabilities one component at a time.
It is wiser to examine an infrastructure network as a whole than to look at its assets one by one. Such an analysis might indicate, for instance, that you should concentrate on hardening critical pathways. More radical still, it may lead you to rebuild an infrastructure sector entirely.
Any such analysis must draw on the science of networks, a field of mathematics that gained its original inspiration from the study of the social connections among people. I will begin with an overview of this work, focusing only on aspects that apply to homeland security policy.
A network consists of connection points, or nodes, and the links between them. Nodes can be people, bridges, power plants, telecommunications switches, Internet servers, or water treatment plants. Links can represent relationships between pairs of people, roads linking bridges, power transmission lines connecting power plants to consumers, telephone lines, Web page links, or pipelines in a water system. The network abstraction is purposeful, because recent results from the science of networks can then be applied to critical infrastructure analysis.
That leads to systemwide results and strategies in the short term, and architectural recommendations in the long term. It also shifts attention away from "vulnerability analysis" to "architectural resiliency analysis."
There are three basic kinds of networks, and they vary in their degree of structure.
"Random networks" are the simplest of all. You get them by selecting arbitrary pairs of connection points, or nodes, and linking them directly. It is hard to damage such a network, because destroying one link leaves others to compensate. Yet it is also hard to defend a random network, because the nodes and links, being of equal value, must all be protected.
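To make this concrete, here is a minimal Python sketch (my own illustration; the node count, link probability, and random seed are arbitrary assumptions, not data from any real infrastructure) that builds a random network and checks that deleting a single link leaves the giant connected component essentially intact:

```python
import random
from collections import defaultdict, deque

def random_network(n, p, seed=0):
    """Erdos-Renyi random graph: each possible link is added with probability p."""
    rng = random.Random(seed)
    adj = defaultdict(set)
    for u in range(n):
        adj[u]                      # ensure isolated nodes still appear
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return dict(adj)

def largest_component(adj):
    """Size of the largest connected component, found by breadth-first search."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

# With redundant links everywhere, losing any single link barely matters.
net = random_network(100, 0.08)
before = largest_component(net)
u = next(n for n in net if net[n])  # pick a node that has at least one link
v = net[u].pop()                    # delete one of its links
net[v].discard(u)
after = largest_component(net)
print(before, after)
```

The redundancy that makes the network hard to break is the same property that gives a defender no obvious place to concentrate effort.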
Nearly all infrastructure networks of high interest, notably the massive power grid and the telecommunications networks underlying the Internet, are nonrandom.
"Small world networks" are so named because it is generally possible to get from any node to any other by passing through only a small number of intermediate nodes. The idea was first brought to general notice by an experiment the social psychologist Stanley Milgram conducted at Harvard University in the 1960s.
Milgram asked each of his volunteers to get a letter to a designated stranger by sending it to an acquaintance who lived as near as possible to the addressee, who was in turn asked to hand it off to one of his own acquaintances living nearer still. The letters that reached their destinations did so after an average of just six handoffs. The notion was made famous by John Guare's play Six Degrees of Separation.
Small world networks depend on a limited number of nodes that are tightly coupled to their adjacent nodes through many links. This pattern is known as clustering, and small world networks accordingly have a high clustering coefficient. In social networks, the clusters are well-connected people; in electric grids, they are highly linked relays between local networks. Clustering in a power grid can promote a cascade that leads to a blackout.
It is easier to damage a small world network than a random one, but it is also easier to protect it: you just harden the nodes that form clusters. Indeed, it is possible to prevent nearly all cascade failure by protecting a rather small percentage of the entire network.
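The clustering coefficient has a simple definition: for a given node, it is the fraction of that node's neighbor pairs that are themselves directly linked. A short sketch (the adjacency maps are hypothetical, chosen only to show the two extremes):

```python
def clustering_coefficient(adj, node):
    """Fraction of a node's neighbor pairs that are themselves directly linked."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    linked_pairs = sum(
        1
        for i in range(k)
        for j in range(i + 1, k)
        if nbrs[j] in adj[nbrs[i]]
    )
    return linked_pairs / (k * (k - 1) / 2)

# A triangle: node 0's two neighbors also link to each other.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(clustering_coefficient(triangle, 0))   # 1.0

# A star: node 0's neighbors never link to one another.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(clustering_coefficient(star, 0))       # 0.0
```

A high value, averaged over the network, is the signature of the tight local clusters that make small world networks both a cascade risk and a cheap hardening target.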
"Scale-free networks," which represent an extreme kind of network architecture, are the subject of studies by Albert-László Barabási at the University of Notre Dame, South Bend, Ind. Such a network contains a small number of hub nodes linked to many other nodes through many direct links, like subway lines that link major stations via express service. Typically, just a few nodes connect, through many links, to most of the rest; those rare hubs are considered critical, because they hold most of the network together. The Internet, for example, is characterized by a few highly connected nodes and a massive number of nodes with few links.
Scale-free structure makes such networks simultaneously vulnerable to attack and easy to protect. A scale-free network is vulnerable to the loss of its hubs, because hub failure does the most to dismantle the entire network. By the same token, hardening just one or two hubs against failure is relatively inexpensive and secures nearly the entire network.
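This asymmetry is easy to demonstrate. The following sketch (my own illustration; the preferential-attachment generator, seed, and network size are assumptions, not data from any real system) grows a 200-node scale-free network and compares removing a peripheral node with removing the biggest hub:

```python
import random
from collections import deque

def scale_free_network(n, seed=0):
    """Grow a network by preferential attachment (one link per new node):
    each newcomer links to an existing node chosen in proportion to its degree."""
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}
    ends = [0, 1]                   # every node appears here once per link end
    for new in range(2, n):
        old = rng.choice(ends)
        adj[new] = {old}
        adj[old].add(new)
        ends += [new, old]
    return adj

def largest_component(adj):
    """Size of the largest connected component (breadth-first search)."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def without(adj, node):
    """Copy of the network with one node (and all its links) removed."""
    return {u: {v for v in nbrs if v != node}
            for u, nbrs in adj.items() if u != node}

net = scale_free_network(200)
hub = max(net, key=lambda u: len(net[u]))    # the most-connected node
leaf = min(net, key=lambda u: len(net[u]))   # a node with a single link

# Losing a leaf leaves the other 199 nodes fully connected;
# losing the hub severs every branch that hung off it.
print(largest_component(without(net, leaf)))
print(largest_component(without(net, hub)))
```

The same computation, run in reverse, is the defender's guide: the node whose removal shrinks the largest component the most is the node most worth hardening.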
It may seem as though the differences in network structure are patently obvious, but that is true only when the networks are small enough to eyeball. The Web, for instance, was long regarded as a network of more than 200 million nodes and billions of randomly connected links; it was quite an achievement for Barabási and his students to show that it is scale-free. Size and complexity in the electric grid are what allowed small initial failures to grow into the power outage of 2003; the same factors could create disasters in other aspects of homeland security, such as public health. As such problems are unintended, they have been termed "normal accidents" by the Yale sociologist Charles Perrow (see his article, "Shrink the Targets!" in the September 2006 issue of IEEE Spectrum). Of course, terrorists can exploit the same vulnerabilities to produce disasters.
There are four ways for network science to inform policy:
The investment portfolio approach hardens critical hubs, say, the 10 percent deemed to have the most influence, even if that means leaving all other nodes unprotected.
This strategy would handle the telecommunications network, for instance, by ignoring most of its telephone switching assets and funding the protection of perhaps several dozen carrier hotels—the scale-free network hubs—throughout the nation. The investment makes a little money go a long way.
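As a sketch of how the portfolio selection itself might work, here is a minimal degree-based ranking; the ten-node `grid` map is entirely hypothetical, standing in for an inventory of real assets:

```python
def top_hubs(adj, fraction=0.10):
    """Rank nodes by their number of links and return the top fraction --
    the candidates for hardening under the investment portfolio approach."""
    ranked = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    keep = max(1, int(len(ranked) * fraction))
    return ranked[:keep]

# A toy ten-node infrastructure map (hypothetical): "A" is the dominant hub.
grid = {
    "A": {"B", "C", "D", "E", "F"},
    "B": {"A"}, "C": {"A"}, "E": {"A"}, "F": {"A"},
    "D": {"A", "G"},
    "G": {"D", "H"}, "H": {"G"},
    "I": {"J"}, "J": {"I"},
}
print(top_hubs(grid))   # hardening 10 percent of the nodes means hardening "A"
```

In practice the ranking would weigh more than raw link count (traffic carried, repair time, interdependencies), but degree alone already separates the carrier hotels from the neighborhood switches.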
The restructuring approach deliberately randomizes scale-free or small world networks. We might, for example, adopt a policy of distributed generation, whereby electricity is generated closer to where it is consumed, so that any attack on the network would have merely local consequences.
Distributed generation through technologies such as wind, solar, wave action, fuel cells, and advanced coal burning would reduce dependence both on foreign fuels and on massive, nation-spanning power grids. History tells us that power grid failures are caused more often by faults in the intermediate transmission and distribution networks than by the power plants themselves. Less dependence on long-distance transmission lines means less dependence on the most error-prone parts of the sector. In a random network, failures are most likely local and therefore have only a local effect.
The stabilizing approach adds links to a network to exploit a recent advance in network science known as synchronization. A network is said to be synchronous if a small perturbation in one or more nodes dies out over time rather than intensifying. Synchrony leads to stability in the overall network.
The opposite of synchrony is chaos, and indeed these concepts all grow out of chaos theory, a feature of which is the so-called strange attractor—a state to which the system tends to move (although it may only orbit that state, never reaching it). Mathematical research suggests that randomly adding a few links tends to convert a chaotic network into a synchronous one. And, if a network can synchronize, might it be possible for networks to self-correct when disrupted by a failure? This is the idea underlying the policy of stability.
The advantage of this strategy is obvious: adding a few random links ought to be far cheaper than hardening hubs or rebuilding entire networks. However, the approach is new and less well understood than the others.
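True synchronization analysis requires studying the network's dynamics, but a structural proxy can convey the intuition. The sketch below (my own illustration; the ring size and number of shortcuts are arbitrary assumptions) adds five random links to a 60-node ring and measures how far a perturbation, or a correcting signal, must travel on average:

```python
import random
from collections import deque

def ring(n):
    """A ring network: each node links only to its two immediate neighbors."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs, via BFS from every node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(0)
net = ring(60)
before = avg_path_length(net)     # roughly n/4 hops around the bare ring
for _ in range(5):                # add just five random shortcut links
    u, v = rng.sample(range(60), 2)
    net[u].add(v)
    net[v].add(u)
after = avg_path_length(net)
print(before, after)
```

A handful of shortcuts shrinks distances across the whole network, which is the structural side of the claim that a few random links can nudge a network toward stability.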
The ”netwar” approach uses what we know of networks to fend off attacks or even mount counterattacks. The idea applies both to physical structures, such as that of infrastructure, and to social structures, such as that of an army or government. It is in the latter regard that this approach has become best known, largely through the research of my colleagues John Arquilla and David Ronfeldt.
They begin with the observation that traditional, top-down bureaucratic organizations have done poorly in contests against less hierarchical adversaries, including terrorist groups and insurgencies. Such adversaries often take the form of networks in which long chains of command have been "disintermediated": the middle layers of decision makers have been removed so that actors in the field can make decisions faster and better.
Network warfare may even disintermediate entire governments by spreading propaganda without central guidance, engaging the entire population quickly and perhaps enlisting them in a war effort. That method has been the hallmark of modern insurgencies, notably the one now under way in Iraq. It has also characterized the work of Internet hackers.
Such warfare can be countered by an appropriate application of network science to the design of countermeasures. As a concrete example, consider the problem of protecting the Internet from attack by the malicious software called worms, self-replicating and self-activating programs that spread on their own and damage information systems. Worms are the worst kind of cyberthreat, because they often lurk undetected for months within thousands of computers.
Applying the investment portfolio approach would fortify the hubs of the Internet, greatly reducing the incidence of worms. That would be much more effective than installing antiviral software on individual computers.
Alternatively, we could apply a maxim of netwar: it takes a network to defeat a network. We could install software that travels throughout the Internet like a worm, killing any malicious worms it encounters. Launched from an Internet hub, such a "worm killer" would reach the far corners of the Internet and counter the effects of the worms it was built to fight. This approach would be orders of magnitude less expensive than current defenses, but at present it is illegal.
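The leverage of launching from a hub can be illustrated with a toy topology (entirely hypothetical; no real Internet structure is assumed). A breadth-first sweep counts the forwarding rounds a worm killer would need to reach every machine:

```python
from collections import deque

def hops_to_reach_all(adj, start):
    """Number of forwarding rounds for something spreading from `start`
    (a worm, or a worm killer) to reach every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

# A toy internet (hypothetical): one backbone hub feeds three regional
# switches, each of which serves a few endpoint machines.
net = {
    "hub": {"r1", "r2", "r3"},
    "r1": {"hub", "a", "b"},
    "r2": {"hub", "c", "d"},
    "r3": {"hub", "e"},
    "a": {"r1"}, "b": {"r1"},
    "c": {"r2"}, "d": {"r2"},
    "e": {"r3"},
}
print(hops_to_reach_all(net, "hub"))  # launched from the hub: 2 rounds
print(hops_to_reach_all(net, "a"))    # launched from an endpoint: 4 rounds
```

The same arithmetic favors the attacker, of course, which is precisely why the hubs are worth both hardening and, in the netwar view, using as launch points.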
What is to be done?
Three specific policy recommendations can be derived from the foregoing analysis. First, in networks that are already structured, or for various reasons must continue to be so, we can minimize risk by concentrating our investment on the hubs, leaving other elements alone. Second, if we can decrease the structure of a network by randomizing it, we should do so, because that localizes the effects of any failure and makes it difficult for an opponent to attack the entire system. Third, often the best way to defend a network is to field "network counterweapons" that undo the work of the adversary, as in the case of beneficent worm killers.
Network science addresses the challenge of critical infrastructure protection at the system level rather than the component level. Therefore, it cuts across jurisdictional lines and political boundaries. In that sense, it is a better approach than those based on multi-agency collaboration, interregional cooperation, and local, state, and federal information sharing and decision making. Networks beat hierarchies in the business world, and now we are beginning to realize that networks beat hierarchies in the world of politics and wars.
About the Author
Ted G. Lewis teaches in the Center for Homeland Defense and Security at the Naval Postgraduate School in Monterey, Calif. He has a Ph.D. in computer science from Washington State University. His book Critical Infrastructure Protection in Homeland Security: Defending a Networked Nation, published in April by John Wiley & Sons, has more on how these theories may apply to national security.
To Probe Further
The standard work on the military aspects of networks is John Arquilla and David Ronfeldt, eds., Networks and Netwars: The Future of Terror, Crime, and Militancy, Rand Corp., 2001.
The physicist who showed that the Internet is scale-free, Albert-László Barabási, elaborates on his insights in Linked: How Everything Is Connected to Everything Else and What It Means for Business, Science, and Everyday Life, Plume, 2003.
Mark Buchanan, a journalist trained as a physicist, makes network theory approachable in Nexus: Small Worlds and the Groundbreaking Science of Networks, W.W. Norton, 2002.
Another popular work in this area is Six Degrees: The Science of a Connected Age, W.W. Norton, 2003, by Duncan J. Watts.