The Critical Threat to Critical Infrastructure

A Techwise Conversation With Steve Chabinsky of the FBI’s Cyber Division


Steven Cherry: Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.” This is show number 75.

Let’s start with a quote. The “cyber threat is one of the most serious economic and national security challenges we face as a nation.”

That’s not a statement from the computer trade press, trying to sell copies of a magazine; it’s not coming from a security company trying to get you to upgrade your antivirus software. That came from President Barack Obama. Here’s another: “Some of the most critical threats facing our nation today emanate from the cyber realm. We’ve got hackers out to take our personal information and money, spies who want to steal our nation’s secrets, and terrorists who are looking for novel ways to attack our critical infrastructure.”

That was said by Shawn Henry, executive assistant director of the Federal Bureau of Investigation, at an information systems security conference held last month in Baltimore. In that speech, he noted that the FBI has a cybersquad in each of its 56 field offices and that more than 1000 agents and analysts have received special training related to cybersecurity.

Bringing it all together, there’s a special FBI Cyber Division. My guest today, Steve Chabinsky, is the Cyber Division’s deputy assistant director. It’s always nice to have a fellow Steve on, especially one who spells the name correctly. Steve, welcome to the podcast.

Steve Chabinsky: Very good, Steven. Good to be here.

Steven Cherry: Steve, the hackers after our information and money are probably the most immediate threat to our listeners, and the spies who want to steal our nation’s secrets are worth talking about, but I want to start with this question of critical infrastructure. Back in our September 2006 issue, we at Spectrum looked at ways in which terrorists could kill people, a lot of people all at once, and the scenarios the experts we talked to came up with invariably involved critical infrastructure: nuclear and other power plants, chemical processing plants and transport, the electrical grid itself. Have things gotten better or worse since 2006?

Steve Chabinsky: I think the focus has gotten better, so the awareness of users of critical infrastructure and the designers of critical infrastructure is heightened, and there are a lot of resources that are focusing on the problem. Unfortunately, the awareness on the terrorist side has also increased. We’re seeing increased dialogue from terrorist organizations that very much are focused on how they could attack the West in nontraditional ways. So not just through kinetic bombs but through the Internet, very much focused on critical infrastructure, banking, and finance.

Steven Cherry: So the large part of the problem in critical infrastructure is that it uses the Internet or it uses systems that are in turn connected to the Internet. You would think that it’s kind of common sense to not do that, but common sense doesn’t always carry the day, right?

Steve Chabinsky: That’s right. When we talk about cybersecurity, I think that there’s this notion that it’s all about the Internet and that it’s all about data. And although it is about the Internet and data to some extent, it’s much broader than that. It’s really a technology issue, and what we’re realizing is that throughout our daily lives we’re relying more upon technology-enabled devices and technology-enabled infrastructure. So one part of the problem that you identified is these process control systems that are controlling the electric power grid, nuclear power, water treatment facilities—just as examples. But it goes even further than that, because we’re talking about technology-enabled airplanes, cars, biomedical devices. And each of these technology-enabled services or products tends to have one frightening thing in common: They have remote diagnostic capabilities, and increasingly so. That’s so that somebody doesn’t have to get in a car and drive a thousand miles to change something, or so a doctor doesn’t have to perform surgery again. And there was one case very recently that was publicized, in which there was a remote attack against an insulin pump. That really should be concerning to society.

Steven Cherry: I read another one recently: A security consultant in Australia wrote on his blog about testing a Boeing 747 that had a new video system that used the Internet Protocol. The consultant got through the video system to other systems on the plane, including the engine management system—so that’s just the sort of thing that you’re talking about. This consultant wrote, “For those who don’t know, 747s are big, flying Unix hosts.”

Steve Chabinsky: You’re exactly right, people are recognizing the potential in transportation to have enormous impacts through the computer systems. I think that most people readily recognize that airplanes are computerized, that there’s so much functionality, there are a lot of processes going on at any given moment—they see the cockpit and envision the computers in the background. But what most people aren’t recognizing is that the same holds true of our automobiles: Systems such as antilock brakes have computerized processes behind them, and cruise control, which is connected to the accelerator, is chip enabled. And if people who want to harm us have the ability to access these systems, they could cause a lot of havoc, up to and including death.

Steven Cherry: In his speech, Shawn Henry said that right now computer security has become an endless game of defense which is both costly and unsurvivable in the long term, and that under the current Internet structure we can’t tech our way out of the cyberthreat. He suggested changing the playing field, and he gave a great metaphor about the way gas pumps work: The pumps for diesel are a different size, so you physically can’t pump diesel into a gas tank. How would that work for the Internet and critical infrastructure?

Steve Chabinsky: The predominant focus of cybersecurity nowadays seems to be playing defense. And if you look at the risk model, risk has three parts to it: You’ve got vulnerabilities, threats, and consequences. And when you continue to focus on reducing the vulnerabilities to zero, it’s a never-ending game. And it’s never-ending for a variety of reasons. One, it’s almost impossible to create any system that’s impenetrable, and we don’t want to have to design the Internet and its applications to withstand any sort of threat actor. The only time in the physical world that we have impenetrable systems—we call those bunkers, right? That’s not any way to design the Internet. The Internet was never meant to be a bunker, and forcing people into this endless game of trying to patch every system and make every system impenetrable has proven over time to be unsustainable. And the notion that we’re trying to broaden discussion about is whether or not there are opportunities across the risk spectrum to lower risk for people. One of the areas that we really need to look at is threat reduction, and if you’re going to reduce threat, you need threat actors who are deterred from even trying to break in. In the physical world, the primary security measure is not fortifying everything; it’s the notion that most people are good, and most bad guys won’t act because they believe they’ll get caught. But in the cybersecurity world, what seems to be lacking are any meaningful options that look at the security model and provide alternatives to different users and different uses. By different uses I mean, if you’re using the Internet to conduct business transactions—buying a book over the Internet—that doesn’t need a 100 percent security solution. The cost benefits of the market might be working more or less as designed right now.
But if you’re running the critical infrastructure—an electric power grid—that really needs heightened security, and the alternative architectures don’t exist right now.

So what seems to be missing is any deterrent for threat actors who try to break in, because there are two design elements that have not been the focus of the technology research community and have not been properly commercialized. Those two factors are assurance and attribution. By assurance, I mean the concept that you can be assured that your data, your software, your hardware can be trusted—that is, hasn’t been altered, and has been designed and manufactured to spec. Right now we’re in this odd world where people’s computer systems can be broken into at the root level, and the user doesn’t even recognize that anyone has broken in or altered any data. That’s very different from the physical world, where if someone broke into your business and stole everything, certainly you would realize that when you came to your business the next day. And so there’s this difficulty even in recognizing that a threat actor has broken into your system, has changed your software, has altered your hardware or changed your data. And even if a user is fortunate enough to know that someone has altered their data, software, or hardware, there are very limited abilities to track back to the identity of the offending actor, and so that environment favors the adversary. And in any environment that favors the adversary’s continued attempts to break in, you’ll see that it’s a never-ending game of defense that’s unsustainable in the long run.

Steven Cherry: In the physical world there’s that rare “Mission: Impossible” instance where the team goes in and they don’t steal somebody’s medication but they change his medication to, say, a poison, and then he takes his pill and he dies. That’s the sort of thing that’s pretty rare in the physical world, but you’re saying it sort of goes on all the time on the Internet.

Steve Chabinsky: That’s exactly right, because of the way we’re conducting security right now. If you think about our intrusion-detection systems, there’s so much data being logged that no less an authority than the National Institute of Standards and Technology put out a publication saying there’ll be too much information to track—you’re going to have to prioritize which intrusions and attempted intrusions you’re going to be able to review. And when you think about that, it really shows the predicament we’re in. Could you imagine in the physical world if someone did break in and tried to put poison into somebody’s food, or stole everything, and you review your camera footage and see one guy who tried to come in with a machine gun, a second guy who tried to come in with a machete, a third guy who tried to come in with chain mail, and so on, and you bring this to the police, and the police say, “Which one do you want us to look at?” That’s what’s happening in the computer security world. The quantity of information being generated by these intrusion logs is overwhelming, and so effectively we tend to focus on successful intrusions and not attempts. So the vulnerability security measure tends to be very reactive; it’s not very preventative. By preventative, I mean there’s no deterrence to the threat actors, so they don’t feel prevented from trying to act. And when they’re unsuccessful, most systems administrators don’t have the time or resources to look into attempted intrusions, so they save their efforts until there’s actually a successful intrusion. That’s backwards, right? You don’t want to have a security program that basically provides unlimited bites at the apple.

Steven Cherry: Steve, one of the challenges in computer security is that a lot of systems, including the Internet itself, allow a large measure of anonymity, and the Internet is the communication system of choice for terrorists and criminals. But often we want to be anonymous or pseudonymous for legitimate privacy reasons—whistle-blowers, for example—and privacy is one of the rights the FBI exists to defend. How do you walk the line between legitimate privacy and dangerous privacy?

Steve Chabinsky: I think that the users of the Internet really need to have the option to determine what their privacy profile is. That’s what’s missing. So I don’t take a stance—nor does the FBI—on who remains private or not in terms of Internet architectures. Under the Constitution and statute, we have certain capacities: There are occasions where, if we determine a crime is being committed, we can obtain an order and get information about somebody. That’s under traditional constitutional processes. But what’s occurring now with the Internet is that when we go with a court order to try to get information, that information just doesn’t exist, because the protocols being used on the Internet aren’t capturing the information that would show attribution. And so the dialogue that’s really needed in the country is, What systems would benefit from having that type of data available? Should legal processes be used, or should the victims themselves have the information that they could then hand over to law enforcement at their option?

Steven Cherry: Steve, Shawn Henry’s October speech mentioned the spies who want to steal our nation’s secrets, and then a week later the Office of the National Counterintelligence Executive issued a report that specifically named China and Russia—not just as the locales for individual hackers, but saying these countries were hacking into U.S. corporations as a matter of national policy. Is there an undeclared war going on in cyberspace already?

Steve Chabinsky: Well, I wouldn’t call anything a war at this point, because I would use that term really to connote actions that are destroying our infrastructure. If someone breaks into our electric power system and starts blowing up generators and taking actions that have use-of-force connotations, that have that type of impact, that’s where I would use that type of terminology. So if a nation-state starts acting in ways that are very destructive and starts using the Internet to replace what would otherwise be effected through instrumentalities of kinetic warfare, that’s what I would call war. But here what we’re seeing is a level of espionage that is so different in quantity that many people are considering it different qualitatively—that it’s different in kind. When nation-states are able to take all of your nation’s secrets and intellectual property, your research and development, it’s so great that it alters national economies. And that, I think, is what we’re seeing here. So we’re seeing not just a couple of nation-states; we’re seeing a lot of nation-states that are empowered to do this, and perhaps more concerning over the long term are nation-states for which this poses an asymmetric opportunity: nation-states that have no kinetic capabilities, that don’t otherwise have any type of economic foundation, who can alter the playing field by breaking into our systems unnoticed—or even if noticed, without any attribution or determination of who they are and therefore no consequences. So I think this is a vast espionage effort that’s being used by multiple nation-states—in no way limited to one or two.

Steven Cherry: Very scary. Steve, I quoted President Obama at the beginning, and I’ll close with a quote from him. He said, “America’s economic prosperity in the 21st century will depend on cybersecurity,” so I guess I’ll close by saying, thanks for being on the front lines of those battles, and good luck to us.

Steve Chabinsky: Well, I thank you for the opportunity to speak with you and to address your audience. I really do believe that technologists and economists are going to provide the way forward here, and that we have to review the specific engineering choices that we’re making, especially as they impact assurance and attribution. Because unless we can determine who’s behind these attacks, and understand when we’re under attack or when our systems are being intruded upon, there’s really nothing discouraging the adversary. And ultimately, security will never be able to work if bad guys can act with impunity, and it’s your audience that can make the difference. So I appreciate your time and the time provided by all of your listeners. Thank you.

Steven Cherry: We’ve been speaking with deputy assistant director Steve Chabinsky of the FBI’s Cyber Division. For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.

Announcer: “Techwise Conversations” is sponsored by National Instruments.

This interview was recorded 15 November 2011.
Segment producer: Barbara Finkelstein; audio engineer: Francesco Ferorelli