Computers: It's Time to Start Over

Computer scientist Robert Watson, putting security first, wants to design with a “clean slate”


Steven Cherry: Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.”

If you think about it, it’s weird. Everything about computer security has changed in the past 20 years, but computers themselves haven’t. It’s the world around them that has. An article to be published in the February 2013 issue of Communications of the ACM sums up the situation pretty succinctly:     

“The role of operating system security has shifted from protecting multiple users from each other toward protecting a single…user from untrustworthy applications.…Embedded devices, mobile phones, and tablets are a point of confluence: The interests of many different parties…must be mediated with the help of operating systems that were designed for another place and time.”

The author of that article is Robert Watson. He advocates a fresh start for computing, what he calls a “clean slate.” He’s a senior research associate in the Security Research Group at the University of Cambridge and a research fellow at St John’s College, also at Cambridge. He’s also a member of the board of directors of the FreeBSD Foundation, and he’s my guest today by phone.

Robert, welcome to the podcast.

Robert Watson: Hi, Steven. It’s great to be with you.

Steven Cherry: Robert, computer security meant something very different before the Internet, and in your view, we aren’t winning the war. What’s changed?

Robert Watson: Right, that’s an excellent question. I think we have to see this in historical context.

In the 1970s and 1980s, the Internet was this brave new world, largely populated by academic researchers. It was used by the U.S. Department of Defense, it was used by U.S. corporations, but it was a very small world, and today we put everyone and their grandmother on the Internet. The systems we designed for those research environments, to solve really fundamental problems in communications, weren’t designed to resist adversaries. And we have to be careful when we talk about adversaries, but I think it’s fair to say that there were very weak incentives to attack those systems. As we moved banking and purchasing online, we produced a target, and that target didn’t exist in the 1990s. It does exist today.

Steven Cherry: Your research focuses on the operating system. But how much of computer security is currently built into the operating system?

Robert Watson: We’ve always taken the view that operating system security is really central to how applications themselves experience security. In historic systems, large multiuser computer systems, we had central servers or mainframes and lots of end users on individual terminals. The role of the OS was to help separate those users from each other, to prevent accidents, perhaps to control the flow of information: you didn’t want trade secrets leaking from one account on a system to another. When we had large time-sharing systems, we were forced to share computers among many different users. Operating systems have historically provided something called access control, which lets a user say, “This file can’t be accessed by that user.” It’s a very powerful primitive: it allows us to structure our work into groups and interact with each other, with users deciding at their own discretion what they will share and what they won’t.

The observation we make about these new end-user systems like phones is that what we’re trying to control is very different. The phone is a place where lots of different applications meet. I’m downloading software off the Internet, and that’s something we’ve always encouraged users to be very cautious about. We said, “Don’t just download random programs off the Internet; you never know where they came from.” You have no information on the provenance of the software. Yet on phones today, we encourage users to download things all the time. So what has changed? We’ve deployed something called sandboxing inside these phones, so that every application you download runs inside its own sandbox. That’s a very different use of security, but it’s still provided by the operating system, so it’s still a function of the operating system. The phone is trying to mediate between these applications, to prevent them from doing what people rather vividly describe as “bricking” the phone. So there are integrity guarantees that you want: you don’t want to damage the operation of the phone. But you also don’t want information to spread between applications in ways you don’t want.

Steven Cherry: Now, let’s talk about Clean Slate. This is research you’re conducting for the Department of Defense in the U.S., along with noted computer scientist Peter Neumann. Neumann was recently profiled in The New York Times, and he was quoted as saying that the only workable and complete solution to the computer security crisis is to study the past half-century’s research, cherry-pick the best ideas, and then build something new from the bottom up. What does that mean?

Robert Watson: That’s a great question, and it’s an interesting problem. The market is controlled by what people are willing to pay for a product, and one thing we know about the computer industry is that it’s very driven by the concept of “time to market.” You want to get things to the consumer as soon as possible, so you don’t do everything 100 percent right. You do it 90 percent right, or 70 percent right, because you can always issue updates later, or, once you’re doing a bit better in the marketplace, replace the parts; your second-generation users will expect something a little better than the early adopters, who were willing to take risks as they adopted the technology. So there’s a cycle that means we’re willing to ship things that aren’t quite ready. In computer science, when we look at algorithms that search for desired values in some large space, we have a term for this kind of strategy: hill climbing. The idea of hill climbing is that wherever you are, you look around at your set of choices. Do you adjust this parameter? Do you adjust that parameter? You pick the one that seems to take you closest to your goal, you repeat the process over time, and eventually you get to the top of the hill. There’s a risk in this strategy. It’s not a bad strategy; it does get you to the top of a hill. But it might be the top of the wrong hill.
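To make the analogy concrete, here is a minimal hill-climbing sketch in C (an illustration, not anything discussed in the interview): a greedy search over a one-dimensional function with two peaks. Starting near the smaller peak, the algorithm climbs to its top and stops there, the top of the wrong hill.

```c
#include <math.h>
#include <stdio.h>

/* A landscape with two hills: a small one near x = -1.5
   and a taller one near x = 2.0. Compile with: cc hill.c -lm */
static double f(double x) {
    return exp(-(x + 1.5) * (x + 1.5)) + 2.0 * exp(-(x - 2.0) * (x - 2.0));
}

int main(void) {
    double x = -2.0;          /* the starting point decides which hill we climb */
    const double step = 0.01;

    /* Greedy hill climbing: move whichever way improves f;
       stop when neither neighbor is better. */
    for (;;) {
        if (f(x + step) > f(x))
            x += step;
        else if (f(x - step) > f(x))
            x -= step;
        else
            break;
    }

    /* From x = -2.0 this stops near x = -1.50, f(x) = 1.0:
       a local maximum, not the global one near x = 2.0. */
    printf("climbed to x = %.2f, f(x) = %.3f\n", x, f(x));
    return 0;
}
```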

So what the Clean Slate approach advocates is not throwing the whole world away but taking a step back and asking: Have we been chasing the wrong goals all along? Or have we made the right choice at every given moment and still ended up at the top of the wrong hill? That’s really what it’s all about. Peter talks about a crisis, and I think it is a crisis. We can see what is effectively an arms race between the people building systems and the people attacking them on a daily basis. Every critical security update from your vendor, every new antivirus update (and these arrive daily or weekly) reflects the discovery and exploitation of vulnerabilities in the software we rely on to do our jobs. So we, the defenders, are clearly at something of a disadvantage.

And there’s an asymmetric relationship, as we like to say. The attacker has to find just one flaw to gain control of our systems; we, as defenders, have to close all of them. We must make no mistakes, and we cannot build systems that way. It’s just not a reliable way of doing it, and it doesn’t solve the problem. Antivirus is fundamentally reactive: it’s about detecting that somebody has broken into your machine and trying to clean up the mess left behind by poorly crafted malware that can’t defend itself against a knowledgeable adversary. It presupposes that they’ve gotten in, that they’ve gotten access to your data, that they could have done anything they wanted with your computer. That’s the wrong way to think about it. It’s not that we shouldn’t use antivirus in the meantime, but it can’t be the long-term answer; it means somebody else has already succeeded in their goal.

Steven Cherry: Yeah, I guess what you want to do is compartmentalize our software, and the New York Times article talked about software that shape-shifts to elude would-be attackers. How would that work?

Robert Watson: We could try to interfere with the mechanisms used to exploit vulnerabilities. A common exploit mechanism of the past is something called a buffer overflow attack. The vulnerability is that the bounds on a buffer inside the software are calculated incorrectly, and you overflow the buffer by sending more data than the original author expected. As you overflow the buffer, you manage to inject some code, to insert a new program that will get executed when the function you’re attacking returns. That allows the adversary to take control of your machine. Now, we could eliminate the bug that created the buffer overflow, but imagine for a moment that we’re unable to do that. We could instead interfere with the way the buffer overflow exploit works; we could prevent it from successfully getting code into execution. This is something we already try to do. Many contemporary systems deploy mitigation techniques; it’s hard to find an operating system that doesn’t. Windows, iOS, Mac OS X: they all deploy lots of mitigation techniques that attack exploit techniques.
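As a concrete illustration of the pattern Watson describes (my sketch, not code from the interview), here is the classic vulnerable idiom in C: a fixed-size stack buffer filled from attacker-controlled input with no bounds check, so oversized input overwrites adjacent stack memory, including the function’s saved return address.

```c
#include <stdio.h>
#include <string.h>

/* Vulnerable: buf holds 64 bytes, but strcpy copies until it sees
   a NUL terminator, however far past the end of buf that takes it. */
static void handle_request(const char *input) {
    char buf[64];
    strcpy(buf, input);              /* no bounds check: the bug */
    printf("handled: %s\n", buf);
}   /* if the saved return address was overwritten, this "return"
       jumps wherever the attacker chose */

int main(int argc, char *argv[]) {
    if (argc > 1)
        handle_request(argv[1]);     /* argv[1] may be any length */
    return 0;
}
```

The mitigations Watson mentions (stack canaries, address-space layout randomization, non-executable memory) target this exploit mechanism rather than the bug itself; replacing the strcpy with a bounds-checked call such as snprintf(buf, sizeof(buf), "%s", input) removes the bug.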

The mitigation we’re particularly interested in is one called compartmentalization, and the principle is fairly straightforward. We take a large piece of software, like a Web browser, we break it into pieces, and we run every one of those pieces in something called a sandbox. A sandbox is a container, if you will, and the software inside it is only allowed to do certain things with respect to the system outside it. A nice example of this is the Chrome Web browser: in Chrome, every tab is rendered inside a separate sandbox, and the principle is that if a vulnerability is exploited by a particular Web page, it’s not able to interfere with the contents of the other pages in the same browser.
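A minimal sketch of that idea (mine, not Chrome’s actual architecture), under the simplifying assumption that a compartment is just a separate process: each “tab” is rendered in its own child process, so a crash, or an exploited bug, in one is confined by the operating system while the others carry on. A real sandbox would also strip the child’s rights, which is where the least-privilege discussion below comes in.

```c
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Stand-in for rendering one "tab"; tab 1 misbehaves deliberately. */
static void render_tab(int id) {
    if (id == 1)
        raise(SIGSEGV);              /* simulate a crash from an exploited bug */
    printf("tab %d rendered fine\n", id);
}

int main(void) {
    for (int id = 0; id < 3; id++) {
        pid_t pid = fork();          /* one compartment (process) per tab */
        if (pid < 0)
            return 1;
        if (pid == 0) {
            render_tab(id);
            _exit(0);
        }
        int status;
        waitpid(pid, &status, 0);    /* the parent outlives any crash */
        if (WIFSIGNALED(status))
            printf("tab %d crashed (signal %d); other tabs unaffected\n",
                   id, WTERMSIG(status));
    }
    return 0;
}
```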

Originally this functionality was about robustness. What you don’t want is for a bug in the rendering of any one page to close all your other tabs, to crash the whole browser and force you to restart all your Web sessions, which feels almost like rebooting your computer. But Google noticed that it could align these sandboxes with those units of robustness, the per-tab processes, and use them to prevent undesired interference between tabs as well. So that’s a rudimentary example of compartmentalization. It does work, but there are some problems with it.

What we’d really like to do, though, is align these sandboxes, or compartments, with every individual task we’re trying to accomplish and the specific rights it needs. There’s an interesting idea called the principle of least privilege, first really articulated in the mid-1970s at MIT. What the principle says is that every individual piece of software should run with only the rights it requires to execute. If we run software that way, we can actually be successful at mitigating attacks, because when you exploit a vulnerability in a piece of software, whether it’s a buffer overflow or something more subtle, perhaps in the logic of the program itself, where we simply got the rules wrong, you gain some rights, but only the rights of that particular compartment.
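One concrete way to express least privilege is FreeBSD’s Capsicum framework, which Watson’s group at Cambridge helped develop (the sketch below is mine, and the file name is a placeholder; Linux seccomp and OpenBSD pledge play similar roles). The compartment opens the one resource it needs, limits the descriptor to read-only rights, and then enters capability mode, after which it can acquire no new rights at all.

```c
/* FreeBSD-specific: Capsicum capability mode. */
#include <sys/capsicum.h>
#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Acquire the one resource this compartment needs... */
    int fd = open("input.txt", O_RDONLY);    /* placeholder file name */
    if (fd < 0)
        err(1, "open");

    /* ...restrict the descriptor to read-only rights... */
    cap_rights_t rights;
    cap_rights_init(&rights, CAP_READ);
    if (cap_rights_limit(fd, &rights) < 0)
        err(1, "cap_rights_limit");

    /* ...then give up the ability to acquire anything new. */
    if (cap_enter() < 0)
        err(1, "cap_enter");

    char buf[128];
    ssize_t n = read(fd, buf, sizeof(buf));  /* still permitted */
    printf("read %zd bytes\n", n);

    /* From here on, any attempt to reach outside the compartment,
       such as open("/etc/passwd", O_RDONLY), fails with ECAPMODE. */
    return 0;
}
```

After cap_enter(), even a successful buffer overflow in this process would yield the attacker only a read-only view of one file, which is exactly the point about gaining “only the rights of that particular compartment.”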

For example, we’d really like an application not to be able to see what’s going on in your online banking. As users, it seems natural to us that that should be the case, but it requires very granular sandboxing, and this is part of where our Clean Slate research comes in: current computer systems were not designed to provide that granularity of sandboxing.

Steven Cherry: You’ve used the word “fundamental” a couple of times, and I think what you’re advocating is really fundamental. It’s in some ways changing the entire 60-year paradigm of computing, abandoning what’s sometimes called the von Neumann architecture. This is a different Neumann, John von Neumann, who coinvented game theory as well as the modern computer. In that architecture, we don’t even keep code and data in separate places, let alone separate sandboxes. Am I right in thinking it’s that fundamental, and do you think the discipline of computer science is really ready for such a fundamental change?

Robert Watson: Well, it’s an interesting question. The von Neumann architecture, as you suggest, was originally described in a paper in the mid-1940s, on the heels of the success of systems like ENIAC. What John von Neumann says, and there are a number of aspects to the architecture, is that if we store the program in the same memory that we store data in, we gain enormous flexibility. It opens the door to ideas like compilers, which allow us to describe software at a high level and have the computer itself write the code that it’s later going to run. It was a pretty fundamental change in the nature of computing.

I don’t want to roll back that aspect of computing, but we have to understand that many of the vulnerabilities we suffer from today are a direct consequence of that design. I talked a moment ago about code injection via buffer overflow, where I, as the attacker, can send you something that exploits a bug and injects code. This is a very powerful model for an attacker. Suppose for a moment I couldn’t do that. Then I’d be looking for vulnerabilities that directly correspond to my goals as the attacker; I’d have to find, say, a logic bug that leaks the particular information I want, and perhaps I could find one. But it’s much more powerful for me to be able to send you new code that you’re going to run on the target machine directly, giving me complete flexibility.

So, yes, we want to revisit some of these ideas. I’d make the observation that the things that are really important to us, that we want to perform really well, that have to scale extremely well so that there can be lots and lots of them, are the things we put in low-level hardware. The reason we do that is that they often have aspects of their execution that perform best when they’re directly catered to by the processor design. A nice example is graphics processing. Today, every computer and every mobile device ships with something that just didn’t exist in computers 10 or 15 years ago: a graphics processing unit, a GPU. You don’t buy systems without them. They’re what makes it possible to blend images, render animations at high speed, and display the kind of snazzy three-dimensional graphics we see on current systems. It’s hard to imagine life without them.

The reason graphics was pulled into the architecture is that we could make it dramatically faster by supporting it directly in hardware. If we now decide that security is extremely important to us, because of the costs and consequences of getting it wrong, there’s a strong argument for pulling security into hardware too, if that gives us a dramatic improvement in scalability.

Steven Cherry: Well, Robert, it sounds like we’re still in the early days of computing. In car terms, we’re still in maybe the 1950s. The MacBook Pro is maybe a Studebaker Starliner, and the Air is a 1953 Corvette, and it’s up to folks like you to lay the groundwork for the safe Volvos and Subarus of tomorrow. In fact, also for making our cars safe from hackers, I guess, but that’s a whole other show. Thanks, and thanks for joining us today.

Robert Watson: Absolutely. I think your comparison is a good one. The computer world is still a very fast-moving industry, and we don’t know what systems will look like when we’re done. I think the only mistake we could make is to think that we are done, that we have to live with the status quo. There’s still the opportunity to revise fundamental thinking here while maintaining some of the compatibility we want: we can still drive on the same roads, but we can change the vehicles we drive on them. Thanks very much.

Steven Cherry: Very good. Thanks again.

We’ve been speaking with Robert Watson about finally making computers more secure, instead of less.

For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.

This interview was recorded 5 December 2012.
Segment producer: Barbara Finkelstein; audio engineer: Francesco Ferorelli

Read more “Techwise Conversations” or follow us on Twitter.

NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.
