Hardware Trick Could Keep Cloud Data Safe

Machiavellian memory management confuses attackers who want to steal private data

One thing is certain in this day and age: Somewhere out there, somebody wants your data. So how do you make sure all of it stays protected, especially now that ever more computing is done in the cloud?

At MIT, researchers say they’ve designed a chip that solves this problem. They presented it last June at the International Symposium on Computer Architecture, in Tel Aviv. Dubbed Ascend, the component hides the way CPUs request information in cloud servers, making it immensely difficult for attackers to glean information about the data stored there. Such a hardware-reliant scheme is an unusual proposition in the realm of cloud security, which is dominated by software solutions.

The researchers assume that sensitive data on cloud servers is already encrypted—typically the first line of defense when it comes to data security. Ascend goes a step further, its designers say, by dealing with sneak attacks that can happen through various so-called side channels. In a side-channel attack, an observer measures things like computation time, memory traffic, and power consumption to infer the behavior of a program running on that hardware, and from that behavior can glean something about the data being processed.

According to Srinivas Devadas, a professor of computer science and electrical engineering at MIT who is part of the Ascend team, an attacker might examine requests made by a computer program to access certain “memory addresses”—specific numbers assigned to each of the bytes in a computer’s main memory, which is arranged as a giant indexed array. When a computer is told to perform a process, the memory controller—acting as an intermediary between the main memory and the processor—reads an address and grabs the data corresponding to that memory address for the CPU to use. After the program does this many times for different pieces of encrypted data, an attacker watching the pattern of requests can begin to guess what the coded information actually holds. “If you’re searching on the Internet and looking at particular websites over and over, an observer can kind of tell what you’re interested in,” says Devadas.
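
To see why the access pattern alone can be revealing, consider a toy sketch in Python (the addresses, record names, and traffic below are invented for illustration and are not the MIT team’s attack model): the observer never decrypts anything, yet the addresses it logs show which encrypted record the program keeps returning to.

```python
# Toy illustration of an access-pattern leak: the data values are opaque
# ciphertexts, but the addresses being fetched are visible to an observer.
# Addresses, record names, and traffic are made up for this sketch.
import random
from collections import Counter

records = {0x1000: "<ciphertext A>", 0x2000: "<ciphertext B>", 0x3000: "<ciphertext C>"}
observed = []  # what a side-channel observer would log

def fetch(address):
    observed.append(address)   # the leak: which address was touched
    return records[address]    # the contents themselves stay encrypted

# The program repeatedly consults record B, mixed with other lookups.
for _ in range(20):
    fetch(0x2000)
    fetch(random.choice([0x1000, 0x3000]))

# Without reading a single plaintext byte, the observer sees what is "hot".
print(Counter(observed).most_common(1))   # [(8192, 20)] -> address 0x2000
```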

One way to potentially guard against such an attack is to access every single memory address whenever a program needs data, thus hiding the user’s real intentions—and his data. But Devadas says this is an unrealistic solution that trades too much computational overhead for security. “Let’s say the memory had the size of a megabyte. If I accessed every location in those megabytes…then I would be accessing a million memory locations for every one,” he says.

What Ascend does instead is alter the pattern of memory-access events. It does this by rearranging all the memory addresses into a binary-tree structure. This configuration resembles a family tree, in which each node is connected to only one “parent” node but can have up to two “child” nodes. Each memory address is randomly assigned to a node, and every node lies along a path that starts at the root and ends at one of the tips of the tree. In this new architecture, when a process requests data, the memory controller doesn’t read just the address of the data being requested; it reads an entire path of addresses from root to tip that includes the one it’s really interested in. “Instead of touching every node in the tree, which is basically every memory location,” Devadas says, “you’re only touching a single path in the tree at any given time to access something.” The technique, known as Oblivious RAM, had existed in theory for a while, but this is the first time it has been integrated directly into a processor design.
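
The path-reading idea can be sketched in a few lines of code. The snippet below is a heavily simplified, software-only illustration of the Oblivious RAM concept described above, not the Ascend hardware; it omits the stash and eviction machinery a real design needs, and the class and method names are invented. Each access touches only the handful of nodes on one root-to-tip path, and the accessed block is then remapped to a fresh random path so repeated requests for the same data look unrelated.

```python
# A minimal, simplified sketch of path-based Oblivious RAM. Illustrative only:
# not Ascend's hardware design, and missing the stash/eviction steps of a
# complete Path ORAM.
import random

class PathOramSketch:
    def __init__(self, num_leaves):
        # Complete binary tree stored in heap order: node 1 is the root,
        # node i has children 2*i and 2*i + 1. Leaves occupy
        # [num_leaves, 2 * num_leaves).
        self.num_leaves = num_leaves
        self.position = {}  # block id -> leaf index it is currently assigned to

    def _path_nodes(self, leaf):
        # Every node from the root down to `leaf` -- the full set of
        # locations the memory controller would touch for one access.
        node = self.num_leaves + leaf
        path = []
        while node >= 1:
            path.append(node)
            node //= 2
        return list(reversed(path))  # root first

    def access(self, block_id):
        # Look up (or randomly create) the leaf this block maps to.
        leaf = self.position.setdefault(block_id, random.randrange(self.num_leaves))
        touched = self._path_nodes(leaf)
        # Remap the block to a fresh random leaf so the next access to the
        # same block reads a different, independent-looking path.
        self.position[block_id] = random.randrange(self.num_leaves)
        return touched

oram = PathOramSketch(num_leaves=8)
print(oram.access("secret-record"))  # e.g. [1, 3, 6, 13]
print(oram.access("secret-record"))  # a fresh random path the next time
```

For a memory of a million locations, one such path is only about 20 nodes long, which is why this approach beats the naive “read everything” defense Devadas dismisses above.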

Ascend also protects against a timing attack—a side-channel attack in which the time it takes a cloud server to complete a calculation leaks information to an enemy. With Ascend, memory access is strictly periodic—occurring, for example, every 10 microseconds or every microsecond, depending on what type of problem is being solved—so there’s no meaningful timing information that an attacker can derive.
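
A rough software analogue of that fixed cadence is sketched below, assuming a chosen interval and dummy requests to fill idle slots; the interval, request format, and scheduler here are illustrative assumptions, not Ascend’s actual hardware logic. Real requests and dummies are indistinguishable from outside, because the spacing stays constant whether or not the processor actually needed memory.

```python
# Toy sketch of strictly periodic memory access: one request is serviced every
# fixed interval, and a dummy request at a random address fills the slot when
# no real request is pending. Illustrative only.
import random
import time
from collections import deque

MEMORY = bytearray(1 << 20)  # stand-in for off-chip memory

def service(address):
    # Touch one byte; in hardware this would be a real DRAM access.
    MEMORY[address] = (MEMORY[address] + 1) % 256

def run_periodic(pending, interval_s, slots):
    for _ in range(slots):
        start = time.monotonic()
        # Real request if one is waiting, otherwise a dummy at a random
        # address -- an outside observer sees identical, evenly spaced traffic.
        address = pending.popleft() if pending else random.randrange(len(MEMORY))
        service(address)
        # Sleep out the remainder of the slot so accesses are exactly periodic.
        time.sleep(max(0.0, interval_s - (time.monotonic() - start)))

run_periodic(deque([42, 1337]), interval_s=0.00001, slots=5)  # 10-microsecond slots
```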

Tim Sherwood, a professor of computer science at the University of California, Santa Barbara, who was not involved in the research, says that the work the Ascend team has done is quite novel. “If you want to make it hard for an adversary to tell what’s going on on a chip, you could make it look like everything that that chip could possibly do is happening every cycle,” he says. “Then you only select out the parts that you want to keep. But the adversary can’t distinguish that from the fact that all these other things are all happening every single cycle.”

Devadas admits that the resources Ascend consumes are not insignificant; it does appreciably more work than simply reading one memory address per data request. But, he says, “that’s the cost of security.” The slowdown depends on the types of applications being run and the resources each one requires: compute-bound applications suffer less under Ascend than memory-bound ones. Preliminary tests run by the MIT group have shown that when an application was constrained by computation, the overhead was less than a factor of 2; when it was constrained by memory access, the overhead was a factor of 10. To put this into context, “instead of taking 1 minute to calculate a million digits of pi, Ascend would take 2 minutes,” says Devadas. “If you are comparing a set of images against a given image and you are accessing memory a whole lot, the rate of image comparisons could be as much as 10 times less.”

Devadas and his collaborators have yet to build a physical prototype, but they have run experiments on a number of virtual Ascend configurations. So far, the group has discovered that its scheme is best served by a larger-than-usual on-chip memory cache and fewer processing elements. Devadas explains that overhead for memory-bound applications can be reduced the most by “changing the on-chip cache size, increasing it.”

Essentially, the position the MIT group is taking when it comes to cloud computing is that incorporating security into hardware—as opposed to using software solutions—is the better option. According to Devadas, this is because unlike buggy software, shoddily built hardware cannot be fixed with quickly released patches. Therefore, designing and building hardware components requires more careful planning, and the product is usually much more stable than software. With Ascend, “security has been baked in, and you can trust it,” he says.

Sherwood did acknowledge one possible vulnerability in Ascend: Suppose an attacker got hold of the component and physically dismantled it. The person might then be able to reverse-engineer how it is programmed and be able to choose which memory addresses to read. But he also remarked that this would be a much more significant undertaking.

Johns Hopkins University cryptography researcher Matthew D. Green agrees that the Ascend design is valuable—especially, he asserts, since making secure software has proven to be such a challenge. “It may not even be possible,” Green says. “I’m not saying that hardware is perfect, just that it’s a lot more expensive to attack hardware…to the point that people probably aren’t going to be able to do it in practice.”
