Computer security is war. Security experts are in a constant arms race against the ingenuity of hackers. What’s more, the security specialists are often working with one hand tied behind their backs because, unlike their adversaries, they are constrained by privacy concerns. But researchers at Princeton University have developed a new chip architecture that could one-up the hackers while easing the tension between the opposing poles of security and privacy.
Princeton electrical engineering professor Ruby B. Lee and her co-workers invented what she calls the Secret Protection computer architecture. Computers built using the new technology can receive sensitive information for a short time (designated by a trusted authority), after which the information is cleared out of the device that accessed it. Lee expects the first adopters of this architecture to be makers of devices for first responders, such as firefighters and EMTs. But, she says, it can be adapted to any mainstream microprocessor to prevent infection by worms and viruses and to theft-proof movies and other downloaded media.
In the first-responder scenario, an EMT may encounter a person who has been in a car accident and who may be allergic to a medication but is unconscious and unable to communicate. With Lee's architecture, a device given to the EMT by a hospital, for instance, could obtain a patient's entire medical record, but the EMT could access only the information relevant to this emergency. And after a preset period of time, that information would vanish from every memory location in the machine. "The hospital can deliver these partial medical records so the guy in the field doesn't know everything about you," says Roger Golliver, a senior principal engineer and cryptography expert at Intel.
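The time-limited access described above can be sketched in a few lines. This is an illustrative model only, not Lee's design: the class name and API are hypothetical, and a real implementation would enforce the expiry and erasure in hardware, scrubbing every memory location rather than a single object as Python does here.

```python
import time

class TransientRecord:
    """Toy model of a record that expires after a lifetime set by
    a trusted authority (here, just a constructor argument)."""

    def __init__(self, data: str, lifetime_s: float):
        self._data = data
        self._expires = time.monotonic() + lifetime_s

    def read(self) -> str:
        if time.monotonic() >= self._expires:
            self._data = ""  # clear the secret from this object
            raise PermissionError("record expired and erased")
        return self._data
```

In the article's scenario, the hospital would play the role of the trusted authority, choosing both which fields of the record to release and how long they remain readable.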
The vanishing act is possible because in Lee’s design, the security is built in as a primary hardware mechanism, not as a software afterthought. The Secret Protection architecture relies on two elements embedded in the device hardware: the storage root key and the storage root hash.
The storage root key functions like a master key that enables access to secure files. "It's something that opens a safe that contains another key for each file," Golliver says. The storage root hash, by contrast, is like an alarm system, ensuring that the contents of the safe haven't been tampered with. If the hash is compromised, the data is erased or marked as suspicious.
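Golliver's safe-and-alarm analogy can be made concrete with a toy sketch. This is not real cryptography and not Lee's actual scheme: the XOR key-wrapping, the `safe` dictionary, and all function names are assumptions made for illustration. It shows only the division of labor: the root key unwraps per-file keys, and the root hash is checked before any key is released.

```python
import hashlib
import hmac
import os

ROOT_KEY = os.urandom(32)  # stands in for the master key held in hardware

def wrap(file_key: bytes, label: bytes) -> bytes:
    # Derive a per-file pad from the root key and XOR-wrap the file key.
    pad = hmac.new(ROOT_KEY, label, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(file_key, pad))

def unwrap(wrapped: bytes, label: bytes) -> bytes:
    return wrap(wrapped, label)  # XOR wrapping is its own inverse

# The "safe": wrapped per-file keys, plus a root hash over its contents.
safe = {b"records": wrap(os.urandom(32), b"records")}
root_hash = hashlib.sha256(b"".join(safe.values())).digest()

def open_safe(label: bytes) -> bytes:
    # The "alarm system": verify the root hash before releasing any key.
    if hashlib.sha256(b"".join(safe.values())).digest() != root_hash:
        raise RuntimeError("tamper detected: refusing to unwrap keys")
    return unwrap(safe[label], label)
```

If an attacker alters any wrapped key in the safe, the recomputed hash no longer matches the stored root hash, and `open_safe` refuses to release anything, mirroring the erase-or-flag behavior the article describes.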
Lee’s system runs constant checks to make sure these hashes haven’t been hacked. Because the root key, the root hash, and the software they protect are linked in Lee’s architecture, "you can wipe out the roots, but if you do that, you also wipe out all the trusted software they protect," says Lee.
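The linkage Lee describes, where destroying the roots also destroys access to the trusted software, can be sketched as follows. The binding scheme below (XOR with a hash-derived pad) is a stand-in invented for this example, not the architecture's actual mechanism; the point is only that software stored in a form bound to the roots becomes unrecoverable once the roots are zeroized.

```python
import hashlib
import os

class Device:
    def __init__(self, software: bytes):
        self.root_key = os.urandom(32)
        # Bind the trusted software to the root key; only the bound
        # form is ever stored on the device.
        pad = hashlib.sha256(self.root_key).digest() * (len(software) // 32 + 1)
        self.blob = bytes(a ^ b for a, b in zip(software, pad))
        self.root_hash = hashlib.sha256(self.root_key + self.blob).digest()

    def wipe_roots(self) -> None:
        self.root_key = b"\x00" * 32  # destroy the master secret

    def load_software(self) -> bytes:
        # Once the roots are gone, the integrity check fails and the
        # bound software can no longer be recovered.
        recomputed = hashlib.sha256(self.root_key + self.blob).digest()
        if recomputed != self.root_hash:
            raise RuntimeError("roots wiped: trusted software unrecoverable")
        pad = hashlib.sha256(self.root_key).digest() * (len(self.blob) // 32 + 1)
        return bytes(a ^ b for a, b in zip(self.blob, pad))
```

This is what makes the wipe meaningful as a security measure: an attacker who erases the roots to cover their tracks also bricks everything those roots protected.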
Lee adds that her architecture can go into any mainstream microprocessor, but Golliver has some doubts. "In her model, a trusted authority provides the device," he says. "That's harder to do in the PC world." Privacy advocates, he recalls, went ballistic in 1999 when Intel put a hardware-based processor serial number into each of its Pentium III chips. The resulting uproar forced Intel to abandon the practice.
Lee counters that with her system, no key is burned into the device by the manufacturer; rather, a PC owner could act as his own "trusted authority," initializing the device himself in order to protect his information.