In a high-security computer center, there’s one machine that’s nearly impossible to break into. The systems that store and serve up cryptographic keys are physically protected from even the kinds of subtle attacks that belong in spy movies: X-rays, drill bits a fraction of a millimeter wide, electromagnetic snooping.
These so-called hardware security modules (HSMs) are protected by a battery-powered mesh of micrometer-scale wires embedded in special resin, and they store cryptographic keys in volatile memory that is automatically wiped if the mesh experiences even a minute amount of damage. The tiniest drill bit, for example, will result in open circuits, short circuits, or other changes in resistance that the system instantly detects.
It’s hard to say if anyone has ever succeeded in penetrating an HSM, because it’s not the sort of thing that companies crow about. But engineers from three institutions in Munich think they can do better. In particular, they see potential problems with the reliance on a battery and the memory system.
In today’s HSMs, a resistance-changing penetration “destroys any secret material stored inside the circuitry,” explains Georg Sigl, director of the Fraunhofer Institute for Applied and Integrated Security (AISEC) and chair of security in information technology at Technische Universität München. However, when the battery attached to the HSM dies, “everything inside is destroyed as well. The lifetime of the system is determined by the battery lifetime, and that includes storage time.”
This dependence on the battery also limits what kinds of environments the system can survive on its way to being plugged into a secure computer center. If it gets too cold during transport, the battery could die, or its environmental surveillance circuits could trip. The system would then destroy its cryptographic keys, which are installed before the HSM is sealed, and the system would essentially be dead on arrival.
The need for continuous power “also limits applications to fixed mountings in stable environments,” says Matthias Hiller, head of the physical security group at AISEC. “You can’t use it in moving vehicles or on a person.”
Their solution, called B-Trepid, replaces the stored key with one generated by the structure of the envelope itself. Instead of relying on the resistances of the envelope’s mesh, B-Trepid measures the capacitances between the mesh’s wires. These femtofarad-scale capacitances vary from envelope to envelope in unpredictable ways, giving each envelope a unique electrical signature. Because that signature is impossible to reproduce in practice, it qualifies as a physical unclonable function (PUF): a characteristic unique to each item that can serve as a cryptographic key.
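The core idea, turning a set of device-specific analog measurements into a repeatable digital key, can be sketched in a few lines. The helper name, quantization step, and capacitance values below are hypothetical; real PUF systems additionally store error-correcting “helper data” so that noisy measurements still reproduce the same key bit-for-bit.

```python
import hashlib

def capacitances_to_key(capacitances_fF, step_fF=0.5):
    """Quantize femtofarad capacitance readings and hash them into a
    fixed-length key. Quantization absorbs small measurement noise;
    production systems layer error correction on top of this."""
    quantized = tuple(round(c / step_fF) for c in capacitances_fF)
    return hashlib.sha256(repr(quantized).encode()).digest()

# Hypothetical pairwise capacitances measured from two different envelopes:
envelope_a = [12.3, 47.9, 33.1, 20.6]
envelope_b = [15.7, 42.1, 36.8, 19.2]

key_a = capacitances_to_key(envelope_a)
key_b = capacitances_to_key(envelope_b)

# Distinct envelopes yield distinct keys...
assert key_a != key_b
# ...while a slightly noisy re-measurement of the same envelope does not:
assert capacitances_to_key([12.35, 47.85, 33.05, 20.55]) == key_a
```

Because the key is recomputed from the physical mesh on demand, it never has to be written into nonvolatile storage.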
When B-Trepid is attached to its computer network and turned on, the external mesh generates a unique key that is used to encrypt all the data within the system. When it’s off, there’s no key and therefore nothing to steal. So there’s no need for a battery. And if the envelope is penetrated, which Sigl’s team did using a 0.3-millimeter drill, its capacitances shift. This alters the physical unclonable function, automatically rendering any data within the system unreadable.
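That failure mode can be illustrated with a toy model (hypothetical numbers; the key-derivation helper below is an illustrative sketch, not the team’s actual pipeline): drilling shifts the capacitances near the hole, so the key regenerated at the next power-on no longer matches the one that encrypted the data.

```python
import hashlib

def mesh_key(capacitances_fF, step_fF=0.5):
    # Quantize the mesh's capacitance signature and hash it into a key.
    q = tuple(round(c / step_fF) for c in capacitances_fF)
    return hashlib.sha256(repr(q).encode()).digest()

# Power-on: the intact envelope regenerates its key from measurement.
intact = [12.3, 47.9, 33.1, 20.6]
key_at_boot = mesh_key(intact)

# A drill hole shifts the capacitances near the penetration point.
drilled = list(intact)
drilled[1] = 44.2  # hypothetical post-attack value

# The regenerated key no longer matches, so anything encrypted under
# key_at_boot is effectively unreadable -- no active erasure needed.
assert mesh_key(drilled) != key_at_boot
```

Unlike a battery-backed design, nothing has to be actively wiped: the attack itself destroys the only source of the key.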
“We currently have prototypes, and the concept works,” says Sigl. “We now must develop serious production capabilities for the [envelope] and improve integration into systems.”
The Fraunhofer team, including Vincent Immler, Johannes Obermaier, and Jan Koenig, presented the system this week at the IEEE International Symposium on Hardware Oriented Security and Trust, in Washington, D.C.