One of the most frequently mentioned fears about future quantum computers is that they will someday crack our encryption codes and lay all our digital secrets bare. Although that possibility remains far off, cryptographers are already taking the threat very seriously.

The solution seems to be to develop one or more classes of encryption schemes that classical computers can use but quantum computers can’t crack. Less than two weeks ago, the U.S. National Institute of Standards and Technology reported that its search for quantum-proof algorithms had reached the semifinals stage. Following a year-long evaluation, the agency has narrowed the field down to 26 algorithms, most of which fall into three broad families.

Now, at the IEEE International Solid-State Circuits Conference on Monday in San Francisco, engineers from MIT have reported the creation of an encryption system that performs one of these schemes on a chip small enough and energy-efficient enough to guard battery-powered nodes on the Internet of Things from future quantum attack.

The MIT engineers focused on one family of post-quantum algorithms, called lattice-based cryptography. The name comes from a way of picturing the problems that would need to be solved to crack this kind of encryption. Imagine a two-dimensional grid with points scattered across it. Finding the shortest vector from an arbitrary spot to the nearest lattice point might not seem too difficult, but expand the grid into three dimensions, then 1,000, then 10,000, and the problem becomes enough to occupy today’s computers for years.
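To get a feel for why dimension matters, here is a minimal sketch (my own illustration, not the MIT design) of the brute-force approach to finding the nearest lattice point. The function names and the toy basis are invented for this example; the key observation is that the number of candidates grows as `len(coeff_range) ** dim`, so the same search that is trivial in 2 dimensions becomes astronomically large in 1,000.

```python
import itertools

def closest_lattice_point(basis, target, coeff_range):
    """Brute-force the lattice point nearest to `target`.

    The lattice consists of all integer combinations of the basis
    vectors. We try every coefficient tuple in coeff_range^dim, so
    the search space is exponential in the dimension.
    """
    dim = len(basis)
    best_point, best_dist = None, float("inf")
    for coeffs in itertools.product(coeff_range, repeat=dim):
        point = [sum(c * b[i] for c, b in zip(coeffs, basis))
                 for i in range(dim)]
        dist = sum((p - t) ** 2 for p, t in zip(point, target)) ** 0.5
        if dist < best_dist:
            best_point, best_dist = point, dist
    return best_point, best_dist

basis = [[2.0, 0.0], [0.5, 1.5]]   # a toy 2-D lattice
target = [3.3, 2.1]                # an arbitrary spot off the lattice
point, dist = closest_lattice_point(basis, target, range(-5, 6))
```

With 11 coefficients per dimension, the 2-D search above checks 121 candidates; in 1,000 dimensions the same range would mean 11^1000 of them, which is why no known shortcut for general lattices makes this practical at cryptographic sizes.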

“Lattice-based cryptography is a promising candidate because of its small public key and signature sizes,” MIT doctoral student Utsav Banerjee told engineers at the conference. The problem that computers must solve in order to crack the encryption is called the learning with errors problem. Encryption based on that problem requires generating matrices of numbers that each have certain characteristics. Banerjee says that generating those numbers was the biggest computational constraint, while storing the vectors for the key-exchange computations took up the most area on the chip.
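A toy learning-with-errors instance can make the structure concrete. The sketch below (my own illustration, with deliberately tiny parameters; real schemes use dimensions in the hundreds) builds a random matrix `A` and publishes `b = A·s + e mod q`. Without the small error vector `e`, recovering the secret `s` would be simple linear algebra; the noise is what makes the problem hard.

```python
import random

q = 3329   # the modulus used by Kyber; any suitable prime works for this toy
n = 8      # toy dimension only; real parameters are far larger

def lwe_instance(n, q, noise=3):
    """Generate a toy learning-with-errors instance (A, b) hiding secret s.

    b differs from A·s (mod q) only by a small per-row error, so each
    row of (A, b) is a 'noisy equation' in the unknowns s.
    """
    s = [random.randrange(q) for _ in range(n)]                    # secret
    A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
    e = [random.randint(-noise, noise) for _ in range(n)]          # small errors
    b = [(sum(a * si for a, si in zip(row, s)) + ei) % q
         for row, ei in zip(A, e)]
    return A, b, s

A, b, s = lwe_instance(n, q)
```

The "certain characteristics" the article mentions correspond to constraints like these: `A` must be uniform mod `q`, while `e` must be small and drawn from a specific distribution.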

In order to reduce the computational load, the MIT group first implemented an efficient pseudorandom number generator, based on the SHA-3 hash function, that was two to three times as energy efficient as two other candidates. For the encryption algorithm to work, the random numbers must follow certain statistical distributions. That required three different sampling algorithms designed to reject numbers that didn’t fit. The team chose algorithms that produced significant energy savings when implemented in silicon; one used just 1/16th the energy of a competing algorithm.
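The combination of a SHA-3-based generator and rejection of out-of-range numbers can be sketched in software. The snippet below is an assumption-laden illustration, not the chip's circuitry: it expands a seed with SHAKE-128 (one of the SHA-3 family of functions) and keeps only 12-bit candidates below the modulus, so the surviving values remain uniformly distributed.

```python
import hashlib

q = 3329  # example modulus (Kyber's); candidates >= q are rejected

def sample_uniform_mod_q(seed: bytes, count: int, q: int = q):
    """Expand `seed` with SHAKE-128 and rejection-sample values mod q.

    Each 3 bytes of hash output yield two 12-bit candidates; any
    candidate >= q is thrown away rather than reduced mod q, which
    would bias the distribution.
    """
    out = []
    block = 0
    while len(out) < count:
        stream = hashlib.shake_128(
            seed + block.to_bytes(4, "little")).digest(168)
        for i in range(0, len(stream) - 2, 3):
            d1 = stream[i] | ((stream[i + 1] & 0x0F) << 8)
            d2 = (stream[i + 1] >> 4) | (stream[i + 2] << 4)
            for d in (d1, d2):
                if d < q and len(out) < count:
                    out.append(d)   # accept only in-range candidates
        block += 1
    return out

coeffs = sample_uniform_mod_q(b"example seed", 256)
```

Because the output is a deterministic function of the seed, both sides of a key exchange can regenerate the same matrix from a short shared seed instead of transmitting it, which is part of why the public keys stay small.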

Solving the key-exchange equation accounted for only 30 percent of the computation but took up more than 75 percent of the chip area, mostly as banks of SRAM memory. It could have been worse: by redesigning the connections among the memory banks (and proving that doing so wouldn’t compromise security or efficiency), Banerjee and his colleagues were able to cut about 124,000 logic gates from the total.

The MIT team, which includes Abhishek Pathak and dean of engineering Anantha P. Chandrakasan, tested their silicon with four of NIST’s candidate algorithms—Kyber, NewHope, R-EMBLEM, and LIMA. The chip was 28 times as energy efficient running Kyber as an ARM Cortex-M4 doing the same algorithm in software. For NewHope it was 37 times as efficient. On NewHope, it consumed an average of only 516 microwatts.

Banerjee’s next goal is to ensure that the chip is immune to what are called side-channel attacks. These are ways to steal data indirectly, through things such as changes in a chip’s power consumption, the energy it radiates, or how long certain operations take. He has already shown that key parts of the system produce their results at a constant rate, so the chip should resist timing attacks. However, he hasn’t yet tested it against power and electromagnetic-radiation attacks.
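The timing channel is easiest to see in software. In this illustrative sketch (the principle, not the chip's hardware), a naive byte comparison returns at the first mismatch, so its runtime reveals how long the matching prefix is; a constant-time comparison examines every byte regardless.

```python
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    """Returns at the FIRST mismatching byte, so an attacker timing
    many guesses can recover a secret one byte at a time."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    """hmac.compare_digest takes the same time wherever the first
    mismatch occurs, closing the timing channel."""
    return hmac.compare_digest(a, b)
```

Showing that an operation runs at a constant rate, as Banerjee did for key parts of the system, is the hardware analogue of preferring the second function over the first.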
