
Theory Lowers the Speed Limit for Quantum Computing

A lower theoretical speed limit for quantum entanglement puts new bounds on future quantum computers

Photo-illustration: Randi Klett; Images: iStockphoto

Today’s quantum computing systems have just begun hinting at how future versions might outperform classical computers at solving certain complex problems. But new research has lowered the theoretical speed limit that future quantum computers will eventually run up against.

Quantum computing systems have the potential to perform certain calculations much faster than classical computers by using quantum bits, or qubits, which rely on the phenomenon known as superposition to represent information as both 1 and 0 at the same time. Such systems could also exploit another physical phenomenon, quantum entanglement, in which a single qubit shares its information state with many other qubits. But the latest calculations by the U.S. National Institute of Standards and Technology place a new speed limit on how quickly entanglement can be established between distant qubits.
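
As a concrete example (ours, not the article’s): the simplest entangled state of two qubits is a Bell state, in which neither qubit has a definite value on its own, yet the pair is perfectly correlated:

\[ |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}} \left( |00\rangle + |11\rangle \right) \]

Measuring either qubit yields 0 or 1 at random, but the two outcomes always agree, no matter how far apart the qubits are.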


“Previous results suggested that the time needed for entanglement to spread throughout a system can be very small when interactions between qubits are long-ranged, leaving open the possibility of very fast transfer of information when interactions are long-ranged,” says Michael Foss-Feig, a physicist at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland. “Our result places a tighter constraint on how much time you need to distribute information and entanglement across a system of a given size.”

Foss-Feig was lead author of the NIST paper, which appeared in the 13 April issue of the journal Physical Review Letters. He and his colleagues built on two previous papers that examined theoretical speed limits on the spread of quantum information. The first, published in 1972 by Elliott Lieb and Derek Robinson, established a finite speed limit for quantum information, known thereafter as the Lieb-Robinson bound, for systems with short-range interactions between neighboring qubits.
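
The article doesn’t write the bound out, but in its standard textbook form the Lieb-Robinson bound limits how much an operation on one region can affect a distant one. For observables A and B acting on regions separated by a distance d (with C a constant, v the Lieb-Robinson velocity, and ξ a decay length):

\[ \left\| [A(t), B] \right\| \le C \, \|A\| \, \|B\| \, e^{-(d - v|t|)/\xi} \]

Outside the effective “light cone” d > v|t|, this commutator, and with it any possible signal, is exponentially small, so information travels no faster than roughly v.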

A second study, published in 2005, suggested that when interactions are long-ranged, the time needed for quantum information to propagate might grow only logarithmically, which is to say extremely slowly, with distance. In other words, quantum computers could get a “qualitatively important speedup” by incorporating long-range interactions between qubits.
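
Roughly speaking (this gloss is ours, not the article’s), for interaction strengths that fall off with distance r as a power law 1/r^α, the 2005-style bounds allowed the time t needed to send a signal across a distance d to grow only as

\[ t \gtrsim \frac{\log d}{v} \]

rather than the t ≳ d/v demanded by the Lieb-Robinson bound for short-range interactions.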

But the recent NIST research tightens the speed limit on the spread of quantum information over long distances. Its mathematical proof shows that the time required for quantum information to spread across a system grows almost in proportion to the system’s size, bringing the speed limit for long-range interacting systems much closer to the limit for short-range interacting systems. Foss-Feig explains:

Our contribution has been to recognize that the bounds from the 2005 paper, while important, were qualitatively not tight. Those bounds suggested that quantum information could propagate much faster than is possible. So we’ve refined that picture and pushed it much closer to the picture that exists for short-range qubit interactions. We speculate that in many cases you could push the bounds for long-range interacting systems all the way up to the bounds for short-range interactions. We’ve already gone a good fraction of the way there.
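
Schematically, and with the caveat that the precise exponent depends on how quickly the interactions decay with distance, the new proof replaces the logarithmic bound with a nearly linear one of the form

\[ t \gtrsim d^{\,\eta}, \qquad \eta \lesssim 1, \]

so the signaling time grows almost in proportion to the distance, much as it does for short-range interactions.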

Experiments on today’s quantum computing systems, which entangle neighboring qubits, have already helped confirm the speed limit for short-range interactions. Researchers hired by Google, for example, have been testing such systems.

The NIST group hopes to further refine its speed-limit calculations in the future. But there is a caveat: the calculations assume that long-range interactions between qubits decay at a particular rate with distance. If the interactions don’t decay at all, a qubit could in theory transfer information instantaneously to another qubit arbitrarily far away.
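
For concreteness (the functional form is our gloss, not the article’s), the assumed decay is typically a power law in the separation r,

\[ J(r) \propto \frac{1}{r^{\alpha}}, \qquad \alpha > 0, \]

and the pathological case α = 0, in which interaction strength doesn’t decay at all, is what would permit effectively instantaneous transfer.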

For his part, Foss-Feig doesn’t believe that any discussions of theoretical speed limits should dampen the enthusiasm for quantum computing. The current challenges of building practical quantum computers relate to other issues such as boosting the amount of time that qubits remain in their quantum states and reducing the number of errors.

“If we can someday make quantum computers as easily as silicon processors, the theoretical limits we’re exploring might inform what the ideal quantum computing architecture is,” Foss-Feig said. “But it doesn’t place any serious constraints on anything we’re doing with current quantum computing systems.”


Quantum Error Correction: Time to Make It Work

If technologists can’t perfect it, quantum computers will never be big

Chad Hagen

Dates chiseled into an ancient tombstone have more in common with the data in your phone or laptop than you may realize. They both involve conventional, classical information, carried by hardware that is relatively immune to errors. The situation inside a quantum computer is far different: The information itself has its own idiosyncratic properties, and compared with standard digital microelectronics, state-of-the-art quantum-computer hardware is more than a billion trillion times as likely to suffer a fault. This tremendous susceptibility to errors is the single biggest problem holding back quantum computing from realizing its great promise.

Fortunately, an approach known as quantum error correction (QEC) can remedy this problem, at least in principle. A mature body of theory built up over the past quarter century now provides a solid theoretical foundation, and experimentalists have demonstrated dozens of proof-of-principle examples of QEC. But these experiments still have not reached the level of quality and sophistication needed to reduce the overall error rate in a system.
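
To see why redundancy suppresses errors at all, here is a toy, purely classical sketch (ours, not the authors’) of the majority-vote logic behind the simplest quantum code, the three-qubit bit-flip code. A real QEC scheme must also handle phase errors and must detect faults without directly measuring the encoded state, neither of which this sketch attempts:

import random

def logical_error_rate(p, trials=100_000):
    """Estimate how often a 3-bit repetition code fails when each
    bit independently flips with probability p (toy classical model)."""
    failures = 0
    for _ in range(trials):
        # Encode logical 0 as (0, 0, 0), then apply independent bit flips.
        bits = [1 if random.random() < p else 0 for _ in range(3)]
        # Decode by majority vote: two or more flips cause a logical error.
        if sum(bits) >= 2:
            failures += 1
    return failures / trials

for p in (0.01, 0.05, 0.10):
    estimated = logical_error_rate(p)
    exact = 3 * p**2 - 2 * p**3  # probability of two or more flips
    print(f"p={p:.2f}  simulated={estimated:.4f}  exact={exact:.4f}")

When the physical error rate p is small, the encoded error rate 3p^2 - 2p^3 is far below p itself; the experimental challenge is to achieve that kind of suppression in real quantum hardware, where proof-of-principle demonstrations still fall short.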
