Quantum computers are currently error-ridden machines, which greatly limits their practical applications. In a study published today in Nature, researchers at Google and their colleagues reveal they have, for the first time, developed a quantum processor that can reliably fix errors faster than it generates them.
The qubits at the heart of quantum computers are very error-prone pieces of technology. Currently, quantum computers typically suffer roughly one error every thousand operations, far short of the one-in-10-billion error rates needed for the machines to run long enough for many practical applications, the new study notes.
Scientists hope to compensate for these high error rates by spreading quantum computations across many redundant qubits. These quantum error correction strategies would help quantum computers detect and correct mistakes, so that a cluster of “physical” qubits can collectively behave as one low-error “logical” qubit, serving as the foundation of a fault-tolerant quantum computer.
“Quantum error correction is the path towards large-scale quantum applications, including things like drug discovery, material design, improved optimization, and so on,” says Kevin Satzinger, a research scientist at Google Quantum AI. “Think better pharmaceuticals or better batteries.”
“When I first saw the error rate go down, and go down dramatically, that was the first time I thought to myself, ‘Damn, this is really going to work.’” —Michael Newman, Google
However, quantum error correction schemes are not foolproof. Each quantum error correction strategy is only useful if the hardware’s error rates are low enough to benefit from it. This error threshold depends on the specific strategy and the nature of the errors. Above that critical threshold, adding more qubits will only increase, not decrease, the number of errors.
“Quantum error correction has been around for about 30 years, and fundamental to it working is the idea that more error correction should lead to better error rates,” says Michael Newman, a research scientist at Google Quantum AI. “But this hasn’t been the case until now.”
One of the most popular quantum error correction schemes that scientists are exploring is called the surface code, in which qubits are arranged in a two-dimensional checkerboard pattern; units of information are encoded into sections of this lattice. It offers an error threshold of roughly 0.6 to 1 percent.
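The threshold behavior described above can be illustrated with the standard textbook scaling heuristic for the surface code, in which the logical error rate goes roughly as (p/p_th)^((d+1)/2) for code distance d. The constants and rates in this sketch are illustrative assumptions, not figures from the study:

```python
# Illustrative sketch of surface-code threshold behavior (toy model, not the
# study's data). Below threshold, a bigger code (more physical qubits)
# suppresses logical errors; above threshold, it amplifies them.

def logical_error_rate(p, p_th, d, a=0.1):
    """Heuristic logical error rate per cycle for a distance-d surface code."""
    return a * (p / p_th) ** ((d + 1) // 2)

p_th = 0.01  # ~1 percent threshold, consistent with the 0.6-1 percent range cited above

# Physical error rate below threshold: rates shrink as distance grows.
below = [logical_error_rate(0.005, p_th, d) for d in (3, 5, 7)]
# Physical error rate above threshold: rates grow as distance grows.
above = [logical_error_rate(0.02, p_th, d) for d in (3, 5, 7)]

print(below)  # strictly decreasing
print(above)  # strictly increasing
```

The sign of the effect flips at the threshold, which is why getting hardware error rates below that critical line matters so much.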

Now scientists at Google Quantum AI and their colleagues have developed a new quantum computer architecture called Willow that is capable of quantum error correction below the surface code’s error threshold.
In the new study, the researchers executed surface codes on two Willow quantum processors. One used 72 superconducting transmon qubits—a qubit type that is less sensitive to the electric charges that can make qubits lose their quantum properties and become useless—and the other used 105.
The scientists found their 105-qubit grid experienced an error rate of roughly 0.143 percent per cycle of error correction, an error rate about half that of the 72-qubit grid. In other words, for the first time, adding more physical qubits to a quantum processor actually reduced error rates, just as one would hope would happen with quantum error correction.
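The roughly halved error rate between the two grids corresponds to an error suppression factor of about 2 per step up in code size. The short sketch below turns the article's two reported numbers into that factor and extrapolates it to larger codes; the distance-5 figure and the extrapolation itself are illustrative assumptions, not results stated in the study:

```python
# Hedged sketch: if each increase in code distance (d -> d+2) divides the
# logical error rate by a factor Lambda, the reported rates imply Lambda ~ 2.
# The distance-5 value is assumed from "about half"; only the 0.143 percent
# figure for the 105-qubit grid is taken from the article.

eps_d5 = 0.00286  # assumed distance-5 rate (~twice the distance-7 figure)
eps_d7 = 0.00143  # ~0.143 percent per cycle, reported for the 105-qubit grid

lam = eps_d5 / eps_d7  # error suppression factor per distance step, ~2.0

def extrapolate(eps7, lam, d):
    """Project the per-cycle logical error rate at odd code distance d >= 7."""
    return eps7 / lam ** ((d - 7) / 2)

print(round(lam, 2))                 # → 2.0
print(extrapolate(eps_d7, lam, 11))  # two more halvings of the error rate
```

Each doubling of suppression per step is what makes scaling attractive: error rates fall exponentially while qubit counts grow only polynomially.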
“I’ve always been a believer in quantum error correction, but it’s different to believe something than to see it happen,” Newman says. “When I first saw the error rate go down, and go down dramatically, that was the first time I thought to myself, ‘Damn, this is really going to work.’”
In addition, the 105-qubit processor is the first qubit array to have a longer lifetime than its individual physical qubits, lasting more than twice as long as its best physical qubit. This helps show that quantum error correction is improving the system overall, the researchers say.
To achieve practical applications, quantum computers must correct errors in real time, before their computations finish. The scientists note their new processors are capable of essentially performing quantum error correction mid-computation, with error-correction cycle times of 1.1 microseconds. In addition, tests performed over the course of 15 hours suggest these processors could remain stable over the long timescales needed for large-scale fault-tolerant quantum algorithms.
“We have built a system that can scale in principle, but which we must now scale in practice.” —Kevin Satzinger, Google
To estimate Willow’s performance, the researchers used a benchmark called random circuit sampling. Although it has no known real-world applications, the researchers note this benchmark is the most difficult task for a classical computer that can also be done on a quantum computer today. They found the 105-qubit Willow processor could perform a benchmark computation in under five minutes that would take today’s fastest supercomputer 10 septillion (or 10²⁵) years, a time that vastly exceeds the age of the universe. (This benchmark is the same one Google used in 2019 to claim that one of its quantum computers was the first to show quantum supremacy.)
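A quick back-of-envelope calculation shows the scale of that comparison. The figures below are the article's round numbers (10²⁵ years versus five minutes), not precise benchmark data:

```python
# Back-of-envelope check on the quoted runtime comparison: ~10^25 years on a
# classical supercomputer vs. under five minutes on the 105-qubit Willow chip.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 seconds

classical_seconds = 1e25 * SECONDS_PER_YEAR  # ~3.2e32 seconds
quantum_seconds = 5 * 60                     # five minutes

speedup = classical_seconds / quantum_seconds
print(f"{speedup:.1e}")  # roughly a 1e30-fold speedup
```

For context, the universe is about 1.4 × 10¹⁰ years old, so the quoted classical runtime exceeds its age by roughly fifteen orders of magnitude.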
The research team attributes these advances to a number of upgrades, such as better fabrication techniques and the use of neural networks to account for device noise. These in turn led to a host of improvements, including the qubits’ ability to stay in superposition for nearly 100 microseconds, roughly five times longer than Google’s previous generation of quantum processors, Sycamore.
In the future, the researchers would like to show their quantum-error-corrected systems actually performing a quantum computation, Satzinger says. However, he cautions that their research is still a long way from large-scale quantum applications—they still need to scale their system up to many thousands of qubits, and to further improve the error rates of their hardware. “We have built a system that can scale in principle, but which we must now scale in practice,” Satzinger says.
Charles Q. Choi is a science reporter who contributes regularly to IEEE Spectrum. He has written for Scientific American, The New York Times, Wired, and Science, among others.



