Quantum computers must overcome the challenge of detecting and correcting quantum errors before they can fulfill their promise of sifting through millions of possible solutions much faster than classical computers.
“With our recent four-qubit network, we built a system that allows us to detect both types of quantum errors,” says Jerry Chow, manager of experimental quantum computing at IBM’s Thomas J. Watson Research Center, in Yorktown Heights, N.Y. Chow and his IBM colleagues detailed their experiments in the 29 April issue of the journal Nature Communications. “This is the first demonstration of a system that has the ability to detect both bit-flip errors and phase errors” in quantum computing systems, he says.
The IBM system consists of four quantum bits, or qubits, arranged in a 2-by-2 configuration on a chip measuring about 1.6 square centimeters (0.25 square inch). Each qubit is roughly the equivalent of a classical bit of information, which can represent a value of either 1 or 0. Like classical bits, qubits can suffer “bit-flip errors” that swap those values. But qubits also exploit the quantum physics phenomenon called superposition, which allows them to temporarily exist as both 1 and 0 simultaneously, and that makes them vulnerable to a second type of error: a “phase error,” which flips the sign of the phase relationship between the 0 and 1 values in the superposition.
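The distinction can be sketched in a few lines of linear algebra. This is a toy illustration, not IBM’s implementation: a qubit state a|0⟩ + b|1⟩ is written as the amplitude vector [a, b], a bit flip swaps the two amplitudes, and a phase flip negates the sign of the |1⟩ amplitude.

```python
import numpy as np

# Toy model of one qubit: the state a|0> + b|1> as the amplitude vector [a, b].
plus = np.array([1.0, 1.0]) / np.sqrt(2)  # equal superposition (|0> + |1>)/sqrt(2)

X = np.array([[0.0, 1.0], [1.0, 0.0]])   # bit flip: swaps the |0> and |1> amplitudes
Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # phase flip: negates the |1> amplitude

bit_flipped = X @ plus     # still (|0> + |1>)/sqrt(2)
phase_flipped = Z @ plus   # (|0> - |1>)/sqrt(2): the "sign" has changed

# Measurement probabilities |amplitude|^2 are identical for both states,
# so a phase error is invisible to a plain value readout.
assert np.allclose(bit_flipped**2, phase_flipped**2)
```

Because the phase flip leaves every measurement probability unchanged, it has no classical counterpart, which is why detecting it requires a separate check from the bit-flip test.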
Detecting quantum errors is anything but straightforward. Classical computers can detect and correct their bit-flip errors by simply copying the same bit many times and taking the correct value from the majority of error-free bits. By comparison, the fragility of quantum states in qubits means that trying to directly copy them can have the counterproductive effect of changing the quantum state.
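The classical majority-vote scheme described above can be sketched directly (an assumed minimal example, not drawn from any specific system):

```python
from collections import Counter

def correct(copies):
    """Classical repetition code: return the majority value among noisy copies of one bit."""
    return Counter(copies).most_common(1)[0][0]

sent = 1
received = [1, 0, 1]          # one of the three copies flipped in transit
recovered = correct(received) # majority vote restores the original bit
assert recovered == sent
```

The quantum no-cloning theorem rules out this strategy for qubits: there is no operation that copies an unknown quantum state, so the redundancy has to come from entanglement instead.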
Researchers get around that problem by relying on entanglement, the quantum physics phenomenon that allows a qubit to share its quantum state with many other qubits through a quantum connection. In this case, IBM built its four-qubit grid architecture to exploit the information shared between entangled neighboring qubits. Two of the qubits are the main “data” qubits; the other two exist as “measurement” qubits. One of the measurement qubits can detect bit-flip errors in either neighboring data qubit. The other measurement qubit detects phase errors in the data qubits.
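The role of a measurement qubit can be illustrated with a classical analogue (a toy sketch under simplifying assumptions, not IBM’s circuit): it records only the parity of its two neighboring data qubits, so a changed parity flags that one of them flipped without revealing the data values themselves.

```python
def syndrome(d0, d1):
    """Parity check over two data bits, playing the role of a measurement qubit."""
    return d0 ^ d1

data = [1, 1]
reference = syndrome(*data)        # parity recorded while the code is error-free

data[0] ^= 1                       # a bit-flip error strikes one data qubit
assert syndrome(*data) != reference  # the parity change flags the error
```

In the real device the analogous checks are joint quantum measurements; the one for phase errors works the same way but compares the qubits in the superposition basis, which is why it takes a second, dedicated measurement qubit.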
Though IBM’s four-qubit system has successfully detected both types of quantum errors, it cannot yet correct its mistakes. A Google team featuring researchers from the University of California, Santa Barbara, previously demonstrated the first error correction for quantum computing in the 4 March 2015 issue of the journal Nature. Google built an architecture consisting of a linear array of nine qubits that also used both data and measurement qubits to detect errors. That architecture supports “surface code” error correction, which uses classical computing to help correct the quantum errors.
Still, Google’s arrangement of qubits in a single line had its own drawback: The system could not detect both bit-flip and phase errors at the same time. IBM’s 2-D grid arrangement of four qubits has shown how detection of both quantum error types could be accomplished. The Google researchers have expressed similar ambitions of building a 2-D array of many qubits arranged in a checkerboard pattern, so that it could handle both types of quantum errors.
Both IBM and Google have built their quantum computing architectures based on superconducting quantum circuits. These architectures build their qubits from Josephson junctions—two layers of superconductor separated by a thin insulating layer. Superconducting qubits may have an edge over rival quantum computing architectures because they can be manufactured using many of the existing tools used in building classical computers. That could make it easier to scale up to large quantum computing systems with many qubits.
Chow and his IBM colleagues have already begun experimenting with an eight-qubit array. They eventually hope to build arrays with 13 to 17 qubits on a side, which would be large enough to encode a single logical qubit that would be protected against errors.
For now, both IBM and Google seem confident that the superconducting qubit architecture can scale up well into the future. Their error detection and error correction efforts are paving the way for larger quantum computing systems that could surpass classical computers in solving certain problems and do so reliably.
Such advances seem timely as we celebrate the 50th anniversary of Moore’s Law, which predicted the annual doubling of circuit components that can be packed into the integrated circuits of classical computers. The anniversary has provided opportunity for plenty of renewed speculation about when Moore’s Law might end. (See the IEEE Spectrum special report on the 50th anniversary of Moore’s Law.)
“A lot of interest in quantum computing comes from seeing the end of the tunnel with Moore’s Law,” says Chow. “Now we’re seeing what potential lies beyond.”
Jeremy Hsu has been working as a science and technology journalist in New York City since 2008. He has written on subjects as diverse as supercomputing and wearable electronics for IEEE Spectrum. When he’s not trying to wrap his head around the latest quantum computing news for Spectrum, he also contributes to a variety of publications such as Scientific American, Discover, Popular Science, and others. He is a graduate of New York University’s Science, Health & Environmental Reporting Program.