
Quantum Computers Exponentially Faster at Untangling Insights

Classical computers cannot overcome the “quantum advantage” in simulating chemistry and physics experiments

[Image: Two sets of cat-image iterations, grayscale and pixellated on the left versus sharp and colorful on the right, illustrating classical versus quantum learning. Credit: Google Quantum AI]

Using Google’s Sycamore quantum processor, a new study shows that quantum computers need exponentially fewer experiments than classical machines to reveal insights about viruses, black holes, and more.

Quantum computers can theoretically achieve a quantum advantage where they can find the answers to problems no classical computer could solve even if given thousands of years or more. This advantage can grow exponentially when a quantum computer links together a greater number of qubits—the quantum-mechanically entangled bits that such a computer uses.

One quantum-computing application that has drawn plenty of attention is code breaking. However, when Nobel laureate Richard Feynman first proposed the idea of quantum computers, he envisioned them modeling quantum systems such as molecules—for instance, undertaking chemistry and physics simulations that might yield insights into next-generation batteries or new drugs.

“My primary aspiration is to build a quantum artificial superintelligence,” says study coauthor Robert Hsin-Yuan Huang, a theoretical quantum physicist and theoretical computer scientist at Caltech, in Pasadena, Calif. “We are very far from being able to achieve that goal. But I always feel that there are many questions we can explore now to bring us closer to that dream. Understanding how quantum technology could improve our ability to learn from the physical world is a very important first step towards this ambitious goal.”

In the new study, researchers focused on how both classical and quantum computers might analyze data collected about quantum systems during experiments. In both conventional and quantum-enhanced experiments, sensors may collect multiple readings of a quantum system. However, conventional experiments can analyze such readings only one at a time, whereas a quantum-enhanced experiment can entangle these multiple readings and analyze them all at once.

In experiments employing up to 40 qubits and 1,300 quantum gates in the 54-qubit Sycamore processor, the researchers found that quantum machines can learn from exponentially fewer experiments than those required in conventional experiments.
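The scale of that gap can be sketched with a toy calculation. In the study’s learning tasks, the number of runs a conventional experiment needs grows exponentially with the number of qubits, while a quantum-enhanced experiment needs only polynomially many. The exact scaling functions below are illustrative assumptions for the sketch, not figures taken from the paper:

```python
# Toy comparison of experiment counts for learning about an n-qubit system,
# assuming ~2^n runs for a conventional strategy and ~n runs for a
# quantum-enhanced strategy that entangles multiple readings.
# These scalings are illustrative, not the paper's exact bounds.

def conventional_experiments(n: int) -> int:
    # Conventional strategy: runs grow exponentially with qubit count.
    return 2 ** n

def quantum_enhanced_experiments(n: int) -> int:
    # Quantum-enhanced strategy: runs grow roughly linearly.
    return n

n = 40  # qubit count used in the Sycamore experiments
ratio = conventional_experiments(n) // quantum_enhanced_experiments(n)
print(f"n={n}: conventional ~{conventional_experiments(n):.2e} runs, "
      f"quantum-enhanced ~{quantum_enhanced_experiments(n)} runs "
      f"(advantage ~{ratio:.1e}x)")
```

Even at 40 qubits, the hypothetical conventional strategy would need on the order of a trillion runs for every few dozen quantum-enhanced ones, which is why the advantage is visible on today’s hardware.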

“These results provide the first rigorous foundation showing that emerging quantum technology can significantly improve how humans can learn about nature in physics, chemistry, material science, and biology,” Huang says.

The researchers focused on three different tasks—predicting the properties of a quantum system after scanning its properties; predicting the properties of a key component of a quantum system after analyzing its behavior; and modeling the behavior of a quantum system. Their findings suggest that no conventional experiments with classical computers can overcome the quantum advantage seen with quantum computers on such tasks.

“This gives me hope that quantum computers will allow us to see and learn about parts of our universe that would otherwise be invisible,” says study coauthor Jarrod McClean, a theoretical quantum physicist and theoretical computer scientist at Google Quantum AI in Venice, Calif.

Currently quantum computers are noisy intermediate-scale quantum (NISQ) platforms, meaning they are error-ridden and at most possess only a few dozen to a few hundred qubits. However, the researchers note their results suggest that even today’s NISQ processors can display a substantial quantum advantage when it comes to learning from experiments.

“The physical experiments on Google’s Sycamore processor show that a huge quantum advantage can already be seen on noisy quantum machines,” Huang says. “This shows that we may be able to see how quantum technology can transform science sooner than we originally thought.”

The scientists detailed their findings on 9 June in the journal Science.


The First Million-Transistor Chip: the Engineers’ Story

Intel’s i860 RISC chip was a graphics powerhouse

[Image: Intel's million-transistor chip development team, twenty people crowded into a cubicle, the seated man in the center holding a silicon wafer full of chips.]

In San Francisco on Feb. 27, 1989, Intel Corp., Santa Clara, Calif., startled the world of high technology by presenting the first ever 1-million-transistor microprocessor, which was also the company’s first such chip to use a reduced instruction set.

The number of transistors alone marks a huge leap upward: Intel’s previous microprocessor, the 80386, has only 275,000 of them. But this long-deferred move into the booming market in reduced-instruction-set computing (RISC) was more of a shock, in part because it broke with Intel’s tradition of compatibility with earlier processors—and not least because after three well-guarded years in development the chip came as a complete surprise. Now designated the i860, it entered development in 1986 about the same time as the 80486, the yet-to-be-introduced successor to Intel’s highly regarded 80286 and 80386. The two chips have about the same area and use the same 1-micrometer CMOS technology then under development at the company’s systems production and manufacturing plant in Hillsboro, Ore. But with the i860, then code-named the N10, the company planned a revolution.
