Machine Learning Will Tackle Quantum Problems, Too

ML algorithms take on quantum-computer workloads till the qubits come to town


Quantum computers may prove far more powerful than any conventional supercomputer when it comes to performing the kinds of complex physics and chemistry simulations that could lead to next-generation batteries or new drugs. However, it may take many years before practical and widespread quantum computing becomes reality.

Now a new study finds that machine learning, which powers computer vision, speech recognition, and more, can also significantly outperform conventional algorithms on classical computers at the kinds of tasks at which quantum computers excel. These findings suggest that machine learning may help tackle key quantum problems in the era before quantum computers finally arrive.

Quantum computers can theoretically achieve a “quantum advantage,” finding answers to problems no classical computer could ever solve. A quantum computer’s computational power can grow exponentially with the number of its components known as qubits.

“If quantum computers were mature right now, it would definitely be better to use quantum computers.”
—Robert Hsin-Yuan Huang, Caltech

A major application for quantum computers may be modeling complex molecules and other systems where strange quantum effects play key roles. These odd phenomena include superposition, where an object may exist in two or more places or states at the same time, and entanglement, wherein multiple bodies can influence each other instantaneously regardless of how far apart they are.

Classical computers often struggle to model quantum systems, especially ones involving many bodies. In contrast, quantum computers are themselves quantum systems, and so can theoretically solve these kinds of quantum many-body problems far more quickly.
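The difficulty is easy to quantify in one respect: the full state of an n-qubit system is described by 2**n complex amplitudes, so a classical simulation that stores the whole state vector doubles its memory footprint with every added qubit. A rough sketch (the 16-byte figure assumes double-precision complex numbers):

```python
# Memory needed to store the full state vector of an n-qubit system,
# assuming one 16-byte double-precision complex amplitude per basis state.
for n in (10, 30, 50):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n} qubits: {amplitudes} amplitudes, ~{gigabytes:.3g} GB")
```

Ten qubits fit in a few kilobytes, 30 qubits already need on the order of 17 GB, and 50 qubits would require millions of gigabytes, which is why exact classical simulation breaks down well before the qubit counts quantum hardware is aiming for.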

However, quantum computers are currently noisy intermediate-scale quantum (NISQ) platforms, meaning their qubits number up to a few hundred at most. To prove useful for practical applications, future quantum computers will likely need thousands of qubits to help compensate for errors, a goal that may take many years.

In the new study, researchers investigated machine-learning algorithms, ones that improve automatically through experience, running on classical computers. They found these classical machine-learning algorithms may solve challenging quantum problems better than any other algorithm on classical computers. They detailed their findings online 22 September in the journal Science.

One set of applications the scientists analyzed involved finding the ground state of a molecule, the one in which it has the least amount of energy. Superposition and entanglement can make predicting a molecule’s ground state very difficult, especially when it possesses many atoms, says study lead author Robert Hsin-Yuan Huang, a quantum-information theorist at the California Institute of Technology, in Pasadena, Calif.

The researchers investigated what happened when classical machine-learning algorithms were given data on the ground states of molecules—for example, information supplied by experiments that collected quantum data from molecules. They found that such classical machine-learning algorithms could efficiently and accurately go on to predict the ground states of other molecules significantly better than other kinds of classical algorithms.
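As a rough illustration of this kind of workflow (the features, targets, and model below are entirely synthetic stand-ins, not the paper's actual method), one can picture each molecule summarized as a vector of measurement-derived features and fit a simple regression that predicts the ground-state energy of molecules it has never seen:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each molecule is summarized by a feature vector of
# expectation values gathered from quantum measurements (synthetic here),
# and the prediction target is its ground-state energy.
n_train, n_features = 80, 10
X_train = rng.normal(size=(n_train, n_features))
true_w = rng.normal(size=n_features)           # hidden "physics" of the toy data
y_train = X_train @ true_w + 0.01 * rng.normal(size=n_train)  # noisy energies

# Ridge regression: w = (X^T X + lam I)^{-1} X^T y
lam = 1e-3
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_features),
                    X_train.T @ y_train)

# Predict the "ground-state energy" of an unseen molecule's feature vector
X_new = rng.normal(size=(1, n_features))
prediction = float(X_new @ w)
```

The point of the toy model is only the shape of the pipeline: learn from data gathered on some quantum systems, then generalize to new ones, rather than simulating each new system from scratch.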

This advantage comes from how “nature operates quantum mechanically,” so data gathered from quantum experiments “contains fragments of the quantum computational power in nature,” Huang says. This means classical machine-learning algorithms that learn from this data “can predict more accurately and more efficiently than any non-machine-learning algorithm,” he adds.

All in all, when it comes to predicting ground states, a classical machine-learning algorithm “can predict more accurately than classical non-machine-learning algorithms with the same amount of computational time,” Huang says. “If we instead aim at achieving the same prediction accuracy, then classical machine learning can run super-polynomially faster than classical non-machine-learning algorithms.”

Another set of applications the researchers explored was classifying a wide range of quantum phases of matter. Familiar phases of matter include the many crystal structures that ice may adopt, whereas more exotic quantum phases of matter include the kinds seen in topological insulators, where electricity or light can flow without scattering or losses.

The scientists found that when classical machine-learning algorithms were trained on classical data on quantum phases, they could efficiently learn how to accurately classify quantum phases they did not encounter during training.
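A toy sketch of that task (again a synthetic stand-in, not the study's classifier): treat labeled measurement snapshots from two phases as training data, and label unseen snapshots by which phase they most resemble.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each sample is a vector of local measurement outcomes
# from a quantum state; states belong to one of two phases, modeled here as
# clusters with different means (purely illustrative).
n_per_phase, dim = 50, 8
phase_a = rng.normal(loc=-1.0, size=(n_per_phase, dim))
phase_b = rng.normal(loc=+1.0, size=(n_per_phase, dim))

# "Train" a nearest-centroid classifier on the labeled snapshots
centroid_a = phase_a.mean(axis=0)
centroid_b = phase_b.mean(axis=0)

def classify(x):
    """Label an unseen snapshot by its closer phase centroid."""
    da = np.linalg.norm(x - centroid_a)
    db = np.linalg.norm(x - centroid_b)
    return "A" if da < db else "B"

# Classify snapshots never seen during training
test_a = rng.normal(loc=-1.0, size=dim)
test_b = rng.normal(loc=+1.0, size=dim)
```

Real quantum phases are far subtler than two well-separated clusters, but the structure is the same: generalize from labeled examples to states outside the training set.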

“It is exciting to have formal proof that classical machine-learning algorithms trained with data from physical experiments could outperform any classical non-machine-learning algorithms in an important problem in quantum physics,” Huang says. “It really shows the power of classical machine learning in addressing challenging problems in physics, chemistry, and material sciences.”

Future research can explore other important quantum problems on which classical machine learning could do well, Huang says. Further work can also explore how to optimize the way classical machine-learning algorithms solve quantum problems, in terms of how much training data and computational time they require, he notes.

Ultimately, one day quantum computers will outperform even classical machine learning when it comes to simulating chemistry and physics experiments. “If quantum computers were mature right now, it would definitely be better to use quantum computers,” Huang says.

Still, until quantum computers arrive, “classical machine-learning models trained on experimental data can solve practical problems in chemistry and materials science that would be too hard to solve using classical processing alone,” Huang says.


Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises


You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize that they were cutting any corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed. Time-to-market pressures almost guarantee that their software will contain more bugs than it otherwise would.
