**First in a 2-part series on quantum computing.**

**Computers today** are fast approaching some fundamental limitations. Perhaps their biggest problem is that they exploit the classical physics that governs the hurly-burly rush of countless billions of electrons through nearly as many transistors. And the chips at the heart of today's computers are running out of room for classical physics to work.

To make those chips' transistors switch faster, we've primarily relied on making the devices smaller. But when they begin to approach 10 nanometers or so--and it is the goal of the semiconductor industry to get there in the next decade--very odd things will happen. Formerly well-behaved electrons will start revealing their quantum nature--darting across the transistor on the dictates of probability, regardless of whether the device is switched on or off. When transistors reach those infinitesimal dimensions and electrons start showing their true colors, computer makers will have two choices: try to fend off the quantum weirdness with radically new types of semiconductors and transistors, or embrace the weirdness.

We say: surrender to the weirdness. Working with the quantum nature of things instead of against it will open up vast new frontiers for computing. And achievements during the past couple of years at university and government laboratories around the world have made it clear that a large-scale, practical quantum computer could be built, probably in the next 25 to 30 years. These achievements have demonstrated that the semiconductor manufacturing technologies underpinning modern computing, which were developed over nearly half a century, need not be abandoned. On the contrary, they will be instrumental in making quantum computers a practical reality.

These machines will take computing where it's never been before. Most notably, there are classes of problems for which a conventional computer can do little more than try out all the possible solutions one at a time until it stumbles on the answer. Say you have a phone number and want to look up the name it's paired with in a phone book that has 1 million entries. There's not much you can do but go page by page looking for the match. On average, your classical computer must examine half a million entries before finding a match. Sure, at gigahertz microprocessor speeds even that won't take long, but there are plenty of much larger needle-in-a-haystack problems scientists face all the time, some of which would take your laptop 100 years to complete.

If you had a computer based on the principles of quantum mechanics, however, you could, in effect, examine all the entries in the telephone book practically at the same time. Such a quantum computer would need just 1000 steps--one five-hundredth of what a classical computer needs--to find the right name in the million-entry phone book. The theoretical ability of quantum computers to perform parallel processes seemed like an odd parlor trick when they were dreamed up in the 1980s, first by Richard Feynman and more concretely by David Deutsch. But in 1994 something happened that put quantum computing squarely in the crosshairs of governments, armed forces, and everyone else with digital secrets to keep.
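The scaling behind those numbers can be checked with simple arithmetic: a classical linear search over N entries examines about N/2 of them on average, while Grover's quantum search algorithm needs on the order of the square root of N queries. The sketch below is illustrative arithmetic only, not a quantum simulation; the `(pi/4) * sqrt(N)` iteration count is the standard estimate for Grover's algorithm, of the same order as the roughly 1000 steps cited above.

```python
import math

def classical_avg_steps(n: int) -> float:
    # Linear scan: on average, half the entries are examined
    # before the single matching one turns up.
    return n / 2

def grover_steps(n: int) -> int:
    # Grover's quantum search needs on the order of sqrt(N) queries;
    # the usual estimate is about (pi/4) * sqrt(N) iterations.
    return round((math.pi / 4) * math.sqrt(n))

n = 1_000_000
print(classical_avg_steps(n))  # 500000.0
print(grover_steps(n))         # 785 -- same order as the ~1000 steps cited
```

The quantum speedup here is quadratic, not exponential: useful for a million-entry phone book, dramatic for the far larger search spaces mentioned above.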

Peter Shor, a theoretical mathematician then at Bell Labs, discovered an algorithm for a quantum computer that could far more efficiently determine the prime factors of a large integer. Factoring is one of those problems that tie conventional computers in knots. Computers are so bad at it, in fact, that most encryption systems today rely on the products of enormous prime numbers, figuring it would take a computer decades to factor such a number. Shor's algorithm changed all that, and the idea that so much information could become so vulnerable has sparked a worldwide race to build a machine powerful enough to crack codes.
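To see why factoring ties classical computers in knots, consider the most naive approach, trial division: the work grows with the square root of the number, which is exponential in the number of digits. The sketch below is a minimal illustration of that cost (real cryptosystems use numbers hundreds of digits long, far beyond even the best classical factoring algorithms, which are much cleverer than this).

```python
def trial_division(n: int) -> list[int]:
    # Naive classical factoring: try each divisor up to sqrt(n).
    # The loop count grows exponentially in the digit-length of n,
    # which is why products of two large primes resist attack.
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(15))                  # [3, 5] -- instant
print(trial_division(1000003 * 1000033))   # [1000003, 1000033] -- ~a million trials
```

Multiplying the two primes back together is trivial; recovering them from the product is the hard direction, and that asymmetry is exactly what Shor's quantum algorithm undermines.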

The first step in building a quantum computer is to find something to act as a quantum bit, or qubit, something whose quantum state can be read and manipulated. The trouble is that a quantum state is an exceedingly delicate thing, mainly because it can be changed by the most evanescent interactions--a fluctuation in a magnetic field, a wayward photon, and so on. Just a year after Shor's breakthrough, two physicists then at Austria's University of Innsbruck, Juan Ignacio Cirac and Peter Zoller, theorized that a string of ions held fast in a vacuum by an electromagnetic field and cooled to within a few thousandths of a degree above absolute zero could act as stable qubits and form the basis of a quantum computer. Scientists at the U.S. National Institute of Standards and Technology (NIST), the nation's timekeeper, already had plenty of experience trapping and cooling ions from their work with atomic clocks, and they wasted no time in putting the scheme into practice. That same year, David Wineland of NIST and one of us (Monroe) used a trapped beryllium ion as a qubit to perform logic operations that are key to running a quantum computer.
