Computing

Waiting for Quantum Computing? Try Probabilistic Computing

An engineer channels Galileo to describe a new approach to computing: p-bits

Illustration: Serge Bloch

Computer scientists and engineers have started down a road that could one day lead to a momentous transition: from deterministic computing systems, based on classical physics, to quantum computing systems, which exploit the weird and wacky probabilistic rules of quantum physics. Many commentators have pointed out that if engineers are able to fashion practical quantum computers, there will be a tectonic shift in the sort of computations that become possible.

But that's a big if.

Quantum computers hold great theoretical promise, sure, but the hurdles that need to be overcome to build practical machines are enormous. Some skeptics have argued that the technical challenges are so immense that it's very unlikely that general-purpose quantum computers will become available anytime in the foreseeable future. Others, including the engineers now working very hard to build these machines at Google, IBM, Intel, and elsewhere, are more sanguine, anticipating that 5 or 10 more years of work may be enough to bring the first practical general-purpose quantum computers on line. Only time will tell.

But even if the whole quantum-computing enterprise were to develop far more slowly than its proponents anticipate, one thing seems certain. Quantum computing has already inspired a deeper understanding of the role of probability in computing systems—just as the late physicist Richard Feynman had hoped it would when he brought the idea to people's attention back in the early 1980s.

It was just this understanding that we were seeking in 2012 when our group began research on probabilistic bits, or “p-bits," a play on the word used to describe the fundamental unit of information in a quantum computer, the qubit. Feynman had considered such a probabilistic computer as a counterpoint to the quantum computer he was envisioning. So, we asked ourselves: How could we build one?

One way to store a bit is to use a magnet with two possible directions of magnetization. Early computers used that approach for what was known as magnetic-core memory. It's hard to miniaturize magnetic memory, though, because magnets become unstable as they are made smaller. We have turned this seeming bug into a feature, using tiny unstable magnets to implement p-bits. In 2019, with the help of collaborators at Tohoku University, in Japan, we built a probabilistic computer with eight such p-bits.

We didn't really need the new magnet-based p-bit to build a probabilistic computer, though. Indeed, earlier we had built probabilistic computers that implement p-bits using elaborate electronic circuits to generate pseudorandom sequences out of deterministic bits. Companies like Fujitsu are already marketing similar probabilistic computers. But using unstable magnets as the fundamental building block allows a p-bit to be implemented with a few transistors instead of a few thousand, making it possible to build much larger probabilistic computers.

In such a computer, a system of p-bits evolves from an initial state to a final state, transiting through one of many possible intermediate states. Which path the computer takes depends on pure chance, with each path having a certain probability. Add together the probabilities of all the possible paths to a given final state, and you get the overall probability of reaching that state.

A quantum computer does something similar, but it uses qubits instead of p-bits. This means that each path now has what physicists call a probability amplitude, which can be negative. More precisely, it is a complex number, meaning that it has both a real part and an imaginary part.

To determine the overall probability of going from an initial state to some final state in a quantum computer, you first have to add the amplitudes for all the possible paths between the two to get the probability amplitude of the final state. That final amplitude is also a complex number, whose magnitude you then square to get the actual probability, a number that ranges between 0 (never happens) and 1 (always happens).

That, in a nutshell, is the essential difference between a probabilistic computer and a quantum computer. The former adds probabilities; the latter adds complex probability amplitudes.

This difference is more important than it might seem. Probabilities are nonnegative numbers, so adding an extra path can only increase the final probability. But probability amplitudes are complex numbers. That means that adding an additional path can cancel out an existing path. It is as if a path can have a negative probability.
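A few lines of Python make the contrast concrete. (The numbers here are toy values of our own choosing, not drawn from any particular algorithm.)

```python
# Probabilistic computer: each path to a final state carries a probability,
# and probabilities only add, so every extra path raises the total.
p_paths = [0.25, 0.25]
print(sum(p_paths))                 # 0.5

# Quantum computer: each path carries a complex probability amplitude,
# and the final probability is |sum of amplitudes| squared, so a second
# path can erase the first.
amplitudes = [0.5 + 0j, -0.5 + 0j]  # equal magnitude, opposite sign
print(abs(sum(amplitudes)) ** 2)    # 0.0
```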

The power of quantum computing comes directly from this ability to negate probabilities. Celebrated quantum algorithms—like the one Peter Shor developed for integer factorization or the one Lov Grover devised for searching through data—carefully orchestrate the intermediate paths available so that those leading to the wrong outputs cancel, while those leading to the right answers add constructively.

But this power comes at a price. The qubits that carry these complex amplitudes have to be carefully protected from the environment, often requiring the electronics to be maintained at cryogenic temperatures. A probabilistic computer, by contrast, can be built with simpler technology operating at room temperature. But such a computer lacks the magic of negative probabilities, making it effective only for algorithms that do not require path cancellation.

In truth, it's theoretically possible to emulate a quantum computer using probabilistic bits, but it may not be a practical strategy. Still, there are important problems for which probabilistic computers could provide significant speedup over deterministic ones, which is why we've been so interested in building this new type of computing machine.

How would such a probabilistic computer work? The principles are very different from those underlying the digital systems we use every day, making them foreign even to most students of computer engineering. So we wanted to offer a gentle introduction to the topic, which we present here in the form of a dialogue.

The dramatis personae are named in homage to Galileo Galilei, who invented characters with these names in his Dialogue Concerning the Two Chief World Systems, the famous 1632 text in which he propounded the then-provocative idea that Earth revolves around the sun. They are:

Salviati, who like Galileo's Salviati provides the authors' own knowledge and perspective;

Sagredo, who like Galileo's Sagredo takes the role of the intelligent layman, which, dear reader, is meant to represent you;

Simplicio, who unlike Galileo's Simplicio does not stand in for someone wedded to the viewpoint of the Catholic Church that the universe revolves around the Earth. We give him a cameo role here for the sake of parity and to provide a little comic relief.

Galileo cast his dialogue as a series of discussions held over four days at Sagredo's house in Venice. We bring this more up to date by setting the action on a plane flight of 40 minutes, enough time for some strangers to engage in a serious technical discussion.

///////

Sagredo: I saw you were reading an IEEE journal—are you an electrical engineer?

Salviati: Indeed I am. I do research in computing.

Sagredo: Interesting. I'm in business, but I love reading about the latest engineering advances. Have you been working on anything interesting lately?

Salviati: Gosh yes. My colleagues and I have been studying a new approach to computing that we are very excited about.

Sagredo: Really? What is it?

Salviati: I'd be happy to describe it, but it's kind of hard to explain.

Sagredo: I'm not going anywhere until we land. And I love this kind of stuff. Please tell me more.

Salviati: Okay. As you no doubt know, all our electronic devices like our smartphones are based on circuits that give a well-defined output for every input: Give it 5 and 6; then it can multiply those numbers and give you 30. Well, we have built a circuit that can work in reverse: Give it 30, and it will give you all possible inputs, namely 5 and 6, 15 and 2, 10 and 3, and 30 and 1.

Sagredo: Sounds cute, but what use is that?

Salviati: It could have lots of uses, because many problems are much more difficult in reverse. For example, multiplication is much easier than factorization. After all, many kids can multiply 771 and 85 to get 65,535. But how many can take 65,535 and give you the two factors 771 and 85? How many could go further and give you all the other combinations, like 257 and 255, for example?
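(An aside for the curious reader: here is a short Python sketch of ours showing what the reverse problem looks like when a deterministic computer simply brute-forces it at this toy scale.)

```python
# Multiplying 771 by 85 is easy; reversing it means finding every factor
# pair of 65,535. Brute force works fine at this size but becomes hopeless
# for the hundreds-of-digit numbers used in cryptography.
n = 65535
pairs = [(a, n // a) for a in range(1, int(n ** 0.5) + 1) if n % a == 0]
print(pairs)
# [(1, 65535), (3, 21845), (5, 13107), (15, 4369), (17, 3855),
#  (51, 1285), (85, 771), (255, 257)]
```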

Sagredo: I see. But I have heard that modern digital computers can even beat grandmasters at complex games like chess. Surely they can do inverse problems, too?

Salviati: It's true that digital computers can beat grandmasters at chess or even the game of Go. What's not as well known is that they use up 10 megawatts while doing so, while the brain of the grandmaster is running on a mere 10 to 20 watts. There's a lot of interest in making complex computing more energy efficient and sustainable. We think the compute-in-reverse circuits we've been working on will do just that.

Sagredo: I suppose you can't really explain your design to a novice like me?

Salviati: It would probably take more time than we have for you to understand a real application, like optimization. But you could get the idea if I just take a simple operation and explain how digital circuits do it, and how we would do it instead.

Sagredo: Great. Let's go! I have trouble sleeping on planes anyway.

Salviati: I won't be offended if you fall asleep. (My lectures end up doing that to some of my students.) But I need to draw a couple of pictures. [Salviati sees that the person on his other side has an unused cocktail napkin.] Excuse me, can I use your napkin?

Simplicio: Be my guest. But can I have your copy of Sky Magazine in return?

Salviati: It's a deal. [Salviati lowers his tray table and starts drawing on the napkin.] You see, in a digital computer everything is represented as bits: zeros and ones, which can be represented by physical entities with two states, just like a magnet, say.

Engineers build elaborate circuits that carry out specific operations. For example, we can build a circuit that does a one-bit binary multiplication: The output bit, which I'll call C, is 0 or 1, depending on the product of the input bits, A and B.

Sagredo: So what do your reverse circuits do that is different?

Salviati: We build our circuits with p-bits, which are neither 0 nor 1. Instead, they keep fluctuating rapidly between those two values: 50 percent of the time the p-bit is 0, and the other 50 percent it is 1.

Sagredo: What use is that? Such bits carry no information at all.

Salviati: Right—not until we can make them talk to one another. You see, if they don't talk, they all fluctuate between 0 and 1 independently. We could plot a histogram showing the probability of every combination of A, B, and C.

Each one of the eight possibilities is equally likely.

Sagredo: Which is completely useless, as I said.

Salviati: Yes. But now suppose A, B, and C can communicate and that they like listening to one another and doing the same thing. If A becomes 1, B follows, and so does C. And if A becomes 0, so do B and C. Now, if we plot a histogram, we have only two peaks.

Our little p-bit magnets (or whatever we're using) still keep fluctuating, but they fluctuate in unison.

Sagredo: It's like you had just one big magnet switching between 0 and 1, which still doesn't seem very useful.

Salviati: That is true. If we have just a very strong positive communication, all we get is one big magnet. To make this useful, we have to cleverly engineer the communication so that a desired set of peaks shows up.

For example, if we want to implement the one-bit multiplier we would only want four out of the eight peaks to show up: For {AB→C}, we'd want to see {00→0}, {01→0}, {10→0}, {11→1}.

If we could engineer the communication between p-bits to achieve that, we would have the invertible circuit we talked about.

Sagredo: How's that?

Salviati: Running freely, the three magnets shuttle among the four possibilities:

{00→0}, {01→0}, {10→0}, {11→1}.

But if we forcibly lock the A and B magnets both to zero, the magnets are forced into just one choice, {00→0}, meaning that C will take on the value 0.

Sagredo: That is like a multiplier operating in forward mode: 0 x 0 = 0, right?

Salviati: Yes. To operate in inverse mode, we can lock C to 0. The magnets cannot go to {11→1} anymore. So the system will now fluctuate among the three remaining choices: {00→0}, {01→0}, {10→0}. That is the multiplier in reverse. Given the output 0, it is telling us that there are three possible inputs consistent with it: 0 x 0, 0 x 1, and 1 x 0.
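(Another aside for the technically inclined: the fluctuations Salviati describes can be simulated in a few dozen lines. The sketch below assumes the standard Boltzmann-style p-bit update rule from the p-computing literature, with one valid choice of couplings J and biases h whose lowest-energy states are exactly the AND truth table; the values in the authors' hardware may differ.)

```python
import math
import random
from collections import Counter

# Three p-bits A, B, C as bipolar spins: -1 means logical 0, +1 means logical 1.
J = {(0, 1): -1, (0, 2): 2, (1, 2): 2}   # symmetric pairwise couplings
h = [1, 1, -2]                            # per-p-bit biases
BETA = 1.0                                # "inverse temperature" of the network

def total_input(m, i):
    """Input to p-bit i: its bias plus weighted chatter from the other p-bits."""
    s = h[i]
    for (a, b), w in J.items():
        if i == a:
            s += w * m[b]
        elif i == b:
            s += w * m[a]
    return s

def run(sweeps=20000, clamp=None):
    """Gibbs-sample the network; `clamp` pins chosen p-bits to fixed values."""
    clamp = clamp or {}
    m = [random.choice([-1, 1]) for _ in range(3)]
    counts = Counter()
    for _ in range(sweeps):
        for i in range(3):
            if i in clamp:
                m[i] = clamp[i]
                continue
            # p-bit update: probability of "1" is set by a tanh of the input
            p_one = 0.5 * (1 + math.tanh(BETA * total_input(m, i)))
            m[i] = 1 if random.random() < p_one else -1
        bits = "".join("1" if s > 0 else "0" for s in m)
        counts[bits[:2] + "->" + bits[2]] += 1
    return counts

# Free-running: four dominant peaks, one per line of the AND truth table.
print(run().most_common(4))
# C locked to 0: the multiplier runs in reverse, fluctuating among
# {00->0}, {01->0}, {10->0}.
print(run(clamp={2: -1}).most_common(3))
```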

Sagredo: I see. But how do you engineer this magical communication among your p-bits? Indeed, how do you even know what kind of communication to engineer?

Salviati: There are well-developed methods for figuring out the kind of communication needed to create a desired set of peaks.

Sagredo: Now you're sounding evasive. I thought you said you had figured it out and that is why you were so excited.

Salviati: Indeed, this part is well known, at least for some applications. Companies are building probabilistic computers using normal hardware and random-number generators to simulate the probabilistic bit flipping I've been discussing. But doing it that way wastes a lot of energy and drains the battery of a laptop in a hurry. Our circuit performs the same function with just three transistors and a special hardware element whose intrinsic physics generates the random numbers.

Sagredo: So what is that special element you needed to get this to work?

Salviati: We've used something called a magnetic tunnel junction to build a neat device that lets p-bits communicate easily. Its output, which (because I'm an engineer) I'll call Vout, fluctuates. If Vin is zero, Vout will be 1 for 50 percent of the time and 0 for the other 50 percent. But if Vin is positive, Vout prefers the 0 state. And if Vin is negative, Vout prefers the 1 state. If you make Vin sufficiently positive or negative, you can “lock” the output to one state or another.

That is how each p-bit listens to the other p-bits, through the input voltage Vin. And it talks through its output voltage Vout. For example, p-bit A can communicate with a p-bit B by feeding the output of A back to the input of B. We used this device to build invertible circuits. We've done nothing earthshaking with them so far: They're just a proof of concept. But we've shown that such a device can be built with an advanced technology that will one day let us build huge circuits to solve real-world problems.
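(Here, too, a toy model may help. The sketch below treats a single p-bit as a "binary stochastic neuron" with a tanh response, a common idealization in the p-bit literature; it is our assumption, not the measured transfer curve of the real tunnel-junction device.)

```python
import math
import random

def p_bit_sample(v_in, beta=1.0):
    """One sample of Vout: positive v_in favors 0, negative favors 1,
    following the sign convention Salviati uses."""
    p_one = 0.5 * (1 - math.tanh(beta * v_in))
    return 1 if random.random() < p_one else 0

for v_in in (-3.0, -1.0, 0.0, 1.0, 3.0):
    frac = sum(p_bit_sample(v_in) for _ in range(10000)) / 10000
    print(f"Vin = {v_in:+.1f} -> Vout is 1 about {frac:.0%} of the time")
# Roughly: -3.0 locks the output near 1, 0.0 gives 50/50,
# and +3.0 locks it near 0.
```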

Sagredo: I see. What sorts of real-world problems could you solve with suitably complicated p-bit circuits?

Salviati: Well, we could use them to solve optimization problems: problems that require you to find configurations that minimize some cost function.

Sagredo: Could you say that again in English?

Salviati: People solve optimization problems every day—like figuring out the optimal order to deliver a set of packages. In that case, the total distance driven is the cost function that you want to minimize. Problems like that can be mapped onto the basic architecture we use. Each problem requires a specific pattern of connections. Once we figure out the pattern and wire it up properly, the p-bit circuit will give the answer in the form of configuration peaks.
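(One last aside: here is a miniature version of that idea, of our own devising rather than the authors' delivery-route mapping. Four p-bits are wired with negative couplings around a square, so the lowest-cost configurations, and hence the histogram peaks, are the two "max-cut" answers.)

```python
import math
import random
from collections import Counter

# Max-cut on a 4-node ring: negative couplings make linked p-bits prefer
# opposite values, so the network settles into the two configurations
# that "cut" all four edges: 0101 and 1010.
J = {(0, 1): -1, (1, 2): -1, (2, 3): -1, (3, 0): -1}
BETA = 1.5
m = [random.choice([-1, 1]) for _ in range(4)]
counts = Counter()
for _ in range(20000):
    for i in range(4):
        s = sum(w * m[b if i == a else a]
                for (a, b), w in J.items() if i in (a, b))
        m[i] = 1 if random.random() < 0.5 * (1 + math.tanh(BETA * s)) else -1
    counts["".join("1" if x > 0 else "0" for x in m)] += 1
print(counts.most_common(2))   # dominated by '0101' and '1010'
```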

Sagredo: Okay, you've got me interested, but we're about to land. How can I find out more about your research?

Salviati: I've recently published an article on the topic in a magazine called IEEE Spectrum, and the online version of that includes an easy-to-read summary of how we built a p-bit computer that calculates the degree of genetic relatedness between relatives.

How to Build a Simple Probabilistic Computer

Here are some simple circuits that could be used to judge genetic relatedness

So how could you actually build a probabilistic computer? The strategy we are pursuing is based on a device whose output (Vout) has only two possible values, with probabilities that are controlled by the input voltage (Vin). Those output voltages might be ±1 volt or ±0.8 volt or whatever. We'll assign those to two logical states and denote them as 0 and 1.

There are many ways to construct such a device. The strategy that's most interesting to us is using something called a magnetic tunnel junction—a device whose resistance depends on its magnetic state. In this case, you'd use a tunnel junction that's unstable: It flips rapidly between its two magnetic states, resulting in its resistance constantly varying between two values. A p-bit combines such a tunnel junction with a few transistors. One transistor is controlled by the input voltage; a few others serve just to buffer the output.

But you could create a circuit that works like this in many other ways. It really doesn't matter how it's built.

The other thing we need in order to construct the fundamental building block of a probabilistic computer is some sort of circuitry that can take multiple inputs and combine their influence to produce a single voltage at the Vin terminal. That, too, can be achieved in many different ways, for example, with a simple network of resistors or capacitors.

In the end, we have a circuit that has multiple inputs and one output, which fluctuates rapidly between its two possible states. We can hide all the complicated internal elements and just draw this p-bit device as a rectangle.

If none of the inputs are connected to anything, the output spends half the time in the logical 0 state and half the time in the logical 1 state. Applying a signal to one or more of the inputs can affect the output. It still fluctuates rapidly, but it may then spend more of the time in the 1 state and less time in the 0 state or vice versa.

A probabilistic computer consists of many such p-bit units wired together in some way. To illustrate how such a computer can compute something interesting, we've simulated one that calculates the degree of genetic correlation between relatives. This is a very simple thing to calculate, but it's useful for getting across the general idea of how you can compute things with p-bits.

First, consider a family with two parents and their two children. For that, our probabilistic computer would have to have four p-bits, one to represent each person involved.

To model the genetic influence of each parent on each child, we'll simply connect the output from each parent to one of the inputs for each of the two children.

Now the state of a child at any one instant is influenced by each parent. To judge the overall degree of genetic correlation between the two siblings, you need one other simple logic element: an exclusive NOR (XNOR) gate.

The output of an XNOR gate is 1 when its two inputs are the same; otherwise the output is 0. Now, just connect the outputs of the two child p-bits to the two inputs of this XNOR gate.

The output of the XNOR will now be 1 whenever the outputs of the two child p-bits have the same value. To get the overall degree of correlation between the two siblings, you merely need to average the output of the XNOR over time, which can be done using just a resistor and capacitor.

You might have anticipated the answer here: the time-averaged output of the XNOR gate is 0.5. (With the bipolar signal levels described earlier, the average of the XNOR output is exactly the correlation between the siblings' states; the two states match 75 percent of the time and differ 25 percent of the time, for a net average of 0.5.) That is indeed the average degree of genetic correlation between two siblings.
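(A sketch of this experiment, for readers who would like to try it: the model below assumes each child p-bit simply copies a randomly chosen parent at each tick. That is our stand-in for the voltage coupling in the real circuit, but it yields the same statistics in the strongly coupled limit.)

```python
import random

TICKS = 100_000
matches = 0        # counts ticks on which the two siblings agree
bipolar_sum = 0    # the XNOR output at +1/-1 signal levels, as the RC filter sees it

for _ in range(TICKS):
    mom, dad = random.choice([0, 1]), random.choice([0, 1])
    child1 = random.choice([mom, dad])   # each child inherits from one parent
    child2 = random.choice([mom, dad])
    agree = child1 == child2
    matches += agree
    bipolar_sum += 1 if agree else -1

print("fraction of time the siblings match:", matches / TICKS)               # ~0.75
print("time-averaged bipolar XNOR (the correlation):", bipolar_sum / TICKS)  # ~0.50
```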

To what degree are you genetically related to, say, an uncle? You just need to hook up six p-bits to compute that.

The answer you'd get by measuring the output of such a circuit is 25 percent. We've simulated such circuits, which, after a brief warm-up period, approach the values you'd expect.

These calculations of the degree of genetic correlation between relatives are rather trivial, but they serve to illustrate how probabilistic computers, albeit very simple ones, could be built.

With larger numbers of p-bits and more complicated networking between them, probabilistic computers can tackle other challenges, including factoring numbers and solving optimization problems like the well-known traveling salesman problem. Just how far this approach can be extended, and whether such circuits will prove truly practical for problems of genuine interest, is not yet clear, but we're excited to be exploring the possibilities.

This article appears in the April 2021 print issue as “Dialogue Concerning the Two Chief Computing Systems."
