Intel's New Path to Quantum Computing

Intel's director of quantum hardware, Jim Clarke, explains the company's two quantum computing technologies


Samuel K. Moore is IEEE Spectrum’s semiconductor editor.

Intel's full silicon wafer of test chips, each containing up to 26 qubits, along with its 49-qubit superconducting chip Tangle Lake.
Photo: Amy Nordrum

Despite a comparatively late start, Intel is progressing quickly along the road to a useful quantum computer. The company’s director of quantum hardware, Jim Clarke, came by IEEE Spectrum’s offices on 9 May to prove it. He brought with him samples of two technologies that show why the chip fabrication powerhouse can make a unique contribution to the quest for exponentially faster computing. The first was Tangle Lake, a specially packaged chip containing 49 superconducting qubits that Intel CEO Brian Krzanich showed off at CES in January. The other was something new: a full silicon wafer of test chips, each containing up to 26 qubits that rely on the spins of individual electrons. The first of these wafers arrived at Delft University of Technology, in the Netherlands, that day for testing. Clarke’s group can make five such wafers per week, meaning that Intel has probably now made more qubit devices “than have ever been made in the world of quantum computing.”


IEEE Spectrum: What’s special about Tangle Lake?

Jim Clarke: I can’t underscore enough how important the packaging is for these systems. Typically we make our computers to run at room temperature, in our back pocket or on our wrist, or at slightly higher temperatures, but never at a fraction of a degree above absolute zero [as you need for superconducting qubits]. So these guys developed a package that could withstand the temperatures mechanically and still be relatively clean from a signal perspective.

IEEE Spectrum: Is there a limit to the density of qubits using the technology in Tangle Lake? Those pinouts look pretty big.

Clarke: I think already this is the largest chip-to-[printed circuit board] attachment that Intel has ever done. So any larger than this on a single piece—the coefficient of thermal expansion and shrinkage would be severe. Not only that, as you see, the actual connectors have a very large footprint, and those are important right now.

We can go bigger (more qubits per chip) with this technology, but not by much. So, what we’ll do is work to make the qubits smaller and the connections smaller. And so, within the same size footprint, we can maybe increase the number of qubits by several factors. But it’s hard to reach a place with that technology where you’d have the millions of qubits you would need to do something really life altering.

IEEE Spectrum: So how do you get to millions of qubits?

Clarke: I can see a path with this technology to perhaps 1000. Beyond that, I think you have to get creative. That’s one of the reasons that we’re working on multiple qubit technologies. When we spoke in the fall, we were talking almost exclusively about the superconducting chips. But we’re actually working on two technologies. One’s a bit further along—that’s the superconducting. They each have pluses and minuses. For example, the size of the qubits in this other type, called a silicon spin qubit, is a million times smaller. So that would be one possible benefit.

IEEE Spectrum: How’s the silicon spin qubit work?

Clarke: Think of a conventional transistor with a steady stream of current flowing through it. Well, what we have is a single electron trapped in the transistor. That single electron can have one of two states: spin up or spin down. Those are the two states of the qubit. So what we are doing is essentially creating a series of single-electron transistors and coupling them together using our advanced transistor process technology.
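To make that two-level picture concrete, the sketch below (plain NumPy, not Intel’s own software) represents a single spin qubit as two complex amplitudes, one for spin-down and one for spin-up, and rotates it from spin-down into an equal superposition of the two states.

```python
import numpy as np

# A spin qubit is a two-level system: spin-down = [1, 0], spin-up = [0, 1].
down = np.array([1, 0], dtype=complex)
up = np.array([0, 1], dtype=complex)

def rx(theta):
    """Single-qubit rotation about the X axis by angle theta."""
    c = np.cos(theta / 2)
    s = -1j * np.sin(theta / 2)
    return np.array([[c, s], [s, c]])

# Start in spin-down and rotate halfway: an equal superposition of down and up.
state = rx(np.pi / 2) @ down
print(np.abs(state) ** 2)  # measurement probabilities, approximately [0.5, 0.5]
```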

Our philosophy with this technology is that by using Intel’s best process technology for transistors, we should be able to build the best spin qubit. That’s one of the reasons that we’re studying this type.

IEEE Spectrum: So how many qubits are on this one wafer?

Clarke: Just like a conventional wafer, you dice it up into individual chips. Each chip has 3, 7, 11, or 26 qubits. When we make a wafer like that, there are thousands and thousands of these little sub-arrays. So it’s quite possible that with this wafer we will have made more qubit arrays than have ever been made in a university setting before.

IEEE Spectrum: How far along is the spin-qubit work compared with the superconducting qubit work?

Clarke: If the superconducting qubit community is [making chips with] between 10 and 50 qubits, the spin community is on the order of just a couple of qubits. It’s less mature by several years. What has been missing in that space has been, basically, the process control of a big semiconductor company like Intel. So that’s what we’re bringing to the table.

What we’re doing in the near term with those spin-qubit chips is trying to turn them into qubits. Right now, they are linear arrays of what we call quantum dots. We’re trying to basically prove the physics of larger devices.

Contrast that with the superconducting qubits: While we’re still trying to make the superconducting qubits better, they’re at a stage where we can start integrating them into a system. A quantum processor of the Tangle Lake size is big enough that we can actually start to build all the components around it. We’re developing a quantum version of [the Intel Architecture] to go along with Tangle Lake. At some point, we hope to be able to basically swap a different qubit type into that overall structure.

We’re trying to build a quantum system that is extensible. So it can go [on without a hitch] whether it’s 50 qubits or whether it’s 1 million qubits. We want to make it so if I show up with a new kind of qubit tomorrow, they won’t have to redo their system.

IEEE Spectrum: So there wouldn’t be differences in applications you’d build off of each qubit technology?

Clarke: We hope not. Effectively, what we’re doing is hedging our bets by pursuing two technologies. Once we reach certain milestones that we have around, say, multiqubit operation or error correction, then we’ll likely settle on one technology.

IEEE Spectrum: Can there be software development at this point?

Clarke: We’ve had a simulator for several years. It’s called the Intel Quantum Simulator. Last year we put it on [Intel’s open source software site] 01.org.

What’s interesting about it is that, on your laptop, you can simulate about 30 qubits. But quantum computing is exponential in its behavior, so you’d need a supercomputer for 40.
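To see why the ceiling sits somewhere between 30 and 40 qubits, note that a full state-vector simulation must store 2^n complex amplitudes for n qubits. The back-of-the-envelope calculation below (our illustration, independent of how the Intel Quantum Simulator is actually implemented) shows the memory requirement exploding:

```python
# Memory needed to hold a full n-qubit state vector:
# 2**n complex amplitudes at 16 bytes each (double precision).
for n in (30, 40, 49):
    gib = (2 ** n) * 16 / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")
# 30 qubits: 16 GiB         -> fits in a laptop's memory
# 40 qubits: 16,384 GiB     -> supercomputer territory
# 49 qubits: 8,388,608 GiB  -> impractical to simulate by brute force
```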

Typically, what we would do is have one of our algorithm developers come up with an algorithm, then perhaps test it on a simulator, and then provide feedback to the hardware team or test it on early stage hardware. So it’s basically a feedback loop.

Right now there’s quite a bit of learning just by going through the simulator. Once you reach a certain size—for instance, you can’t simulate 49 qubits—you have to find other ways to verify your algorithm.

IEEE Spectrum: How long will it likely take to get to, say, a 1000-qubit system?

Clarke: If you look through history, it was roughly 10 years between the first transistor and the first integrated circuit, and it was roughly 10 years between the first integrated circuit and the Intel 4004 microprocessor. The 4004 only had 2500 transistors. So if you think about where we are relative to that timeline, perhaps we’re in the early 1960s. So it’s not unreasonable in 5 years to have 1000 qubits. And that would be a really compelling achievement.

I think it’s probably closer to 10 years or more to have the 1 million qubits it would take to have profoundly changed society.

The hope is that if you can do 1000, that it isn’t 1000 times harder to do a million. But that remains to be seen, and it might be an optimistic wish at this point.

IEEE Spectrum: Is it likely that future quantum computers will have to link different chips and move quantum information between them?

Clarke: Our academic partners in the Netherlands are at least studying whether, if you have a set of spin qubits on one part of the silicon, you can affect qubits in another part. Why is that interesting? If you have space in between the qubits, it allows you to have local electronic control—a more integrated computer chip.

What you see with Tangle Lake is basically just the qubits. There’s no control electronics. That’s all external to the refrigerator. [Superconducting qubits work only at the millikelvin temperatures reachable inside large dilution refrigerators.] With spin qubits, there are, for several reasons, opportunities to bring those control electronics closer to the actual qubits.

There are a lot of benefits to putting control electronics inside the fridge. Intel is working on cryogenic control chips that have been optimized for low-temperature operation and are compatible with these particular chips here. I think you’ll see some of that later in the year.

IEEE Spectrum: Why is it easier to integrate the control electronics in the spin-qubit chip than in a superconducting qubit chip?

Clarke: A few reasons. One of the main reasons is that these qubits, we think, can be operated at higher temperatures. And by higher temperatures I mean that instead of one-hundredth of a degree above absolute zero, it’s more like 1 degree above absolute zero. That doesn’t sound like a lot, but from a refrigeration standpoint, you gain an order of magnitude in cooling capacity. What that means is you can have some local electronics around—electronics that dissipate power—and still be able to keep your qubits cold enough to operate.

IEEE Spectrum: Don’t these huge refrigerators make quantum computing too energy intensive?

Clarke: I get asked that a lot because certainly in the data center all they care about is energy efficiency. That’s why they build these in the middle of nowhere, where land is cheap, and it’s usually next to a big river like the Columbia River in Oregon. The way to think about this is: There’s no promise it’s going to be energy efficient. If I go to a data center and say, “You need one or two of these; they’re not energy efficient, but they’re exponential in computing power, meaning hundreds and thousands of times more powerful than these other systems,” you’re not really going to care about the energy. If power efficiency is compute per energy, we might be going up a little in energy, but we’re going up orders of magnitude in computing.

IEEE Spectrum: How many of these silicon spin qubit wafers have you produced?

Clarke: Probably five wafers per week. Our first samples arrived in Delft today. That rate is a pittance compared with one of our transistor programs, where they’re producing hundreds or thousands of wafers, even in development. But still, five of those wafers have more devices than have ever been made in the world of quantum computing.

IEEE Spectrum: What’s your prediction for the most interesting near-term applications for quantum computers?

Clarke: The first types of applications you’re going to see are optimization algorithms, which are used heavily, but are not something the average person would appreciate. These optimization problems, I’ll say, are ubiquitous in chemistry, biology, and different finance or mathematical systems. They’re just not things I would necessarily appreciate. It’s something I had to teach myself. 

People are starting to work on chemistry and materials. (My background is in chemistry.) If you could understand a molecule better than it’s ever been understood before, or faster than it’s ever been understood before, it opens up an area in chemistry, pharmaceuticals, and materials that could possibly impact our lives more quickly. These would certainly be the applications we would see first, before we start working on cryptography.

IEEE Spectrum: What about the applications farther out?

Clarke: There, you’re really doing things like matrix inversion, cryptography, unstructured search. And probably some of the applications haven’t been developed yet. I like to go back to the Cray-1 supercomputer in the 1970s. There was actually a fight at the time as to which national lab got the very first one. It’s been 40 years, but it’s still hard to believe that what we can do in our back pocket is more than it could do at the time. So really far down the road, it’s hard for me to predict. But one thing is for sure: If you develop computing power, someone will find a way to use it.

IEEE Spectrum: What are some important quantum computing milestones to look forward to?

Clarke: There are a few. The quality of the qubits has to be good enough that you don’t have an error every time you do an operation. We call this fidelity. Second, you need to prove error correction. These qubits don’t last very long, so you need a lot of redundant qubits to correct for errors. That hasn’t been proven to the extent that it needs to be proven. The third issue is control electronics. To operate these things at low temperature without a lot of latency, they need to be fast. That’s still an area for development.
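A rough calculation (ours, not a figure Clarke quoted) shows why fidelity and error correction are tied together: the chance of running even a modest circuit without a single error collapses unless per-operation fidelity is extremely high or errors are corrected along the way.

```python
# Probability that a circuit of num_gates operations runs with no errors,
# assuming each operation independently succeeds with the given fidelity.
def circuit_success(per_gate_fidelity, num_gates):
    return per_gate_fidelity ** num_gates

for f in (0.99, 0.999, 0.9999):
    print(f"fidelity {f}: P(no error in 1,000 gates) = {circuit_success(f, 1000):.3f}")
# 0.99   -> 0.000 (hopeless without error correction)
# 0.999  -> 0.368
# 0.9999 -> 0.905
```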

The thing that worries me most is the interconnects—the wiring between qubits. It sounds simple. I’ll give you an example: Tangle Lake has 108 RF connectors to the outside world. Yet it has only 49 qubits. Our server chips have 7 billion transistors, but only 2,000 pins to the outside world, and most of those are power and ground. Our memory chips—say, something like Optane—could have a terabyte of memory but fewer than 100 pins to the outside world. I think it’s unreasonable to have more wires than qubits, by a long shot.
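Extrapolating the figures Clarke quotes (our arithmetic, not his) makes the problem obvious: at today’s ratio of roughly two RF lines per qubit, a million-qubit machine would need millions of connectors, whereas classical chips route billions of transistors through a few thousand pins.

```python
# Fan-out implied by today's ratio of RF connectors to qubits.
connectors, qubits = 108, 49              # Tangle Lake figures quoted above
ratio = connectors / qubits               # roughly 2.2 lines per qubit
for target in (1_000, 1_000_000):
    print(f"{target:>9,} qubits -> ~{ratio * target:,.0f} connectors at today's ratio")
# A server chip, by contrast, routes ~7 billion transistors through ~2,000 pins,
# so qubit wiring will eventually have to be multiplexed far more aggressively.
```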

IEEE Spectrum: Do you have a solution to the interconnect problem?

Clarke: We have a few ideas, but I won’t broadcast them today. One thing about these ideas is that there is still some fundamental physics that has to be proven. To say it’s just an engineering problem now, which some are saying, is incorrect. There’s still a lot of fundamental science that has to be looked at. But it’s definitely exciting.

I was born in 1972, which was the tail end of the moon landings. I draw a lot of parallels to the Space Race with this project. We’re ultimately trying to fundamentally change computing for the next 100 years. This is certainly why a lot of competitors are in this space. They feel the same way.

IEEE Spectrum: Is it really a race? Or is it one of those “a rising tide lifts all boats” situations?

Clarke: When we announced our project in 2015 with TU Delft, our assumption was that no one place could do it themselves and you really had to partner. For us, that was partnering with the university, working hand in hand to design this stuff. That being said, what you’re seeing today is universities and companies isolating themselves because they think the technology is maturing sooner than it actually is. And I wonder whether we’re hindering our velocity by not working in a more collaborative environment.

We try to work with multiple universities. We’re open to [commercial partnerships] if other industry partners could work in a synergistic fashion.

IEEE Spectrum: Imagine it’s 2050. Which technology has been more important, AI or quantum computing? And yes, we realize the answer is “a bit of both.” But if you had to come down on one side, what would it be?

Clarke: I reserve the right to think about this further, but here’s my take: Quantum computing right now has a limited application space, perhaps more so than AI. So, in terms of ubiquity, [the winner] might be AI. But if you were to find a significant fraction of applications that could run faster on a quantum computer, nothing beats exponential. Nothing. And so, if there’s a possibility that a quantum computer can help, say, find a cure for cancer, that would put it in the lead.

Faster doesn’t do it justice. Humans don’t do exponential very well. To the same extent, people don’t fully grasp what Moore’s Law has done. I think humans don’t appreciate the exponent. Perhaps we’ll change that.
