

Why I'm Wagering $100,000 on Quantum Computing

If quantum computing really does turn out to be impossible for some fundamental reason, I'll be absolutely thrilled.


Hi, I'm Scott Aaronson. I study quantum computing at MIT. Recently, on my blog, I offered a $100,000 reward for a demonstration, convincing to me, that scalable quantum computing is impossible in the physical world. The award is entirely at my discretion; I might also choose to give smaller awards for "partial" falsifications of scalable quantum computing. Rachel Courtland of IEEE Spectrum asked me to comment on why I made such an offer; in particular, she wanted to know "why it's even an open question whether quantum computing is scalable." She adds: "I think a lot of non-experts assume that it's just a question of investment, time, and technological innovation."

Personally, I think that those non-experts are completely right: it is just a question of investment, time, and innovation! Indeed, that's the only reason I felt emboldened to make this offer. While I could scrounge together $100,000 if necessary, it certainly wouldn't be easy on a professor's salary.

The context for my offer is that, for decades, a small but vocal minority of computer scientists and physicists has held that building a scalable quantum computer is impossible: not just really, really hard (which everyone agrees about), not just "impossible for the next thousand years" (how would anyone know?), but impossible even in principle, in the same sense that perpetual-motion machines or faster-than-light travel are impossible in principle. A few of the skeptics seem rather angry, and express the view that quantum computing researchers are some sort of powerful cabal bent on suppressing dissent.

Tiny quantum computations have already been demonstrated in the lab – for example, 15 has been factored into 3×5 – so the question is whether quantum computers can be "scaled up" to bigger sizes capable of solving more interesting problems. The central problem is decoherence, meaning unwanted interactions between the computer and its external environment, which prematurely "measure" the computer and destroy its fragile quantum state. The more complicated the quantum computation, the worse a problem decoherence becomes. So for the past fifteen years, the hope for building scalable quantum computers has rested with a mathematically elegant theory called "quantum fault-tolerance," which shows how, if decoherence can be kept below a certain critical level, clever error-correction techniques can be used to render its remaining effects insignificant.
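The threshold behavior at the heart of fault-tolerance can be sketched with a toy calculation. The recursion and the 1 percent threshold below are illustrative assumptions, not the numbers for any particular code: under concatenated encoding, each level of encoding roughly squares the ratio of the error rate to the threshold, so sub-threshold errors shrink doubly exponentially while super-threshold errors blow up.

```python
# Toy model of the fault-tolerance threshold theorem.
# Assumption: a concatenated code where one level of encoding maps the
# error rate p to roughly (p / p_th)**2 * p_th, with an illustrative
# threshold p_th = 1e-2. Real thresholds depend on the code and noise model.

def logical_error_rate(p_physical, p_threshold=1e-2, levels=3):
    """Error rate after `levels` rounds of concatenated encoding."""
    p = p_physical
    for _ in range(levels):
        # Square the ratio to the threshold; cap at 1 to keep it a probability.
        p = min((p / p_threshold) ** 2 * p_threshold, 1.0)
    return p

# Below threshold: three levels of encoding crush the error rate.
print(logical_error_rate(1e-3))   # doubly-exponentially small
# Above threshold: encoding only makes things worse.
print(logical_error_rate(3e-2))   # saturates at 1
```

The punchline is the asymmetry: starting a factor of 10 below threshold, three levels of encoding already push the logical error rate below one in a billion, while starting a factor of 3 above threshold drives it to certainty.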

Not surprisingly, most quantum computing skeptics – among the ones who offer physical arguments at all! – focus on trying to poke holes in particular methods for quantum fault-tolerance. Some of their criticisms are interesting and might lead to good science. The problem, from my perspective, is that so far the skeptics' case has been entirely negative: none of them is able even to hint at an alternative picture of physical reality that would explain, from basic principles, why no form of fault-tolerance can work and why quantum computing isn’t possible.

Most of the skeptics say that they have no problem with quantum mechanics itself (it is, after all, the best-confirmed physical theory of all time); it's only scalable quantum computers that they object to. To date, though, no one really knows how you can have quantum mechanics without the possibility of quantum fault-tolerance. So as I see it, the burden falls on the skeptics to give an alternative account of what's going on that would predict the impossibility of scalable QC.

An even more dramatic way to put the point is this: if quantum computing is really impossible, then we ought to be able to turn that fact on its head. Suppose you believe that nothing done by “realistic” quantum systems (the ones found in Nature) can possibly be used to outperform today’s classical computers. Then by using today’s classical computers, why can’t we easily simulate the quantum systems found in Nature? What is the fast classical algorithm for simulating those quantum systems? How does it work? Like a wily defense attorney, the skeptics don't even try to address such questions; their only interest is in casting doubt on the prosecution's case.
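One way to make that challenge concrete: the brute-force classical description of an n-qubit quantum state is a vector of 2^n complex amplitudes, so merely storing it exhausts any realistic memory at modest qubit counts. This is a back-of-the-envelope sketch; specialized classical methods do better on restricted classes of circuits, but no general fast simulation is known.

```python
# Memory needed to store a full n-qubit state vector classically:
# 2**n complex amplitudes at 16 bytes each (two 64-bit floats).

def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (20, 30, 40, 50):
    gib = statevector_bytes(n) / 2**30
    print(f"{n} qubits: {gib:,.3f} GiB")
```

At 30 qubits the state vector already fills 16 GiB; at 50 qubits it is 16 pebibytes, far beyond any existing machine – and that is before accounting for the time to update every amplitude on each gate.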

The reason I made my $100,000 bet was to draw attention to the case that quantum computing skeptics have yet to offer. If quantum computing really does turn out to be impossible for some fundamental reason, then once I get over the shock to my personal finances, I'll be absolutely thrilled. Indeed, I'll want to participate myself in one of the greatest revolutions in physics of all time, a revolution that finally overturns almost a century of understanding of quantum mechanics. And whoever initiates that revolution will certainly deserve my money.

But what I know for sure is that quantum computing isn't impossible for some trivial reason that’s simply been overlooked for 20 years by a very large group of physicists, mathematicians, computer scientists, and engineers. And I hope putting my money where my mouth is will help more people realize that.

The Conversation (1)
Todd Krueger, 12 Nov 2022

While I believe quantum entanglement is proven, I also believe that quantum entanglement cannot be reliably sustained in a computer system without what I would call Quantum Entanglement Cascade Collapse, where localized atoms sharing like frequency states will interact with one another randomly within a like-system state. This being the case, an entanglement of bits could occur and would skew the results of code states as they are delivered, with unintentional entanglement across the chip. This problem would be magnified each cycle.

bit 1)X<->Null(+/-)<->Y =C | bit 2)X<->Null(+/-)<->Y=C

X

bit 3)X<->Null(+/-)<->Y =C | bit 4)X<->Null(+/-)<->Y=C

C=bit-state/per clock-cycle

If bit 1 is held in a like-state through a cycle while bit 4 is held in the same cycle, it is not just a matter of physical grace in a cycle, but a random like-cycle can cause cross to bit 3 just by the physical grace of an adjacent bit’s atom hitting a random entanglement that could not be predicted.

I know I’m probably not up to speed on this, but this is how I see it running in my head. Comments? By the way, you are fantastic.

Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises

[Illustration: A plate of spaghetti made from code. Credit: Shira Inbar]

You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize that they were cutting any corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed. Time-to-market pressures all but guarantee that their software will contain more bugs than it otherwise would.
