Thursday, October 24, 2019

Google's Quantum Supremacy Paper

[Important update, 10/25 0630. Dave Bacon has corrected me.  The chip is fully programmable. I had thought that the choice of coupler patterns (Fig. S25 in the SOM) would limit what could and could not be done. I thought those were the only patterns the hardware supported, but it's just the set that they chose; in fact, any couplers can be on/off in any cycle, as long as they aren't using the same qubits. I stand corrected! Thanks, Dave!]

[Edit 10/24. 1730 JP time: softened the language a bit in the first couple of paragraphs; it came out harsher than I intended in the original version.]

We interrupt your irregularly scheduled programming on cryptography (What Every Quantum Researcher and Engineer Should Know about Classical Cryptography) for some comments on Google's new supremacy paper...

By now, everyone on the entire planet and some people in space have heard that Google has claimed the achievement of "quantum supremacy". As the resident quantum pundit on Dave Farber's IP list, I suppose I'm obliged to say a few things, but I really don't have much to add beyond what some of the most prominent people have said, so this is largely a collection of links I mostly endorse...

Fundamentally, Google claims that their 53-qubit superconducting processor can solve a problem that would take a classical supercomputer 10,000 years.  But the problem is rather constrained; arguably, they defined the problem as whatever the computer is capable of doing.  That sounds harsh, and I don't mean it to be; it's still a tour de force that will stand as an important milestone, and it is programmable, but it's not yet fully general. Actually, let me soften that a bit; the chip itself was designed in conjunction with an understanding of what they are physically capable of doing, but very much with an eye toward what would most clearly demonstrate supremacy within their physical constraints.  So it's more fair to say it's hardware/software co-design. (And as I always tell my students, you can't design any computer without a plan for its workload!)

If you squint, it's something like a satisfiability problem (SAT, in CS theory terms).  In a SAT problem, you're given a Boolean circuit and challenged to find a set of input values that makes the (single) output TRUE.  In this case, measuring out all 53 qubits, the output isn't a single value but a probability distribution over bitstrings. Thanks to quantum entanglement and interference, that distribution isn't flat ("any of the $2^{53}$ outcomes, equally likely") but instead has complex statistical characteristics, and it's those characteristics they are looking for.  They started with a small problem size (using a subset of the qubits), calculated the distribution, saw that the machine conforms (but see the next paragraph!), then increased the size past the level where their code can simulate it, and declared victory.
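
(To make "complex statistical characteristics" a little more concrete, here's a toy sketch -- my own illustration, nothing to do with Google's actual code or chip -- for a handful of qubits: apply a random unitary to |00...0> and the output probabilities aren't uniform; they form a speckle pattern whose histogram is roughly exponential, the Porter-Thomas distribution that random circuit sampling leans on. The function name and parameters are mine, purely for illustration.)

    import numpy as np

    def random_circuit_distribution(n_qubits=8, seed=42):
        """Toy stand-in for a random quantum circuit: a Haar-random unitary
        applied to |00...0>, returning the output probability distribution."""
        rng = np.random.default_rng(seed)
        dim = 2 ** n_qubits
        # QR decomposition of a complex Gaussian matrix gives a Haar-random unitary
        z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        q, r = np.linalg.qr(z)
        q = q * (np.diagonal(r) / np.abs(np.diagonal(r)))  # fix column phases
        state = q[:, 0]                    # the column acting on |00...0>
        return np.abs(state) ** 2          # probability of each bitstring

    probs = random_circuit_distribution()
    print("uniform would be", 1 / len(probs))
    print("observed spread: min %.2e, max %.2e" % (probs.min(), probs.max()))
    # A histogram of len(probs) * probs falls off roughly as exp(-x): Porter-Thomas.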

There are three big catches to this: one, as already stated, it's not fully programmable; two, it's not clear that this is beyond the capability of classical systems (see below); three, I believe them when they say it works, but noise remains the limiting factor, and it takes tremendous statistical skill to pull the signal out of the noise -- indeed, the fidelity (roughly, the probability that it did the right thing) for their largest problem is only 0.2%!  That is, only one time in 500 does the circuit run properly, with no errors.  And given that the output is supposed to be a probability distribution, and that the space of answers is about $10^{16}$, figuring out whether it differs from pure noise is tough.
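
(How do they pull that signal out? With the cross-entropy benchmarking mentioned below; the "linear" version of the fidelity from the paper is $F_{XEB} = 2^n \langle P(x_i) \rangle - 1$, where $P(x_i)$ is the *ideal*, classically simulated probability of each bitstring the hardware actually returned -- about 1 for a perfect device, about 0 for pure noise. Here's a minimal sketch of the estimator; the code and the made-up test distribution are mine, standing in for the real simulated probabilities.)

    import numpy as np

    def linear_xeb_fidelity(ideal_probs, observed_bitstrings, n_qubits):
        """Linear cross-entropy benchmarking estimate of fidelity.

        ideal_probs: length-2^n array of ideal (simulated) bitstring probabilities.
        observed_bitstrings: integer indices of the bitstrings the hardware returned.
        Returns roughly 1 for a perfect device, roughly 0 for uniform noise.
        """
        mean_p = np.mean(ideal_probs[observed_bitstrings])
        return (2 ** n_qubits) * mean_p - 1.0

    # Sanity check with a made-up 10-qubit "ideal" distribution:
    rng = np.random.default_rng(0)
    n = 10
    probs = rng.exponential(size=2 ** n)        # Porter-Thomas-like speckle
    probs /= probs.sum()
    perfect = rng.choice(2 ** n, size=100000, p=probs)  # samples from the ideal distribution
    noise = rng.integers(0, 2 ** n, size=100000)        # uniform noise
    print(linear_xeb_fidelity(probs, perfect, n))  # close to 1
    print(linear_xeb_fidelity(probs, noise, n))    # close to 0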

The paper itself:
https://www.nature.com/articles/s41586-019-1666-5
53 qubits and 76 authors by my count, which makes 1.43 authors per qubit.  The team is headed by John Martinis (pronounced close to "Martinez"), one of the long-time key people in superconducting qubits.  John was at UC Santa Barbara before Google lured him and his entire team away a few years ago, and put truly outstanding resources at their disposal.

There is also substantial (67-page) Supplementary Online Material (SOM) at the bottom of the web page; don't miss it if you want details of everything from the fabrication (two chips bonded face-to-face, one with the classical parts and one with the quantum parts!), to the cross-entropy benchmarking (XEB) used to statistically prove that the device works, to the (tremendously difficult) calibration procedure, to the algorithm itself.

News & views from William Oliver:
https://www.nature.com/articles/d41586-019-03173-4

The five-minute Google feel-good video:
https://www.youtube.com/watch?v=-ZNEzzDcllU

Blog post by Sundar Pichai, Google CEO:
https://blog.google/perspectives/sundar-pichai/what-our-quantum-computing-milestone-means

They built on techniques developed by Sergio Boixo & co.:
https://arxiv.org/abs/1608.00263

Naturally, IBM, one of the other biggest names in quantum computing, is pushing back:
https://www.ibm.com/blogs/research/2019/10/on-quantum-supremacy/
(full disclosure: I'm Vice Center Chair of Keio's Quantum Computing Center, and we are working with IBM; my students have recently been using IBM's own 53-qubit processor.)

They're not questioning the experiment itself, but whether it's truly post-classical.  Simulators have been advancing rapidly, and _ideas_ for simulators and what counts as quantum supremacy have been moving even more rapidly.

(n.b.: there is pushback on the term "quantum supremacy"; in particular, the Europeans don't like it, they think it echoes Nazi terminology.)

Anyway...the most straightforward way of simulating a quantum computer -- just storing and updating the full state vector -- takes memory exponential in the number of qubits, $2^n$ complex amplitudes at 16 bytes each:

10 qubits: 16KB
20 qubits: 16MB
30 qubits: 16GB
40 qubits: 16TB
50 qubits: 16PB
53 qubits: 128PB (144,115,188,075,855,872 bytes)
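
For concreteness, a quick back-of-the-envelope sketch (Python, purely illustrative) of where those numbers come from:

    def statevector_memory(n_qubits, bytes_per_amplitude=16):
        """Memory to hold a full 2^n state vector of complex128 amplitudes."""
        return (2 ** n_qubits) * bytes_per_amplitude

    for n in (10, 20, 30, 40, 50, 53):
        print(f"{n} qubits: {statevector_memory(n):,} bytes")
    # 53 qubits -> 144,115,188,075,855,872 bytes, i.e. 128 pebibytes.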

and, of course, a separate computation for each entry.  But numerous techniques compress that dramatically; one technique, tensor networks, (very, very, very roughly) allows a query for the value of a state to reuse previously-calculated results, with the whole thing compressed into a tree of pointers to values, plus modifiers (see the toy sketch below).  IBM's Edwin Pednault and team have simulated up to 56 qubits in special cases:
https://arxiv.org/abs/1710.05867
and this team's new work forms the core of the IBM pushback against Google:
https://arxiv.org/abs/1910.09534
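
To give a flavor of the idea -- with heavy hand-waving, and nothing like the scale or sophistication of these simulators -- here's a toy sketch of computing a single output amplitude of a three-qubit circuit by contracting the gates as a tensor network with numpy's einsum, rather than building the full state vector. The circuit and the index bookkeeping are mine, purely for illustration:

    import numpy as np

    # Gate tensors: H is 2x2; CNOT reshaped to rank 4 with index order
    # (out_ctrl, out_tgt, in_ctrl, in_tgt).
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]]).reshape(2, 2, 2, 2)
    ket0 = np.array([1.0, 0.0])

    def ghz_amplitude(bits):
        """Amplitude <bits| CNOT(1,2) CNOT(0,1) H(0) |000> via tensor contraction.

        Wires: a,b,c are the inputs of qubits 0,1,2; d is qubit 0 after H;
        e,f are qubits 0,1 after the first CNOT; g,h are qubits 1,2 after
        the second CNOT.  The full 2^n state vector is never materialized.
        """
        bras = [np.eye(2)[int(x)] for x in bits]
        return np.einsum('a,b,c,da,efdb,ghfc,e,g,h->',
                         ket0, ket0, ket0, H, CNOT, CNOT,
                         bras[0], bras[1], bras[2])

    print(ghz_amplitude("000"))  # ~0.7071, i.e. 1/sqrt(2)
    print(ghz_amplitude("111"))  # ~0.7071
    print(ghz_amplitude("101"))  # 0

The real simulators pick clever contraction orders (and exploit circuit structure) so that the intermediate tensors stay manageable; that ordering problem is where most of the difficulty lives.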

Indeed, Alibaba simulated some special cases related to supremacy last year, up to *144* qubits:
https://arxiv.org/abs/1805.01450

There has been some recent discussion of what it is that truly determines the difficulty of a quantum computation; it's not just the number of qubits or the number of entangling two-qubit gates, it's a more complex function. I should dig up some references on that.

Scott Aaronson's quantum supremacy FAQ:
https://www.scottaaronson.com/blog/?p=4317
and a new post on the topic:
https://www.scottaaronson.com/blog/?p=4372

Simon Devitt's podcast, Meet the meQuanics, just did an episode with some of the inventors of the theoretical techniques Google used:
https://soundcloud.com/mequanics/meet-the-mequanics-e53-quantum-supremacy-a-montenaro-m-bremner-r-mann

More useful discussion:
https://spectrum.ieee.org/tech-talk/computing/hardware/how-googles-quantum-supremacy-plays-into-quantum-computings-long-game

And from John Preskill himself, inventor of the term:
https://www.quantamagazine.org/john-preskill-explains-quantum-supremacy-20191002/

Finally, let me point out that Chris Monroe is quoted in Financial Review, talking about his ion trap experiment from a couple of years ago -- featuring, you guessed it, 53 qubits (they were trying for 52 and wound up with 53):
https://www.afr.com/technology/why-google-wasn-t-really-the-first-to-achieve-quantum-supremacy-20190927-p52vg6
https://www.nature.com/articles/nature24654
https://arxiv.org/abs/1708.01044
52 was chosen to get ahead of Misha Lukin's group, which had 51:
https://www.nature.com/articles/nature24622
https://arxiv.org/abs/1707.04344

More than enough for now.  Hope this helps!
