Monday, February 10, 2020

A Science Fiction Short Story: Quantum Computing for the Next Fifteen Years

(See the note at the bottom about hybrid algorithms!)

I started working on quantum computing full time in 2003, 17 years ago next month, after just shy of 17 years of working on classical networks, operating systems, and storage systems. Shor's algorithm and the basics of fault tolerance were known, but nobody really knew how to fit them together into a machine architecture; that was the theme of my Ph.D. thesis. Topological codes were just beginning to be developed, but were far from practical. HHL, VQE, QAOA and other algorithms didn't yet exist; indeed, the notion of hybrid algorithms didn't exist, to the best of my knowledge. Architecture-aware algorithm analysis began while I was doing my thesis, and to date my own biggest contributions may still lie there. Superconducting qubits existed, but just barely, and the transmon was still in the future. Many things that make CS theorists happy have happened, but that's outside my bailiwick.

Back around 2012, IARPA instituted a program to investigate algorithms at desirable problem sizes. The problems were:
  1. The unique shortest vector (USV) problem, for a dimension of 50.
  2. Linear systems of equations, with an array dimension of 3 × 10⁸.
  3. The class number problem, for 124 decimal digits.
  4. Ground-state energy calculation for an iron-sulphur molecular complex with 208 basis functions, to get 9 bits of accuracy in the result.
  5. Quantum random walk on a binary welded tree, with a tree height of 300.
  6. The triangle-finding problem on an arbitrary graph.
  7. Boolean formula evaluation.
(Sorry, I seem not to have details on the problem instances for those last two.  See the supplementary material for our CACM article for some references, or my recent #QuantumComputerArchitecture tweetstorm and dig from there.) The early results from that program suggested that, at least for the most straightforward implementations of those algorithms, some of the problems required huge circuit depths. That triggered the first wave of pessimism, but since then we have gotten QAOA, a continuing stream of new algorithms, small-scale implementations of existing algorithms, and better software tools.
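
To see why "huge" is no exaggeration, here is a back-of-envelope sketch. Every number in it is an assumption I've picked for illustration (a problem size of a few thousand logical qubits, an O(n³) gate count with a constant factor of 10, and a millisecond per error-corrected logical gate step, since syndrome measurement tends to dominate); none of them comes from the IARPA study itself.

```python
# Back-of-envelope: serial runtime of a "straightforward" implementation.
# All numbers are illustrative assumptions, not figures from the IARPA program.
n = 4096                        # assumed problem size, in logical qubits/bits
gates = 10 * n ** 3             # assumed O(n^3) gate count, constant factor 10
logical_step = 1e-3             # assumed seconds per error-corrected gate step
seconds = gates * logical_step  # worst case: fully serial execution
print(f"{gates:.1e} gates -> {seconds / (86400 * 365):.0f} years, run serially")
```

Parallelism, better codes, and better algorithms shave orders of magnitude off numbers like that, which is exactly what much of the decade of work since has been about.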

Periodically, anti-quantum computing pundits arise. They used to be of the "it'll never work" variety; these days it's usually of the "it's decades and decades away from being practical" kind. Rather than getting involved in "is not!" "is too!" arguments, I figured I'd just write down, off the top of my head, some of the things that might happen in the next fifteen years, coming up to the 50th anniversary of Feynman and Deutsch:
  • 2020-2022: First feedforward in solid state (superconducting?) systems, and with that the first claim of fault tolerance for a distance 3 surface code logical qubit. Similarly, [[7,1,3]] claim in ion traps. Solving of small problems on NISQ continues, in chemistry, graph states, quantum walks, etc. Improvements in software tools better support hybrid quantum-classical algorithms such as QAOA.  Fidelity and scalability of QFT remains a nagging problem, worse on some platforms than others. Quantum s*acy field quiet, as hardware focuses on measurement, feedforward, and continued improvement in CNOT gates. First wave of startup failures or bargain basement acquisitions, as VCs understand the technology and market better and focus on fewer but more promising companies.
  • 2023-2024: First logical CNOT, using lattice surgery. Successful execution of graph and walk problems grows at better-than-hardware rates as compilers and the use of QAOA and other techniques improve. Better techniques for extracting true answers out of noisy results help all algorithms. QFT works well now, thanks to very low-level implementations rather than gate-based ones. Renewed, incontrovertible claims of s*acy. New, wiser round of quantum software startups. First non-superconducting, non-ion trap, semi-commercial system. A handful of companies claim to be using quantum computers in production computing systems, but nobody really takes them seriously.
  • 2025-2027: Error corrected systems start to open a serious gap in fidelity compared to physical qubits, but are still limited to 3-5 logical qubits and small code distances. NISQ optimization algorithms are starting to be interesting, but are limited in data set size due to I/O and gate counts for dealing with heterogeneity. NISQ superconducting systems are bumping up against die size constraints; control bus limitations are partially alleviated by cold-stage control hardware. Multi-chip demonstrations exist, but are too slow and low fidelity to be practical. Ewin Tang's bombardment of the quantum algorithm ramparts begins to slow as we finally begin to grasp what's going to work well and what's not.
  • 2028-2030: A 10-qubit QRAM appears, but it's specific to a particular technology and doesn't have an obvious path to scalability. Nevertheless, it's a game changer. Quantum volume of 10,000 is reached, an important psychological and marketing threshold. It proves, in practice, to be more useful for deeper circuits on fewer qubits, rather than the other way around, so technically we're still well within classical simulation range (see the arithmetic just after this list). Large numbers of companies claim to be using quantum computers in production computing systems, and people take them quite seriously, but they don't really exceed classical capabilities yet. Quantum systems are indisputably generating new science.
  • 2031-2035: The first serious hybrid-technology machines (deliberately discounting the use of photons here) appear, built from qubits sourced from multiple companies, another industry first. Multicomputer architectures, pioneered in practice in ion traps, become standard. SAT problems, not so different from those D-Wave originally proposed to solve, become all the rage as logical qubit counts reach 100. Quantum systems are indisputably solving problems important to commercial customers. As the first half century of quantum computing closes, classical supercomputing architects acknowledge that both Gustafson-Barsis's and Moore's Laws have stalled, apparently for good.
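
On that quantum volume point: using IBM's log-scale definition (quantum volume 2^n means n-qubit, depth-n model circuits pass the heavy-output test), a nice round 10,000 is only about thirteen qubits' worth of verified circuit, which a laptop simulates without breaking a sweat. A quick sanity check:

```python
import math

# Quantum volume sanity check, using IBM's definition QV = 2^n,
# where n is the largest square (n qubits x depth n) circuit that passes.
qv = 10_000
n = math.floor(math.log2(qv))        # ~13 qubits at depth ~13
amplitudes = 2 ** n                  # full statevector size at that width
print(f"QV {qv} ~ {n} qubits at depth {n}; "
      f"statevector of {amplitudes} amplitudes is trivial classically")
```
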
Remember, as America's foremost philosopher once noted, predictions are hard, especially about the future. With the American Physical Society's March Meeting coming up momentarily, it's especially risky, as there are often announcements there. Feel free to tell me I'm wrong about any individual point; I'll concede. I have very limited faith in all of this, but I do think the general direction is right.

(After I posted this, I got the following note from Peter McMahon about the history of hybrid algorithms. This early work deserves more attention than it gets!)

I wanted to bring to your attention some prior work that seems to be almost completely unknown in the QC community, but that brings the date of invention of hybrid algorithms for solving optimization problems to no later than 1999:

• "Quantum optimization" (Hogg, Portnov) https://doi.org/10.1016/S0020-0255(00)00052-9
• "Quantum optimization for combinatorial searches" (Trugenberger) https://doi.org/10.1088/1367-2630/4/1/326

I would argue that Eq. 1 of Hogg and Portnov’s paper, as well as their description of a hybrid approach to optimizing the phases at the top of Section 2.2, captures much of the core mechanism of QAOA, all the way back in 1999! The framing of applying evolution of the cost Hamiltonian isn’t quite there, but it appears in Eq. 5 of Trugenberger’s paper. So between the two of these papers, we have the invention^ of hybrid quantum-classical optimization algorithms and the insight to use an alternation of mixing/driving and cost-function-Hamiltonian evolution in the quantum part of the algorithm.

^ Maybe there’s an even earlier paper presenting this kind of algorithm that I don’t know about; I’d be curious to learn about it if there is!
(Thanks, Peter!) 
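
To make the structure Peter describes concrete, the whole alternation fits in a toy program. Below is a minimal sketch of a hybrid quantum-classical loop: a tiny MaxCut instance, a numpy statevector standing in for the quantum processor, alternating cost-Hamiltonian and mixing evolutions, and a classical optimizer adjusting the phases. The instance, the layer count, and the optimizer are all arbitrary choices for illustration, not anything from the papers above.

```python
# Minimal QAOA-style hybrid loop: illustrative only.
import numpy as np
from scipy.optimize import minimize

n = 3                                # toy MaxCut instance: a triangle
edges = [(0, 1), (1, 2), (0, 2)]
dim = 2 ** n

# Diagonal cost "Hamiltonian": C(z) = number of edges cut by bitstring z.
cost = np.zeros(dim)
for idx in range(dim):
    bits = [(idx >> q) & 1 for q in range(n)]
    cost[idx] = sum(bits[i] != bits[j] for i, j in edges)

X = np.array([[0, 1], [1, 0]], dtype=complex)

def apply_mixer(state, beta):
    """Apply e^{-i beta X} to every qubit (the mixing/driving step)."""
    u = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X
    psi = state.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(np.tensordot(u, psi, axes=([1], [q])), 0, q)
    return psi.reshape(dim)

p = 2                                # number of alternation layers (assumed)

def neg_expected_cut(params):
    gammas, betas = params[:p], params[p:]
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # |+>^n start
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * cost) * psi   # cost-Hamiltonian evolution
        psi = apply_mixer(psi, b)            # mixing evolution
    return -np.real(np.vdot(psi, cost * psi))  # classical loop maximizes <C>

rng = np.random.default_rng(7)
result = minimize(neg_expected_cut, rng.uniform(0, 1, 2 * p),
                  method="Nelder-Mead")      # the classical outer loop
print("best expected cut found:", -result.fun)  # optimum for a triangle is 2
```

Swap the triangle for a bigger graph and the simulated statevector for real hardware plus sampling, and you have the shape of the hybrid algorithms in the timeline above.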

3 comments:

  1. Hi, it is a very nice list!

    I have a short question/comment:
    you write:
    "First feedforward in solid state (superconducting?) systems, and with that the first claim of fault tolerance for a distance 3 surface code logical qubit."

    In my opinion, feed-forward has already been used successfully for superconducting circuits, also in the context of error correction; see for example:
    [1] https://www.nature.com/articles/nature12513
    [2] https://www.nature.com/articles/nature18949
    [3] https://www.nature.com/articles/s41534-019-0185-4
    just to mention a few.

    Also, I would argue that feedforward is not needed for the first claim of fault tolerance. Here Pauli frame updating should be enough. Do you agree?
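
    (To make concrete what I mean by Pauli frame updating: the classical controller records the decoder's corrections and reinterprets later measurement results through them, instead of applying physical gates. A toy sketch, my own illustration rather than code from any of the papers above:)

    ```python
    # Toy Pauli frame: record corrections classically instead of applying gates.
    from dataclasses import dataclass

    @dataclass
    class PauliFrame:
        x: bool = False          # pending X correction on this qubit
        z: bool = False          # pending Z correction on this qubit

        def record(self, pauli: str) -> None:
            """Decoder output that would otherwise be a physical gate."""
            if "X" in pauli:
                self.x ^= True
            if "Z" in pauli:
                self.z ^= True

        def readout(self, basis: str, outcome: int) -> int:
            """A pending X flips Z-basis results; a pending Z flips X-basis ones."""
            flip = (basis == "Z" and self.x) or (basis == "X" and self.z)
            return outcome ^ int(flip)

    frame = PauliFrame()
    frame.record("X")                    # noted, never applied to the qubit
    print(frame.readout("Z", 0))         # raw 0 is reported as corrected 1
    ```

    (The sketch omits propagating the frame through subsequent Clifford gates, which a real controller must also do.)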

    In any case, I agree that a distance-3 surface code is a critical next step and on the near-term horizon (we do have a distance-2 surface code now, arXiv:1912.09410) and I do agree that we need to make a feed-forward architecture compatible with quantum error correction (as in [3]) :-)


  2. Thanks, Christian. (When did you post this? I didn't get a notification, and blasted Blogger here doesn't seem to give me a date.)

    I've read [2], but not [1], and [3] looks like a nice new paper. There are also, of course, claims of demonstrations in ion traps. So far, my impression is that the existing experiments work on only one axis (X or Z) but not both, and they also involve a fair amount of post-selection. Google also had one "this is _almost_ QEC" paper a couple of years ago that was very nice. The measurement integration times are high compared to gate times and memory lifetime, so it's still difficult to use in a general setting.

    So, I think those are important steps, but don't quite count as feed-forward that is useful in algorithm execution, or as full QEC yet.

    But we are definitely getting there...

  3. I'd like to add one more important thing here: the global Quantum Computing Market is expected to exceed USD 1,853.3 million by 2026, with a CAGR of 32% in the given forecast period.
