Saturday, May 28, 2022

An Early, Relatively Complete Quantum Computer Design

 Based on a Twitter thread by yours truly, about research with Thaddeus Ladd and Austin Fowler back in 2009.

In the context of an ongoing #QuantumArchitecture #QuantumComputing project, today we reviewed some of my old work. I believe that in 2009 this was the most complete architectural study in the world.

We started with a technology (optically controlled quantum dots in silicon nanophotonics), an error correction scheme (the then-new surface code) and a workload/goal (factoring a 2048-bit number).

We considered everything.

Optimized implementation of Peter Shor's algorithm (at least the arithmetic, the expensive part). (More recent work by Archimedes Pavlidis and by Craig Gidney goes beyond where we were in 2009.)

How many logical qubits do we need? 6n, which works out to about 12K for n = 2048.

How many logical Toffoli gates? A LOT.

So how low a residual logical gate error can we allow?

Given that, and a proposed physical gate error rate, how much distillation do we need? How much should we allow for "wiring", channels to move the logical qubits around?

We ended up with 65% of the space on the surface code lattice for distillation, 25% for wiring, and only 10% for the actual data.

From here, we estimated the code distance needed. High, but not outrageous, in our opinion. (More on that below.)
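To make that chain of reasoning concrete, here is a rough back-of-the-envelope sketch in Python. The Toffoli-count scaling, the physical error rate, the threshold, and the prefactor are illustrative placeholders rather than the numbers from the paper, but the structure of the estimate is the same: the gate count sets the error budget, and the error budget sets the code distance.

```python
# Rough back-of-the-envelope resource estimate.  The Toffoli-count scaling,
# the physical error rate, the threshold, and the prefactor A are all
# illustrative placeholders, not the values from the paper.

n = 2048                       # bits in the number being factored
logical_qubits = 6 * n         # ~12K logical data qubits, as above

# Assume the modular exponentiation needs on the order of n^3 Toffolis
# (schoolbook adders); the paper's optimized arithmetic differs.
toffoli_count = 40 * n**3

# For the whole computation to succeed with high probability, the residual
# logical error per gate must be well below 1/toffoli_count.
target_logical_error = 0.1 / toffoli_count

# Heuristic surface-code scaling: p_L ~ A * (p / p_th)^((d+1)/2)
A = 0.1
p = 1e-3        # assumed physical gate error rate
p_th = 3e-3     # assumed threshold, i.e. hardware only ~3x below threshold

d = 3
while A * (p / p_th) ** ((d + 1) / 2) > target_logical_error:
    d += 2      # distances are conventionally odd

print(f"logical qubits:            {logical_qubits}")
print(f"Toffoli count (assumed):   {toffoli_count:.2e}")
print(f"target logical error/gate: {target_logical_error:.2e}")
print(f"estimated code distance:   d = {d}")
```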

With micron-sized dots and waveguides, we had already grokked that a multi-chip system was necessary, so we knew we were looking at at least a two-level system for the surface code lattice, with some stabilizers measured fast and others slow.

We worked through various designs and wound up with one that uses several different types of connections between neighbors. See the labels on the right ("W connection", "C connection", etc.) in this figure from the paper. This is the system design.



Turns out each type has a different connection rate and a different fidelity, so we need purification on the Bell pairs created between ancillary dots before we can use them for the CNOT gates in stabilizer measurements. This could mean that portions of the lattice run faster and others run slower.
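For intuition about that trade-off, here is a minimal sketch of recurrent purification under a simplified single-error-type model. The protocol and the raw fidelities below are illustrative, not the specific connection types or purification scheme analyzed in the paper.

```python
# Minimal sketch of recurrent entanglement purification under a simplified
# single-error-type model (hypothetical numbers, not the paper's connection data).

def purify_round(fidelity: float) -> tuple[float, float]:
    """One round: consume two pairs of the given fidelity, return
    (output fidelity, success probability) for the surviving pair."""
    p_success = fidelity**2 + (1 - fidelity) ** 2
    new_fidelity = fidelity**2 / p_success
    return new_fidelity, p_success

def rounds_needed(f_in: float, f_target: float) -> tuple[int, float]:
    """Rounds of purification to reach f_target, plus the rough cost in raw
    Bell pairs per purified pair (pairs consumed / expected successes)."""
    f, rounds, cost = f_in, 0, 1.0
    while f < f_target:
        f, p_success = purify_round(f)
        cost = 2 * cost / p_success   # two inputs per attempt, retry on failure
        rounds += 1
    return rounds, cost

# Example: different connection types start at different raw fidelities,
# so they need different numbers of rounds and hence run at different speeds.
for f_raw in (0.90, 0.95, 0.99):
    r, cost = rounds_needed(f_raw, 0.999)
    print(f"raw F={f_raw}: {r} round(s), ~{cost:.1f} raw pairs per purified pair")
```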

Oh, and an advance in this architecture that I think is under-appreciated: the microarchitecture is designed to work around nonfunctional dots and maintain a complete lattice. I have slides on that; sadly they didn't make it into the paper, but see Sec. 2.1.

Put it all together, and it's an enormous system. How big?

Six billion qubits.

So if you have heard somewhere that it takes billions of qubits to factor a large number, this might be where it started. But that was always a number with a lot of conditions on it. You really can't just toss out a number; you have to understand the system top to bottom.
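For a sense of where a number like that comes from, here is the order-of-magnitude arithmetic. The per-logical-qubit footprint constant is a guess on my part, and the paper's much more detailed accounting is what actually yields the six billion figure.

```python
# Order-of-magnitude sketch of how a billions-scale qubit count arises.
# The footprint constant c is a guess, not the paper's model.

n = 2048
logical_qubits = 6 * n        # ~12,288 logical data qubits
d = 56                        # code distance from the design table discussed below
data_fraction = 0.10          # 10% data, 65% distillation, 25% wiring

c = 2.5                       # assumed physical qubits per logical qubit ~ c * d^2
physical_per_logical = c * d**2

total_physical = logical_qubits * physical_per_logical / data_fraction
print(f"~{total_physical:.1e} physical qubits")
# ~1e9 with these placeholder constants; the paper's full accounting of the
# real design (ancillary dots and the rest) lands at roughly six billion.
```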

Your own system will certainly vary.

The value in this paper is in the techniques used to design and analyze the system.

You have to lay out the system in all its tedious detail, like in this table that summarizes the design.



That lattice constant in the table, btw, is one edge of a surface code hole, so the actual code distance is 4x that, or d = 56. We were assuming hardware only a factor of three below the surface code threshold, and aiming for a huge computation. These days, most people will tell you hardware needs to be 10x better than threshold for QEC to be feasible. If not, you end up with numbers like these.
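Reusing the heuristic scaling from the earlier sketch, you can see how strongly the distance depends on that margin below threshold; the constants are again illustrative, not fitted to the paper.

```python
# How the margin below threshold drives code distance, using the same
# heuristic p_L ~ A * (1/margin)^((d+1)/2) as the earlier sketch.
# Constants are illustrative, not fitted to the paper.

def distance_for(margin: float, target_logical_error: float, A: float = 0.1) -> int:
    d = 3
    while A * (1.0 / margin) ** ((d + 1) / 2) > target_logical_error:
        d += 2
    return d

target = 1e-13    # roughly "at most a small chance of failure over the whole run"
for margin in (3, 10):
    print(f"p_th/p = {margin:>2}: d = {distance_for(margin, target)}")
# With only a 3x margin you land with d in the 50s; at 10x it roughly halves.
```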

You can pretty much guess who did what. Austin Fowler (then in Waterloo, now at Google) did the QEC analysis. Thaddeus Ladd (then at Stanford with Yoshi Yamamoto, now at HRL) did the quantum dot analysis and simulation. I brought the arithmetic expertise and overall system view. We all worked super hard on the chip layout and how all the pieces fit together to make a system.

These things are beasts, with so many facets to the design problem that we need all hands to make them work!

