A paper titled "Operator Imprecision and Scaling of Shor's Algorithm," by Hill and Viamontes, appeared on the arXiv about ten days ago. Naturally, I snapped it up and read it quickly. Here's what I think. Take it with an entire shaker full of salt...
If you want to cut to the chase and keep your workload down, there's a really good reason to postpone reading the paper: it hasn't been refereed yet. If you work in the area, you might care enough to read it anyway (and might even get asked to referee); if you don't, the simplest form of triage is to wait and see what expert opinion says, then decide whether you want to agree with the experts or not :-).
Anyway, in one sentence, my position on Shor's algorithm:
- There are very good reasons for taking a Missouri "show me" attitude toward Shor's algorithm, but this paper probably does not change the arguments, and a variety of people much smarter than me have analyzed the algorithm in more detail and are pretty convinced it's going to work with acceptable scaling.
And, a one-sentence summary of the paper:
- Quantum error correction doesn't work.
To believe that Shor's algorithm can't be run in the real world, you have to believe either that the algorithm doesn't really scale, or that quantum error correction doesn't work. (Scott Aaronson reduces the argument further to, "Either the Extended Church-Turing Thesis is false, or quantum mechanics must be modified, or the factoring problem is solvable in classical polynomial time." Personally, I think he's leaving out important real-world cheats, such as "Shor has a bug" and "There was a gotcha in QEC that no one had spotted" and "Noise turned out to be more of a problem than we thought". But, in theory, he's right, and he's a theorist, so there you go :-).)
Taking the concerns in order, Shor first:
I'm not a mathematician, cryptographer, theoretical computer scientist, or physicist; I'm a computer systems guy who happens to be working in quantum computing. I have written a series of papers (including my Ph.D. thesis) on how to implement Shor's algorithm, though I am far from satisfied with the depth of my understanding of the interaction of the algorithm with either noise or systematic errors. See, for example,
my thesis or some of my other papers.
My reasoning for studying Shor's algorithm is the following: I am interested in quantum computer architectures, and I need a well-defined workload. Shor's is the most cleanly defined and potentially important (not to mention famous) of the existing algorithms. Moreover, Shor uses two very important building blocks, arithmetic and the quantum Fourier transform (QFT), that are likely to prove generally valuable, so studying their behavior is useful. I consider myself to have given "provisional assent" to the hypothesis that Shor's algorithm works with reasonable scalability, subject to the caveats in my thesis and other papers.
There have been a number of papers addressing the scalability of Shor's algorithm, and unfortunately, this paper doesn't really engage with any of those other arguments, as presented by Barenco, Fowler, and others. It references a few of them, but never says how its analysis differs from theirs, so it's impossible to thoroughly assess whether the authors understand what the arguments are, pro and con.
The success probability of Shor's algorithm is believed to decline with the length of the number being factored. The Barenco and Fowler papers arrive at different conclusions on the scalability of Shor, though both are some variant of sub-exponential (logarithmic or polynomial, actually, if memory serves, though it's been a while since I read the papers). They start from different assumptions about the mathematical relationships of, essentially, the parameters to the algorithm. The focus of both papers is the behavior of the system with perfect gates but an incompletely-run QFT, the "approximate QFT", which is generally considered the preferred form because it avoids exponentially precise gates (a sketch of what that means is below). The results seem related to the general problem of imperfections in Shor's algorithm, though.
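To make "exponentially precise" concrete: the exact QFT on an n-qubit register needs controlled-phase rotations as fine as 2*Pi/2^n, while the approximate QFT drops everything finer than a cutoff around k ~ log2(n) + 2. Here's a minimal sketch of the counting (mine, purely illustrative; the cutoff choice is the usual rule of thumb, not from any of the papers above):

import math

def qft_phase_angles(n, cutoff=None):
    # Controlled-phase angles acting on the first qubit of an n-qubit
    # QFT: 2*pi/2^k for k = 2..n. The exact QFT keeps all of them; the
    # approximate QFT (AQFT) drops every rotation finer than the cutoff.
    return [2 * math.pi * 2.0**(-k) for k in range(2, n + 1)
            if cutoff is None or k <= cutoff]

n = 1024  # register size, for illustration
exact = qft_phase_angles(n)
approx = qft_phase_angles(n, cutoff=int(math.log2(n)) + 2)
print(len(exact), min(exact))    # 1023 rotations, the smallest ~3.5e-308 radians
print(len(approx), min(approx))  # 11 rotations, none finer than 2*pi/2^12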
Anyway, on to the paper:
I have read it, but not yet studied it in any great detail. I don't know either of the authors personally, but the second author has done good work; he is certainly no dummy.
The argument is pretty straightforward, arguably naive. That doesn't mean it's wrong, but there are a lot of assumptions and simplifications in the work, and they need to be examined carefully.
Essentially, what they have done is a simple simulation of an imperfect inverter (a NOT gate), extrapolated to the performance of a complete system. They suggest that logical states drift linearly away from the desired state, given a certain over-rotation of the physical gates. That is inherently plausible, but I think they have underestimated the ability of quantum error correction (QEC) to suppress those errors, and they haven't accounted for ordinary noise. A toy illustration of the drift follows.
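For intuition on the linear drift, here's a toy example (my sketch, not their simulation): repeatedly apply a NOT gate that systematically over-rotates by epsilon. Coherent errors add in amplitude, so after n gates the accumulated angle error is n*epsilon and the infidelity grows as sin^2(n*epsilon/2):

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

def rx(theta):
    # Rotation about the x axis by angle theta.
    return np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * X

eps = 1e-3                     # systematic over-rotation per gate
imperfect = rx(np.pi + eps)    # the "imperfect inverter"
ideal = rx(np.pi)

state = np.array([1, 0], dtype=complex)
target = state.copy()
for n in range(1, 1001):
    state = imperfect @ state
    target = ideal @ target
    if n % 250 == 0:
        infid = 1 - abs(np.vdot(target, state))**2
        # matches sin(n*eps/2)**2: the angle error drifts linearly in n
        print(n, infid, np.sin(n * eps / 2)**2)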
(One would hope that the first author understands the impact of measurement on the system, but it's not clear from the rather terse paper that he does.)
See Sec. 2.3.5 (p. 73 or 56, depending on which numbering you're looking at) of my thesis:
http://arxiv.org/abs/quant-ph/0607065
I think, if the argument in my thesis is right (which is certainly open to question), that systematic over-rotation is suppressed to O(sin^2(epsilon)), where epsilon is the amount by which all of the gates over-rotate.
So, a 1% error in rotation (a Pi + 2*Pi/100 rotation in place of Pi) would result in a Pi + 2*Pi/10000 logical rotation, I think. The *second* level of QEC would then bring that down to a 10^-8 logical error, if I've done it right -- the double-exponential suppression. A physical over-rotation of 10^-3 would likewise become ~10^-6 after one layer and ~10^-12 after two. Based on this analysis, then, QEC is capable of suppressing systematic errors quite well; a quick numerical check is below.
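As a sanity check on that arithmetic (nothing deeper than repeated squaring; the assumption that each concatenation level takes a fractional over-rotation f to roughly f^2 is the thesis argument, not established fact):

def logical_overrotation(f, levels):
    # Fractional over-rotation surviving `levels` of concatenated QEC,
    # assuming each level squares the fractional error (the O(sin^2)
    # suppression argued in Sec. 2.3.5 of the thesis).
    for _ in range(levels):
        f = f**2
    return f

print(logical_overrotation(1e-2, 1))  # 1e-04: Pi+2*Pi/100 -> Pi+2*Pi/10000
print(logical_overrotation(1e-2, 2))  # 1e-08
print(logical_overrotation(1e-3, 2))  # 1e-12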
The argument in my thesis is probably naive, but QEC has a *lot* of work behind it; if you are interested, there are tons of references in my thesis and elsewhere. Moreover, last December there was an entire conference dedicated to the topic.
Returning to Shor:
It's not possible to simulate the system for large enough cases to really confirm the validity of the derived expressions, and we're still a number of years from a system large enough to experimentally prove or disprove the proposition. Personally, I don't think the analysis covering noise, systematic errors, and the approximate QFT together is complete yet. Most of the theoreticians are pretty satisfied, but we all know about the difference between theory and practice.
In summary, the paper in question (arXiv:0804.3076) would have broad implications for the ability of quantum computers to maintain state accurately enough to provide real-world acceleration over classical computers. It's a very legitimate concern, which gets raised occasionally by very smart people, and usually ends in a stalemate with the preponderance of quantum information people on the side of "don't worry, QEC works". But it's not yet clear that this paper really pushes the argument forward in either direction. The paper is also not yet refereed, and it will be interesting to see how it changes as a result of any referee's comments.
Useful references:
@Article{barenco:approx-qft,
  author  = {Adriano Barenco and Artur Ekert and Kalle-Antti Suominen and P\"aivi T\"orm\"a},
  title   = {Approximate Quantum {Fourier} Transform and Decoherence},
  journal = {Physical Review A},
  year    = {1996},
  volume  = {54},
  pages   = {139--146}
}
@Article{fowler04:_limited-rotation-shor,
  author  = {Austin G. Fowler and Lloyd C. L. Hollenberg},
  title   = {Scalability of {Shor's} algorithm with a limited set of rotation gates},
  journal = {Physical Review A},
  year    = {2004},
  volume  = {70},
  pages   = {032329}
}
@Article{devitt-2006-6,
  author  = {Simon J. Devitt and Austin G. Fowler and Lloyd C. L. Hollenberg},
  title   = {Robustness of {Shor's} algorithm},
  journal = {Quantum Information and Computation},
  year    = {2006},
  volume  = {6},
  pages   = {616},
  url     = {http://www.citebase.org/abstract?id=oai:arXiv.org:quant-ph/0408081}
}
(There are more, including some very early ones from before QEC was invented. Homework for the interested reader.)
Anyway, I hope this at least short-circuits any rush to burn Peter Shor in effigy. He's way too smart and sweet for that.
2 comments:
The paper is rubbish as far as I can tell. They have rediscovered the fact that you need fault-tolerant circuits for error correction. Then they show that if you don't use such fault-tolerant circuits you fail. Well, yeah, duh.
I am not sure about your argument on over-rotation by a small parameter. In fact, any error in your control Hamiltonian will show up at second order in your fidelities anyway, the same as non-over-rotation errors in the Hamiltonian [arXiv:0803.4268, by others and this humble commenter, might be slightly relevant]. But I cannot rule the whole thing out, since if you are mixing over-rotations/unitary errors with stochastic errors, you will be on very slippery ground combining two models of errors. Knill, I think, has a bunch of simulations on that which would probably fill three books, and he's also got twirling and randomization to help.
The other problem with over-rotations is that unless you really go after them, they can be quite big. This is why NMR people [among others] so fervently use composite pulses, which basically correct the systematic error due to the off-center positioning of their sample, or off-resonance errors.
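To illustrate: the BB1 sequence (Wimperis) replaces a single pulse with four, chosen so the systematic amplitude errors cancel, leaving a residual of only O(eps^3) in the rotation. A toy sketch, assuming the same fractional over-rotation on every pulse (the numbers are illustrative):

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

def rot(theta, phi):
    # Rotation by theta about the in-plane axis at azimuthal angle phi.
    axis = np.cos(phi) * X + np.sin(phi) * Y
    return np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * axis

def infidelity(U, V):
    # Gate infidelity 1 - |Tr(U^dagger V)|/2, ignoring global phase.
    return 1 - abs(np.trace(U.conj().T @ V)) / 2

theta = np.pi       # target: a pi (NOT) pulse about x
eps = 0.01          # 1% systematic amplitude error on every pulse
scale = 1 + eps
ideal = rot(theta, 0)

bare = rot(theta * scale, 0)   # the uncorrected over-rotated pulse

# BB1: theta_0, pi_phi, 2pi_{3phi}, pi_phi in time order,
# with phi = arccos(-theta / (4*pi)).
phi = np.arccos(-theta / (4 * np.pi))
bb1 = (rot(np.pi * scale, phi) @ rot(2 * np.pi * scale, 3 * phi)
       @ rot(np.pi * scale, phi) @ rot(theta * scale, 0))

print(infidelity(ideal, bare))  # ~1.2e-4
print(infidelity(ideal, bb1))   # many orders of magnitude smaller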