Saturday, October 16, 2021

Quantum Ponzi

I have seen bubbles and winters(*) in AI, Internet, computer architecture, and even the PC. (And I've seen attack ships on fire off the shoulder of Orion.) Now, some argue, we are in a quantum bubble that will inevitably pop and lead to a prolonged (and deserved) quantum winter. I agree that there is excessive hype making the rounds, but some of the current reaction to that has its roots in the basic academic/industrial culture clash. There are always those in academia who dislike the charlatanism and shamanism of capitalism, but it is inevitable. Our job as academics is first, of course, to advance the science itself, and (also first) to nurture the next generation of talent, but second, to convey the scientific results to colleagues, funders and investors in a way that tempers expectations to minimize the depth of the winter when it comes.

In July 2021, Prof. Victor Galitski of the University of Maryland's Joint Quantum Institute posted a long anti-quantum hype piece on LinkedIn. I disagree with the title not at all. I disagree with some elements of the contents just as emphatically as I agree with the title. Usually I just let such criticism slide (or post a vision for the future), but this time I felt I should respond, both because this one has gotten some airplay and because Galitski's own JQI position seems to lend it some weight of authority. Plus, he brings up some good points worth responding to thoughtfully, and some naive points that I'll endeavor to respond to politely. And finally, some people I admire have also made positive comments about this, so I wanted to counterbalance their thoughts. Although this is structured around Galitski's criticism, I hope the points I make will resonate farther than just the single article.

Let me attempt to summarize his points, first the major ones, then the smaller ones, rather than in the order he presents them.

  1. There is a lot of unjustified hype around QC these days.
  2. The hype comes from unsubstantiated claims about a. the present and near future of quantum hardware, and b. the long term potential of quantum to change the world.
  3. These claims are coming from a. pure at heart researchers who have been corrupted by capitalism, and b. already corrupt capitalists with no clue, who are only in this to make a buck. The latter group includes VCs and classical computing people.
  4. These claims are resulting in an investment bubble that will inevitably pop.
  5. That bubble is draining away talent, both seasoned and potential, that rightly belongs in academia.
  6. As a corollary, the work done in industry (both large and small) is less important than the work done in academia.
  7. The popping of that bubble will poison investment for some time to come.
  8. Only physicists are qualified to talk about quantum.
  9. The barrier to entering the software industry is low, therefore software is easy and never gets harder, and therefore is unworthy of respect.
  10. Quantum systems are vulnerable to security problems.
  11. Today's quantum systems can be fully simulated, therefore today's cloud accessible quantum systems may be fraudulent.
Put bluntly, this is more than legitimate skepticism of quantum hype; it is a pinched, narrow view of the world.
Let me try to go through these points roughly in order:

1. hype exists. Hoo, boy, I don't think anyone would disagree with this.

2. hardware and software aren't good. In his second paragraph, Galitski manages to both diss the fundamental importance of quantum algorithms and pooh-pooh the state of hardware. Despite being at JQI (which I suppose has a broader remit than just quantum information), he states, rather bluntly, that none of the existing algorithms will truly revolutionize our world, and by implication that such a revolution is unlikely to ever be forthcoming. I disagree. It is no longer anything more than obstinacy to refuse to recognize the profound shift that quantum information represents at the theoretical level. It is fully as fundamental as the shift from analog to digital information. When and how that will affect daily practice is the question at hand.

It is true that very few algorithms have been carefully evaluated to determine what machines would need in order to execute them on problems of commercial or scientific interest. But that number is not zero; it's perhaps ten or twenty, depending on how you count, and yes, the fidelity and resource demands often come out far higher than we naïvely hope at first. Shor's algorithm was among the first so evaluated; recently, chemistry and finance have been getting the treatment. Ultimately, we need to go through the entire Quantum Algorithm Zoo, line by line, and identify the smallest problem that's infeasible classically, and therefore where QCs need to be in technical development to generate truly new results (as well as figure out which of those algorithms have real-world impact, and which are only of theoretical interest). However, the existence of hybrid algorithms complicates that picture; we may well reach the point where quantum computers do useful sub-problems for us before they truly, definitively exceed classical supercomputers.
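As a toy illustration of what the crudest version of such an evaluation looks like, here is a back-of-envelope sketch in Python. The constants are loud simplifications I've chosen for illustration (a Beauregard-style circuit needs roughly 2n+3 logical qubits, and I assume a flat 1,000 physical qubits per logical qubit for error correction); they are not the careful numbers from the literature's detailed analyses.

```python
# Back-of-envelope resource sketch for Shor's algorithm.
# All constants are rough, illustrative assumptions, NOT careful estimates.

def shor_resources(n_bits: int, phys_per_logical: int = 1000) -> dict:
    """Crude resource sketch for factoring an n_bits RSA modulus.

    Assumes a Beauregard-style circuit using roughly 2n+3 logical qubits,
    and a flat error-correction overhead of phys_per_logical physical
    qubits per logical qubit (both loud simplifications).
    """
    logical_qubits = 2 * n_bits + 3
    physical_qubits = logical_qubits * phys_per_logical
    return {"logical_qubits": logical_qubits,
            "physical_qubits": physical_qubits}

# Factoring RSA-2048 under these assumptions: ~4,099 logical qubits,
# ~4.1 million physical qubits.
print(shor_resources(2048))
```

The serious versions of this exercise account for gate fidelities, code distance, magic-state factories, and runtime; the point here is only that even crude arithmetic puts useful Shor far beyond today's hundred-qubit devices, which is exactly why this evaluation work matters.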

Data centers today consume about 1% of the world's generated electricity (and still growing), and Haber-Bosch process manufacture of agricultural fertilizer another 1% of all energy. The logistics and transportation industries consume even larger amounts of energy, and optimizing them is an enormous computational task. Both specific computational results and the general deployment of quantum computers may impact this energy landscape, but it is incumbent upon us to make that story increasingly concrete. This is very much an engineering problem, and requires incorporating a lot of details about the machines to be used; it's much more than a $O(\cdot)$ problem.

Classical supercomputers are, in fact, an interesting point of comparison. The fabrication of quantum computers benefits from classical VLSI technology, and their operation requires a lot of supporting classical computation. More importantly, the success of classical digital computers is so tremendous that quantum computers have a very tall hill to climb before surpassing them. Conversely, classical computers are facing truly fundamental problems: working at the atomic scale, and dealing with heat. The former is a consequence of Moore's Law, the latter of the end of Dennard scaling. Current transistors are only a few tens of atoms across, and we don't know how to make transistors out of anything smaller than an atom. The heat problem has a solution, but it will require major reengineering. (See my paper, Q's C problem, C's Q problem.) Quantum computing offers partial solutions to these problems, both through its physical technological contributions and through its potential to attack certain classes of computational problems, especially those with modest amounts of state but exponential growth in the interesting state space. So, quantum computers still have a long way to go, but they are both desirable and necessary.

3. hype is corrupted, unqualified or both. Wow, the nose-in-the-air ivory tower attitude here is high. "the researchers are forced to quit activities, they are actually good at and where they could have made real impact, and join the QC hype", we are told. For more on this, see points 5 and 8, below.

4. this is a bubble. This is perhaps Galitski's most important point. Growth in investment is absolutely necessary for the field to expand beyond its academic roots and create an industry, but the media hype and current ready availability of VC funds means that not all investment is wise. The way to improve the quality of investment is experience and education. Some of these startups will fail, for sure; some should never be invested in in the first place. Want to make a difference here? Engage with VCs and help them learn and make wise decisions.

5. brain drain.  Eating our seed corn is definitely a problem, largely self-correcting, known since the 1990s or before. There are plenty of reasons to dislike Silicon Valley, but overall the balance (and sometimes tension) between government labs, government funded university research, corporate funded research both at universities and in its own labs, industrial development and startups is fundamentally healthy. (Though the US, EU, JP, KR, CN, SG, and AU models differ rather dramatically.) There is a valid and serious issue of how to best manage this (or to let it operate without intervention), but Galitski isn't making an argument on that topic, just lamenting the movement of people into other areas.

6. industrial work isn't good, and is mostly less than fundamental. I think this is so blatantly wrong (or at least elitist: "my problems are the only ones worth solving") that it hardly needs refuting. As I noted above, we need to go through the available algorithms and figure out which are industrially relevant; that is an engineering activity sitting right where people are poised to leap into industry, making their systems, algorithms and talent useful outside the confines of their own laboratories. That tech transfer is among the most important activities of all, even if there is a lot of skepticism about how well it really works out for universities.

The whiff of success, and with it the possibility of strategic advantage and riches (including university IP licensing) is leading to increased friction within the system, imposed by governments and corporate agreements, impeding flow of people and ideas through restrictive agreements and import/export paperwork and restrictions. Of the two, the government-imposed limits worry me more, because they can't be gotten around.

7. popping bubbles are bad. Galitski enumerates his two main points: that the current investment scene is a Ponzi scheme, and that the lure of money is drawing the best people out of academia and into the nascent industry. (He lists a third point, that hype is bad for science, but here he seems to primarily mean as a consequence of the first two points.)

In a true Ponzi scheme, returns to early investors are paid out of the capital contributed by new ones. This pyramid or tree structure depends on continued exponential growth in the pool of willing investors, and so collapses when that pool dries up, with the last round of investors left holding the bag.

I don't think a bubble is the same thing as a Ponzi scheme. Moreover, if we manage it well, investment in quantum computing will grow wiser and more rational. In that sense, criticism and discussion of irrational investment and building realistic expectations is welcome.

It does puzzle me why Galitski cares at all, since he apparently thinks there is little of value in quantum computing altogether. "To be sure, there are gems," he says, but there is little if anything positive in his take.

8 & 9. quantum belongs to the physicists.

To really see Galitski's opinion of the tech industry as a whole, it's worth quoting him:

A successful company in the "quantum technology space" can not pop up like Facebook or TikTok or a similar dumbed down platform, based on a code written by a college drop out. What's needed is years of education, work, and dedication. But what's going on is that there is an army of "quantum evangelists," who can't write the Schrödinger equation[.]

"You can't QC if you don't Schrödinger" smacks of elitism, but I suppose that's a point of view with moderately broad support in the community. (Heck, of course an author like Galitski thinks you should do a lot of QM before you do QC.) Personally, I can say

$i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle = H|\psi(t)\rangle$

with the best of them, but -- and this will elicit gasps -- I don't think you need to do that in order to do QC. In fact, I think it misses the point if you want to develop software; the skills you need are very different. (See my quantum computer engineer's bookshelf.) I'd be more inclined to say you can't QC if you don't sashay the Fourier. Finding the interference patterns that drive interesting quantum algorithms will require creativity, math, and perhaps geometric thinking; one-dimensional wells, the ultraviolet catastrophe and perturbation theory can be left for (much) later.
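A minimal sketch of what "sashaying the Fourier" buys you, in plain NumPy rather than any quantum SDK: prepare an eight-amplitude state with period 2, apply the (unitary) discrete Fourier transform, and watch the probability concentrate on multiples of N/r. This interference pattern is the heart of Shor's period finding; note that no Schrödinger equation appears anywhere. (NumPy's FFT uses the opposite sign convention from the physicists' QFT, but the measurement probabilities are identical.)

```python
import numpy as np

# A state over N = 8 basis labels with amplitude on every r = 2nd label.
N, r = 8, 2
state = np.zeros(N)
state[::r] = 1.0
state /= np.linalg.norm(state)          # normalize: four amplitudes of 1/2

# The quantum Fourier transform on 3 qubits is just the unitary DFT matrix.
qft = np.fft.fft(np.eye(N)) / np.sqrt(N)
out = qft @ state

probs = np.abs(out) ** 2
# Probability piles up on indices 0 and 4 = N/r; measuring reveals the period.
print(np.round(probs, 3))
```

Running this prints probability 0.5 on indices 0 and 4 and zero elsewhere; a measurement yields a multiple of N/r, from which the period r is recovered. That kind of reasoning, not solving one-dimensional wells, is the daily work of quantum algorithm design.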

It's not clear which tech industry college dropout he has in mind; certainly there are a lot to choose from. There are even a lot to choose from if you restrict your list to those whose products have a mixed effect on society as a whole. It is true that it is possible to begin a large classical project with almost no investment; the barrier to entry is low. That is largely seen as a plus, rather than a minus, across the industry. But being dismissive of the amount of investment of time and brainpower, and the actual intellectual innovation and research it takes to reach the scale of global impact is foolish.

Fundamentally, it is important to recognize that there are a lot of really smart people in the world who aren't physicists, and some of them are trying to figure out how to deploy quantum computers (and quantum networks) outside of the physics laboratory. There are hardware engineers, software engineers, and business people who are learning. They need the room, time, respect and support to make this happen.

I have spent quite a bit of time with people in Japan, the U.S., and other countries who started with zero clue about quantum but are starting companies. Some of them start out roll-your-eyes clueless, and yes, most of those will go down in flames. Others, however, will surprise you. Through hard work and a willingness to study, they are in fact learning. Ultimately, they will build or buy a clue, or go out of business.

Yes, it would be better if they weren't a drain on resources (money and people) and reputation while acquiring or failing to acquire their clue. But over time, those doing the evaluation (VCs and the general public) will themselves become more knowledgeable and sophisticated. Personally, I would rather they did that with our blessing and our support rather than without.

10. insecure systems. I have no doubt that today's cloud-accessible quantum systems have security vulnerabilities. All computer systems have them. It's a tenet of our industry. I don't understand why this is relevant to Galitski's larger point.

11. fraud! Because today's systems could be fully simulated, there might be fraudulent companies out there, some Quantum Theranos. Yeah, I suppose that's possible. "Fake it 'til you make it." Faking quantum computers is easy; faking quantum computer development is hard. You think investors aren't going to come look into the labs? You think they aren't going to expect to see dilution fridges, racks of FPGA boxes, even lines of FPGA source code? And, over the next few years, results of calculations that can't be simulated? Especially in a post-Theranos atmosphere? Due diligence is always necessary (and I have seen it go wrong), but I don't think this is a valid point for criticizing the nascent industry.
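For concreteness, here is roughly how easy the "faking" part is: a complete two-qubit statevector simulation producing a Bell pair fits in a dozen lines of NumPy. This is a toy sketch, but the principle scales: the state is just a vector of 2^n complex amplitudes and gates are matrix multiplications, which is exactly why brute-force simulation stops being feasible somewhere past 40-50 qubits, and why results beyond that scale can't be faked this way.

```python
import numpy as np

# Gates as matrices; the full 2-qubit state is a length-4 complex vector.
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)   # start in |00>
state = np.kron(H, I) @ state                   # Hadamard on qubit 0
state = CNOT @ state                            # entangle -> Bell state

probs = np.abs(state) ** 2
print(np.round(probs, 3))    # half |00>, half |11>: [0.5, 0, 0, 0.5]
```

Any "quantum cloud" serving only a handful of qubits could be backed by exactly this kind of code, which is why due diligence has to look at the hardware, not just the API.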


Overall, I find Galitski's criticism to have a few valid points; we all agree that hype will result in negative effects for the community as a whole as a "reality correction" sets in. But -- and perhaps I'm being too sensitive here -- I read his criticism as coming from a deep misunderstanding and dislike of the tech industry, and skepticism not just about the current quantum frenzy but more deeply of the value of quantum computing itself. I disagree.

We want to avoid the sheer silliness of the dot com bubble, its worst excesses on domain names and business models. At the same time, we want to avoid the prolonged AI winters in which too few smart people and too few research dollars entered the field. (Keeping in mind that, despite its demonstrated, thrilling successes, we might be in a time of over-exuberance for machine learning, the current favored model of AI; studying its successes and excesses carefully would be instructive for the future of quantum computing.) Let's all be responsible and realistic about the amount of work to be done, but maintain our optimism and faith in the long-term vision.

To quote myself, we are in the time of Babbage trying to foresee what Knuth, Lampson and Torvalds will do with these machines as they mature. Let's do it.

Onward and upward!

(*) What a mixed metaphor! Can we have springs and renaissances, too? Or at least some explanation of how a bubble popping results in winter?

5 comments:

Ed Gerck said...

In particular, and this is important today, it is well-known that a shadow has fallen over the race to detect a new type of quantum particle, the Majorana fermion, that could power quantum computers.

The Nature retraction is a setback for Microsoft’s approach to quantum computing, as researchers continue to search for the exotic quantum states.

While the evidence of elusive Majorana particle dies --- computing hope lives on, and is now made possible by using tri-state+ in software with standard binary hardware, while enabling the use of spintronic methods and other novel approaches using integers.

rdv said...

It's true that the Majorana fermion paper had to be retracted due to further analysis of the data. That particular approach has always been rather controversial. It requires a deeper level of math and physics than I possess, so I've always remained neutral on its potential.

Ion trap, transmon, photonic, quantum dot, nitrogen vacancy diamond all continue to make solid, incremental progress. One or more of these may well prove to be the technology of choice, or it may be a dark horse such as neutral atoms, or something no one has yet thought of. We'll see.

Jacq Romero said...

Thanks for writing this Rod. I agree that the community as a whole needs to be less "elitist". I also don't think you start with quantum computing by doing Schrodinger and calculating the energy levels of the Hydrogen atom (as my undergrad quantum physics course started). When I give public talks, I highly recommend Leonard Susskind's book (The Theoretical Minimum) to the interested audience. The discussion from spins to Boolean logic (or its failure!) is something I hope I read when I was younger.

rdv said...

Thanks, Jacqui!