Thursday, November 28, 2024

Can AI "Beat" Quantum Computing?

Google DeepMind recently hosted an in-person event featuring Demis Hassabis and other Nobel laureates, and a few days ago they posted the videos on YouTube. The hour-long video I watched is well worth the time; I haven't gotten around to watching the others yet.

Hassabis expresses some thoughtful opinions on quantum computing and the value of quantum computers relative to AI. He's a smart guy, and while he comes from a pro-AI point of view, he has been talking to the right people and learning a lot about quantum, so it's worth taking what he says seriously. From right around 10:00 to around 14:05, Hassabis says, in essence, that he thinks all of the important real-world problems have enough structure that they can be solved classically, and so the challenge is "just" to "pre-compute" a model that lets you find the solution, and AI will be good at that. Therefore, maybe in practice AI will outperform quantum computers at solving the problems we care about. (He doesn't claim anything that contradicts what's known about computational complexity classes; more about that below.)

It's a very interesting idea, and I have long held something much like it as a caveat in the back of my mind.

Interestingly, one of the things we have known about quantum computers for some time is that they gain at most a polynomial speedup over classical computers when there is NO structure to the problem. But that is an extremely general result.
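
To put rough numbers on that, here's a little back-of-the-envelope Python sketch (mine, not from the talk). For unstructured search over N items, the expected classical cost is about N/2 queries, while Grover's algorithm needs about (π/4)·√N, and the BBBV lower bound says that quadratic speedup is the best a quantum computer can do in that black-box setting:

```python
import math

# Illustrative query counts for finding one marked item among N
# possibilities when there is no structure to exploit (black-box
# search). Classically you expect to examine ~N/2 items; Grover's
# algorithm needs ~(pi/4)*sqrt(N) oracle queries, and that quadratic
# speedup is provably optimal in this setting (the BBBV bound).
for n_bits in (20, 30, 40):
    N = 2 ** n_bits
    classical = N / 2
    grover = (math.pi / 4) * math.sqrt(N)
    print(f"N = 2^{n_bits}: classical ~{classical:.1e} queries, "
          f"Grover ~{grover:.1e} queries")
```

A quadratic speedup is nothing to sneeze at, but it is a long way from the exponential advantage people usually have in mind.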

We also know that there are problems where quantum computers get provable exponential speedups for exact solutions, and that some problems are only worth solving exactly; an approximate answer is no answer at all. And so the question is, how big is that window where quantum's advantage is practical?
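
As a concrete example of the exponential side of that window, consider factoring. Here's another illustrative sketch (again mine, with constants and error-correction overhead waved away) comparing the sub-exponential scaling of the best known classical method, the general number field sieve, against the polynomial scaling of Shor's algorithm:

```python
import math

# Rough asymptotic comparison for factoring an n-bit number.
# Shor's algorithm runs in polynomial time (~bits^3 operations at
# leading order; constants and error-correction overhead omitted),
# while the general number field sieve (GNFS) is sub-exponential.
# Purely illustrative back-of-the-envelope arithmetic.
def gnfs_ops(bits: int) -> float:
    ln_n = bits * math.log(2)  # natural log of an n-bit number (approx.)
    c = (64 / 9) ** (1 / 3)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def shor_ops(bits: int) -> float:
    return float(bits ** 3)    # leading-order quantum gate count

for bits in (512, 1024, 2048):
    print(f"{bits}-bit: GNFS ~{gnfs_ops(bits):.1e} ops, "
          f"Shor ~{shor_ops(bits):.1e} ops")
```

The gap widens dramatically as the numbers grow, which is exactly why factoring is the poster child for quantum advantage. The open question is how many problems of real economic value look like factoring rather than like unstructured search.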

AI is pushing the boundaries of classical models for such systems, making its heuristics very effective on real-world problems. So will it push quantum computing into a tightly confined corner, left to play with toy problems and esoteric problems of no practical value?

I sent (a somewhat rougher form of) the above to my pal Suzanne Woolf, who thoughtfully responded:

I like this idea very much, just because I hadn't thought of it but now that you've articulated it, it's embarrassingly obvious. Problem-solving of any kind is so often a matter of framing the question so it can be answered with the tools you have available (or can invent-- didn't Leibniz and Newton invent calculus at more or less the same time?).

But color me skeptical about the limits on adaptability of AI, especially LLMs. I think the problems they have-- "hallucinations," GIGO, computational intensity-- are fundamental: you can give the genius toddler all of the dictionaries, encyclopedias, collections of literary works from the Bible and Japanese mythology to Shakespeare to Agatha Christie and Martha Wells, and daily newspapers across the world for months on end-- but it's still a toddler.

I'm not necessarily going to argue that judgment requires self-awareness, and that's a hard limit, but I could probably be persuaded.

Stay tuned...
