Tuesday, July 18, 2006

The Age of Spiritual Machines, Part I

[Part I of a long review. Part II is here.]

Ray Kurzweil is a salesman, and a True Believer. I just finished reading his The Age of Spiritual Machines, in which he shares his faith in neural networks, evolutionary algorithms, Drexlerian nanotechnology, and Moore's Law. That faith leads him to conclude that a "strong" AI (a true intelligence, more than just a program capable of passing the Turing Test) will emerge around 2019 (indeed, will be runnable on a single PC), and that progress will continue to accelerate until human and machine intelligences merge on the Net before the end of the twenty-first century (an event he calls the Singularity).

I have many problems with the book, though we agree broadly on fundamental principles -- I'm a believer in strong AI. If carbon-based matter can think, I see no reason why silicon-based matter can't, no reason to believe that we can't build it, and no reason to believe it won't improve over time. But that's a very far cry from agreeing with the major themes, let alone the details, of Kurzweil's book.

The first, and biggest, problem is his Law of Accelerating Returns. Henry Adams was mulling a similar concept for human society a hundred years ago, but Kurzweil goes far beyond Adams (whom he doesn't appear to cite, though maybe I missed it; in general the footnoting in the book is good, but the prior literature, including science fiction, is certainly vast). He asserts that the evolution of the Universe itself has the creation of intelligence as a goal, and that evolution runs at an ever-accelerating, unstoppable pace. He treats this as some sort of vaguely-defined physical law, which I find implausible and poorly supported, at best. (Perhaps he has a more technical argument in a paper somewhere? After all, this is a pop "science" book.) He pays a bit of lip service to punctuated equilibria (misreading Gould, in my opinion) and to the possibility of catastrophic societal meltdowns, but doesn't really put much stock in either. Nor does he deal with the fact that the dinosaurs seemed quite comfortably in control of the planet until catastrophe befell them -- with no paleontological evidence that they needed intelligence to maintain their dominance, or indeed that their evolution over much of their dominant period truly constituted "progress" as we would define it.

Likewise, Kurzweil inflates Moore's Law into some sort of supernatural phenomenon. He argues that computational power -- starting with mechanical calculators, continuing through the end of the nominal VLSI-relevant Moore's Law in 20-30 years, and then on through some ill-defined nanotech computational substrate -- will not merely stay on Moore's Law but keep accelerating: the performance-doubling time itself will continue to shrink. While his twentieth-century chart is fascinating, I doubt very much that some fundamental principle is in evidence, or that the rate of computation will continue to advance until we are computing with individual quarks. Kurzweil mentions S-curves and the end of exponential growth, but simply has faith that we will find some way around it -- that as each individual S-curve begins to tail off, another is waiting in the wings to pick up the baton and run.
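To make the shrinking-doubling-time claim concrete, here's a minimal sketch (my own illustrative numbers, not the book's) contrasting plain fixed-doubling growth with growth where each doubling takes less time than the last:

```python
def capacity(years, doubling_time):
    """Plain Moore's-Law growth: capacity multiplier with a fixed doubling time."""
    return 2 ** (years / doubling_time)

def capacity_accelerating(years, first_doubling=2.0, shrink=0.9):
    """Kurzweil-style growth: each doubling takes `shrink` times as long as the
    last. Note the doubling times sum to first_doubling / (1 - shrink) -- 20
    years with these numbers -- past which the model diverges: infinitely many
    doublings in finite time, which is the Singularity claim in miniature."""
    t, cap, dt = 0.0, 1.0, first_doubling
    while t + dt <= years:
        t += dt
        cap *= 2.0
        dt *= shrink
    return cap

print(capacity(15, 2.0))          # fixed 2-year doubling: 2**7.5, about 181x
print(capacity_accelerating(15))  # shrinking doubling time: 8192x (13 doublings)
```

The gap between the two widens explosively as you approach the point where the doubling times sum out -- which is why accepting or rejecting the shrinking-doubling-time premise is the whole ballgame.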

Kurzweil spends a few pages discussing quantum computing, and while it's not very good, it's also not terrible for a layman's understanding circa 1998. He does conclude (correctly, IMHO) that quantum computing is likely to be a special-purpose tool, rather than a true replacement for all computation.

Kurzweil has worked on voice recognition. I don't dispute that he dictated the bulk of Spiritual to a voice recognition system, but the assertion that keyboards would practically disappear by 2009 must have seemed a reach even in 1999. Likewise, it seems to me that he has substantially oversold the capabilities -- both the contemporary impact and the future breadth of applicability -- of neural nets and evolutionary algorithms. I have a little experience (more as a user than a developer, in collaboration with another researcher) with both evolutionary algorithms and neural nets. In my experience, they take a lot of care and feeding, and getting them to scale reasonably with the problem size is difficult; they tend to need fairly structured guidance, rather than simply being turned on and let go. Let me hasten to add that I'm a believer in the value of these technologies -- but they are certainly not yet some silver bullet that allows us to dispense with understanding problems ourselves before instructing a computer how to solve them for us.
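Even a toy example shows what "care and feeding" means in practice. This is my own minimal genetic-algorithm sketch (nothing from the book): it maximizes the number of 1-bits in a string, and even that trivial problem requires us to hand the algorithm a fitness function, a population size, a mutation rate, and a selection scheme -- structured guidance at every turn.

```python
import random

def evolve(bits=32, pop_size=50, generations=200, mutation_rate=0.02, seed=0):
    """Toy genetic algorithm on the 'count the 1s' problem (OneMax)."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)  # WE define what "good" means; the GA doesn't
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection: hand-chosen selection pressure (size 3)
            return max(rng.sample(pop, 3), key=fitness)
        nxt = []
        for _ in range(pop_size):
            a, b = pick(), pick()
            cut = rng.randrange(bits)
            child = a[:cut] + b[cut:]  # one-point crossover
            # Bit-flip mutation at a hand-tuned rate
            child = [g ^ (rng.random() < mutation_rate) for g in child]
            nxt.append(child)
        pop = nxt
    return max(fitness(ind) for ind in pop)

print(evolve())  # usually close to the optimum of 32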

Kurzweil believes that simple (ultra-)Moore's Law growth in computation will allow us to scale up these two technologies to the point where we can just turn them loose (maybe with a dash or two of learning about the human brain's structure) and get intelligent beings; in his view we have already abstracted the neuron adequately, and need only evolve large enough, correctly connected neural nets -- the structures themselves will take over from there. While it's a beguiling scenario, my opinion is that we will likely need new insights and will have to actively guide their development. Simply creating some sort of neuronal evolutionary soup leaves us in a combinatorial space beyond comprehension in size; waiting for a human brain to evolve in that environment would, in my opinion, take eons.
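To put a rough number on that combinatorial space, here's a back-of-the-envelope calculation of my own (counting only the presence or absence of each directed connection, and ignoring connection weights entirely, which only makes the space bigger):

```python
from math import log10

def wiring_patterns_log10(n):
    """log10 of the number of directed wiring diagrams on n neurons: each of
    the n*(n-1) ordered pairs either has a connection or not, so 2**(n*(n-1))."""
    return n * (n - 1) * log10(2)

# Even 100 neurons admit ~10^2980 wiring patterns -- already beyond any search.
print(round(wiring_patterns_log10(100)))

# At roughly brain scale (~10^11 neurons), log10 of the count is itself ~3e21:
# a number of wiring patterns with on the order of 10^21 digits.
print(f"{wiring_patterns_log10(10**11):.2e}")
```

Evolution on Earth had a planet-sized population and billions of years to search its version of this space; the idea that a digital soup will shortcut it on schedule is the part I can't take on faith.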

Kurzweil takes on Roger Penrose's The Emperor's New Mind, which was already old news when he was writing but is still an influential book. I read TENM shortly after it came out, and while the details have long since faded, I was unconvinced by Penrose's arguments, which seemed to amount to the assertion that intelligence (or consciousness?) requires some non-physical phenomenon -- or at least new physics that we don't yet understand. In the end Penrose suggests that intelligence derives, at bottom, from quantum processes. Let me stress that my IQ is probably half of Penrose's, and that finding my accomplishments stacked up next to his would require a microscope. I'm also not a consciousness researcher (but then, neither is Penrose). Still, I don't yet see any reason to invoke new physics (beyond possibly deepening our understanding of nonlinear dynamics and complexity). There is still a lot of wiggle room for well-understood physics to generate poorly-understood macroscopic phenomena.

So here, at least, I agree with Kurzweil: I'm not convinced by Penrose's anti-strong AI arguments (many of which, according to John McCarthy, were already well refuted before TENM was published). If intelligence is a property exhibited by matter, I see no particular reason to believe that we will always be unable to create matter that thinks.

On to Part II
