If only political debates were this interesting. A quick-witted moderator, two opposing but well-behaved thinkers, and a central question any MIT loyalist would love: will humans ever build conscious, volitional, or spiritual machines?
The surprisingly funny discussion, held in a packed Stata Center auditorium in November, was staged in honor of the 70th anniversary of Alan Turing's paper "On Computable Numbers," which established the theoretical limits of computation. At one podium stood Ray Kurzweil '70: inventor, best-selling author, and stubborn artificial-intelligence optimist. His opponent: David Gelernter, Yale University computer scientist, software pioneer, and occasional conservative columnist. Rodney Brooks, director of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the event's host, kept the two on track while demonstrating a keen sense of geek humor. A few minutes after Kurzweil joked that he thought in logarithmic time scales, Brooks, seeing the inventor's allotted minutes ticking down, interrupted to say that the logarithm of his remaining time was 1.
Kurzweil's answer to the big question was a qualified yes. He believes that the exponential progress of technology will lead to machines capable of passing the Turing test: they'll be able to carry on a conversation with a human being, and the human won't be able to tell that the other party is a computer. Kurzweil says we'll get there in 25 or 30 years. But whether these machines will truly be conscious or merely display what he calls "apparent consciousness," he adds, is another question, and one for which he has no definite answer.
Gelernter did his best to extinguish Kurzweil’s optimism. Acknowledging that he was probably in the minority at MIT, he said, “I appreciate your willingness to listen to unpopular positions, and I’ll try to make the most of it by being as unpopular as I can.”
He didn’t entirely dismiss the idea that humans could build conscious robots but insisted that we’re not on the right track now. If we want to make real progress, he said, we should focus on consciousness itself, particularly the human variety. Software, he said, is not going to get us there. Instead, we should be studying how the brain’s chemistry gives rise to the human mind.
Neither budged much from his initial stance, no major disagreements were resolved, and there was no clear winner, except in the one-liner department. At the close of his introductory remarks, questioning why we need superintelligent machines at all when we can just make more humans, Gelernter said, "Consult me afterward, and I'll let you know how it's done."