Will Machines Ever Be Conscious?

A debate worthy of Alan Turing.

If only political debates were this interesting. A quick-witted moderator, two opposing but well-behaved thinkers, and a central question any MIT loyalist would love: will humans ever build conscious, volitional, or spiritual machines?

CSAIL’s Rodney Brooks (center) kept Ray Kurzweil ’70 (left) and David Gelernter on track–and laughing–at an MIT debate on the future of artificial intelligence.

The surprisingly funny discussion, held in a packed Stata Center auditorium in November, was staged in honor of the 70th anniversary of Alan Turing’s paper “On Computable Numbers,” which established the theoretical limits of computation. At one podium stood Ray Kurzweil ’70–inventor, best-selling author, and stubborn artificial-intelligence optimist. His opponent: David Gelernter, Yale University computer scientist, software pioneer, and occasional conservative columnist. Rodney Brooks, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the event’s host, kept the two on track while demonstrating a keen sense of geek humor. A few minutes after Kurzweil joked that he thought in logarithmic time scales, Brooks, seeing the inventor’s allotted minutes ticking down, interrupted to say that the logarithm of his remaining time was 1.


Kurzweil’s answer to the big question was a qualified yes. He believes that the exponential progress of technology will lead to machines capable of acing the Turing test: they’ll be able to carry on a conversation with a human being, and the human will not be able to tell that the other party is a computer. And Kurzweil says we’ll get there in 25 or 30 years. But he adds that whether these robots will truly be conscious or simply display what he calls “apparent consciousness” is another question, and one for which he has no definite answer.

Gelernter did his best to extinguish Kurzweil’s optimism. Acknowledging that he was probably in the minority at MIT, he said, “I appreciate your willingness to listen to unpopular positions, and I’ll try to make the most of it by being as unpopular as I can.”

He didn’t entirely dismiss the idea that humans could build conscious robots but insisted that we’re not on the right track now. If we want to make real progress, he said, we should focus on consciousness itself, particularly the human variety. Software, he said, is not going to get us there. Instead, we should be studying how the brain’s chemistry gives rise to the human mind.

Neither budged much from his initial stance, no major disagreements were resolved, and there wasn’t a clear winner. Except in the one-liner department. At the close of his introductory remarks, questioning why we need superintelligent machines at all when we can just make more humans, Gelernter said, “Consult me afterward, and I’ll let you know how it’s done.”
