
Will Machines Ever Be Conscious?

A debate worthy of Alan Turing.

March 12, 2007

If only political debates were this interesting. A quick-witted moderator, two opposing but well-behaved thinkers, and a central question any MIT loyalist would love: will humans ever build conscious, volitional, or spiritual machines?

CSAIL’s Rodney Brooks (center) kept Ray Kurzweil ’70 (left) and David Gelernter on track, and laughing, at an MIT debate on the future of artificial intelligence.

The surprisingly funny discussion, held in a packed Stata Center auditorium in November, was staged in honor of the 70th anniversary of Alan Turing’s paper “On Computable Numbers,” which established the theoretical limits of what computers can do. At one podium stood Ray Kurzweil ’70: inventor, best-selling author, and stubborn artificial-intelligence optimist. His opponent: David Gelernter, Yale University computer scientist, software pioneer, and occasional conservative columnist. Rodney Brooks, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the event’s host, kept the two on track while demonstrating a keen sense of geek humor. A few minutes after Kurzweil joked that he thought in logarithmic time scales, Brooks, seeing the inventor’s allotted minutes ticking down, interrupted to say that the logarithm of his remaining time was 1.

Kurzweil’s answer to the big question was a qualified yes. He believes that the exponential progress of technology will lead to machines capable of acing the Turing test: they’ll be able to carry on a conversation with a human being, and the human will not be able to tell that the other party is a computer. And Kurzweil says we’ll get there in 25 or 30 years. But he adds that whether these robots will truly be conscious or simply display what he calls “apparent consciousness” is another question, and one for which he has no definite answer.

Gelernter did his best to extinguish Kurzweil’s optimism. Acknowledging that he was probably in the minority at MIT, he said, “I appreciate your willingness to listen to unpopular positions, and I’ll try to make the most of it by being as unpopular as I can.”

He didn’t entirely dismiss the idea that humans could build conscious robots but insisted that we’re not on the right track now. If we want to make real progress, he said, we should focus on consciousness itself, particularly the human variety. Software, he said, is not going to get us there. Instead, we should be studying how the brain’s chemistry gives rise to the human mind.

Neither budged much from his initial stance, no major disagreements were resolved, and there wasn’t a clear winner. Except in the one-liner department. At the close of his introductory remarks, questioning why we need superintelligent machines at all when we can just make more humans, Gelernter said, “Consult me afterward, and I’ll let you know how it’s done.”
