
What Makes a Mind? Kurzweil and Google May be Surprised

One AI researcher suggests that an ambitious plan to build a more intelligent machine may be flawed.
January 21, 2013

After writing about Ray Kurzweil’s ambitious plan to create a super-intelligent personal assistant in his new job at Google (see “Ray Kurzweil Plans to Create a Mind at Google—and Have it Serve You”), I sent a note to Boris Katz, a researcher in MIT’s Computer Science and Artificial Intelligence Laboratory who has spent decades trying to give machines the ability to parse the information conveyed through language, to ask what he makes of the endeavor.

A cross section showing the somatosensory cortex of a mouse. Neurons, at the bottom, and dendrites, reaching up, have been colored by green fluorescent protein from jellyfish (CC BY-SA 2.0).

Here’s what Katz has to say about Kurzweil’s new project:

I certainly agree with Ray that understanding intelligence is a very important project, but I don’t believe that at this point we know enough about how the brain works to be able to build the kind of understanding he says he is interested in into a product. 

I previously interviewed Katz for an article about Apple’s Siri (see “Social Intelligence”). He explained that constructing meaning from language goes well beyond learning vocabulary and grammar, often relying on a lifetime of experience with the world. This is why Siri is only capable of responding to a fairly narrow set of questions or commands, even if Apple’s designers have done a clever job of making Siri seem as if its understanding goes much deeper.

Kurzweil believes he can build something approaching human intelligence by constructing a model of a brain based on simple principles and then having that model gorge itself on enormous quantities of information—everything Google indexes from the Web and beyond.

There are reasons to believe this type of approach might just work. Google’s own language translation technology has made remarkable strides simply by ingesting vast quantities of documents already translated by hand and then applying statistical learning techniques to figure out what translations work best. Likewise, IBM’s Watson demonstrated a remarkable ability to answer Jeopardy questions by applying similar statistical techniques to information gathered from sources including the website Wikipedia (see “How IBM Plans to Win Jeopardy!”). But this is very different from the way humans develop an understanding of the world and of language.
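To give a flavor of what “applying statistical learning techniques” to hand-translated text can mean at its very simplest, here is a toy sketch—not Google’s actual method—that guesses word-for-word translations purely from co-occurrence counts in a handful of invented sentence pairs. Real systems are vastly more sophisticated; this only illustrates the idea of learning translations from parallel data rather than from hand-written rules.

```python
# Toy illustration of learning translations from parallel text by counting
# co-occurrences. The corpus below is invented purely for demonstration.
from collections import Counter, defaultdict

# Tiny "hand-translated" corpus of (English, French) sentence pairs.
parallel = [
    ("the house", "la maison"),
    ("the car", "la voiture"),
    ("a house", "une maison"),
    ("a car", "une voiture"),
]

# Count how often each target word appears alongside each source word.
cooccur = defaultdict(Counter)
for src, tgt in parallel:
    for s in src.split():
        for t in tgt.split():
            cooccur[s][t] += 1

# The most frequently co-occurring target word becomes the guessed translation.
lexicon = {s: counts.most_common(1)[0][0] for s, counts in cooccur.items()}
print(lexicon)  # {'the': 'la', 'house': 'maison', 'car': 'voiture', 'a': 'une'}
```

With enough parallel text, statistics like these start to capture which translations “work best” without anyone explaining grammar or meaning to the machine—which is exactly the point Katz presses on below.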

Even if it does not accurately represent how the brain works, the question is whether Kurzweil’s approach can mimic that understanding well enough to produce genuinely useful answers to sophisticated questions, or whether it will eventually hit a wall.

Katz continues:

It is quite possible that this approach will allow his group to improve the precision of Google’s search results, or to better guess what article a particular user may want to read. However, the Watson system was created to play a game, and it is great at doing that, but it has no common sense and no real understanding of even the concepts it gives answers about. I am afraid that giving a Watson-like system an order of magnitude more data will not change this fact.

Katz’s objections make a lot of sense to me. But I think Kurzweil’s project could still have a very important impact. Even if it completely fails to deliver the kind of results Kurzweil and Google are hoping for, it will push the statistical approach to AI further than ever. Either way, it may show where AI research should be focusing its efforts and help us understand, a little better than before, what makes a mind.
