
What Makes a Mind? Kurzweil and Google May be Surprised

One AI researcher suggests that an ambitious plan to build a more intelligent machine may be flawed.
January 21, 2013

After writing about Ray Kurzweil’s ambitious plan to create a super-intelligent personal assistant in his new job at Google (see “Ray Kurzweil Plans to Create a Mind at Google—and Have it Serve You”), I sent a note to Boris Katz, a researcher in MIT’s Computer Science and Artificial Intelligence Lab who has spent decades trying to give machines the ability to parse the information conveyed through language, and asked what he makes of the endeavor.

A cross section showing the somatosensory cortex of a mouse. Neurons, at the bottom, and dendrites, reaching up, have been colored by green fluorescent protein from jellyfish (CC BY-SA 2.0).

Here’s what Katz has to say about Kurzweil’s new project:

I certainly agree with Ray that understanding intelligence is a very important project, but I don’t believe that at this point we know enough about how the brain works to be able to build the kind of understanding he says he is interested in into a product. 

I previously interviewed Katz for an article about Apple’s Siri (see “Social Intelligence”). He explained that constructing meaning from language goes well beyond learning vocabulary and grammar, often relying on a lifetime of experience with the world. This is why Siri can respond only to a fairly narrow set of questions and commands, even though Apple’s designers have done a clever job of making its understanding seem to go much deeper.
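To get a feel for how shallow that kind of response matching can be, here is a minimal sketch of template-based intent handling, the general class of technique behind narrow voice commands. The patterns and replies are invented for illustration; this is not Apple’s actual implementation.

```python
import re

# Hypothetical intent templates: each maps a regular expression to a
# canned handler. Anything outside these patterns is simply not understood.
INTENTS = [
    (re.compile(r"what(?:'s| is) the weather in (\w+)", re.I),
     lambda m: f"Looking up the weather in {m.group(1)}..."),
    (re.compile(r"set an alarm for ([\w: ]+[ap]m)", re.I),
     lambda m: f"Alarm set for {m.group(1).strip()}."),
]

def respond(utterance: str) -> str:
    for pattern, handler in INTENTS:
        match = pattern.search(utterance)
        if match:
            return handler(match)
    # No template matched: there is no deeper model of the world to fall back on.
    return "Sorry, I don't understand."

print(respond("What's the weather in Boston"))       # matches a template
print(respond("Will I need an umbrella tomorrow?"))  # fails: needs world knowledge
```

The second query fails not because it is harder to parse, but because answering it requires connecting weather to umbrellas, exactly the kind of lived-in knowledge Katz is pointing to.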

Kurzweil believes he can build something approaching human intelligence by constructing a model of a brain based on simple principles and then having that model gorge itself on enormous quantities of information—everything Google indexes from the Web and beyond.

There are reasons to believe this type of approach might just work. Google’s own language translation technology has made remarkable strides simply by ingesting vast quantities of documents already translated by hand and then applying statistical learning techniques to figure out which translations work best. Likewise, IBM’s Watson demonstrated an impressive ability to answer Jeopardy! questions by applying similar statistical techniques to information gathered from sources including Wikipedia (see “How IBM Plans to Win Jeopardy!”). But this is very different from the way humans develop an understanding of the world and of language.
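As a rough illustration of what learning translations statistically means in practice, here is a toy sketch. It uses a made-up, word-aligned parallel corpus and simple counting, nowhere near the scale or sophistication of Google’s actual models: for each word, it learns the translation that co-occurs with it most often.

```python
from collections import Counter, defaultdict

# Toy "parallel corpus": English sentences paired with hand-made French
# translations. Real systems ingest millions of documents and must learn
# the word alignments; here alignment is simply by position.
corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
]

counts = defaultdict(Counter)
for english, french in corpus:
    for e_word, f_word in zip(english.split(), french.split()):
        counts[e_word][f_word] += 1

def translate(sentence: str) -> str:
    # Pick, for each word, the pairing seen most often in the corpus.
    # (Unseen words would need fallback handling in anything real.)
    return " ".join(counts[w].most_common(1)[0][0] for w in sentence.split())

print(translate("the dog eats"))  # -> "le chien mange"
```

The system produces a plausible translation of a sentence it has never seen, yet it has no idea what a dog is or what eating means, which is the sense in which this differs from how humans come to understand language.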

If Kurzweil’s model does not accurately represent how the brain works, the question is whether his approach will hit a wall in merely mimicking understanding, that is, in producing genuinely useful responses to very sophisticated questions.

Katz continues:

It is quite possible that this approach will allow his group to improve precision of Google’s search results, or to better guess what article a particular user may want to read. However, the Watson system was created to play a game, and it is great at doing that, but it had no common sense and no real understanding of even the concepts that it gave answers about. I am afraid that giving a Watson-like system an order of magnitude more data will not change this fact.

Katz’s objections make a lot of sense to me. But I think Kurzweil’s project could still have an important impact. Even if it fails to deliver the results Kurzweil and Google are hoping for, it will push the statistical approach to AI further than ever. Either way, it may show where AI research should focus its efforts and bring us a little closer to understanding what makes a mind.
