The big announcements at Google’s I/O event in San Francisco Wednesday didn’t mention Web search, the technology that got the company started and made it so successful. But in a small session later that day, the inventor and futurist Ray Kurzweil talked confidently about making Google’s current search technology obsolete.

Kurzweil joined the company 18 months ago to lead a project aimed at creating software capable of understanding text as well as humans can. Yesterday, he told the audience that progress on this effort was good, and that it would result in an entirely new way to search the Web and manage information.

“You would interact with it like you would a human assistant,” said Kurzweil. You could ask the software a question just as you would ask another person, he said, and trust it to return a fully reasoned answer rather than the list of links Google’s search engine offers today. Such a virtual assistant might also take the initiative, Kurzweil said, coming forward when new information had appeared that was related to an earlier query or conversation.

Kurzweil said the technology will eventually be as widely used as Google’s current search engine, and its scope will extend beyond text documents. He also predicted that specialized chips designed to implement key parts of the information processing involved would make the technology cheaper to deploy.

Kurzweil gave few details of how the software would work, but he said it was based on the theory of intelligence expounded in his 2012 book How to Create a Mind. Kurzweil’s theory is that all functions in the neocortex, the wrinkled outer layer of our brains that is the seat of reasoning and abstract thought, are based on systems that use a hierarchy of pattern recognition to process information. Each layer, he argues, uses the output of the ones below it to work with increasingly complex and abstract patterns.

In the case of reading text, Kurzweil claims, our brain first recognizes individual letters. It can then proceed to understand the words they form; then the meaning of phrases or sentences; and eventually the thought or argument the person who wrote them is trying to convey.
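That layered progression can be caricatured in a few lines of code. This is a deliberately simplistic sketch, not Google's system or Kurzweil's model: each hypothetical function stands in for one layer of the hierarchy, consuming the output of the layer below it, and the tiny `lexicon` vocabulary is invented for the example.

```python
# Toy three-level hierarchy: each layer recognizes patterns built
# from the output of the layer below, echoing the letters -> words
# -> meaning progression described above.

def recognize_letters(stream):
    """Lowest layer: keep only letter and space tokens from raw input."""
    return [ch for ch in stream if ch.isalpha() or ch == " "]

def recognize_words(letters):
    """Middle layer: group letter tokens into word patterns."""
    return "".join(letters).split()

def recognize_meaning(words, lexicon):
    """Top layer: judge how much of the phrase is made of known patterns."""
    known = [w for w in words if w.lower() in lexicon]
    return {"words": words, "recognized": known,
            "coverage": len(known) / len(words) if words else 0.0}

lexicon = {"the", "cat", "sat"}  # hypothetical vocabulary for this sketch
result = recognize_meaning(
    recognize_words(recognize_letters("The cat sat!")), lexicon)
print(result["coverage"])  # 1.0 -- every word matched a known pattern
```

The point of the sketch is only the shape: no layer sees raw input except the bottom one, and the top layer works entirely with the abstractions the layers beneath it produced.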

Google’s current search technology is able to understand only the lower levels of that hierarchy, such as synonyms for individual words, said Kurzweil. It can’t synthesize that low-level knowledge to build up an understanding of higher-level concepts.

The idea of building intelligent software that looks for successive levels of patterns in data isn’t exclusive to Kurzweil. He said his group is using a technique known as “hierarchical hidden Markov models,” in use for over a decade. More recently, Google, Facebook, and other companies have seen major leaps in speech recognition and other areas using a newer approach known as deep learning, which is based on large networks of simulated neurons arranged into hierarchies (see “Google Puts Its Virtual Brain to Work”).
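For readers unfamiliar with the technique, the basic building block of a hierarchical hidden Markov model is the plain hidden Markov model: a sequence of hidden states inferred from observations via the Viterbi algorithm. The sketch below is a hedged illustration, not Kurzweil's model; the two-state "word vs. gap" setup and all probabilities are invented for the example.

```python
# A minimal (non-hierarchical) hidden Markov model decoded with the
# Viterbi algorithm: given observed characters, recover the most
# likely hidden-state sequence. All probabilities are made up.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state path for the observations."""
    # Each entry maps state -> (best probability so far, best path so far).
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        layer = {}
        for s in states:
            prob, path = max(
                (V[-1][prev][0] * trans_p[prev][s] * emit_p[s][o],
                 V[-1][prev][1])
                for prev in states)
            layer[s] = (prob, path + [s])
        V.append(layer)
    _, best_path = max(V[-1].values())
    return best_path

# Invented two-state model: is each character part of a word or a gap?
states = ("word", "gap")
start = {"word": 0.6, "gap": 0.4}
trans = {"word": {"word": 0.8, "gap": 0.2},
         "gap":  {"word": 0.5, "gap": 0.5}}
emit = {"word": {"a": 0.9, " ": 0.1},
        "gap":  {"a": 0.2, " ": 0.8}}

print(viterbi(["a", "a", " ", "a"], states, start, trans, emit))
# -> ['word', 'word', 'gap', 'word']
```

A hierarchical variant of the kind Kurzweil describes would stack such recognizers, with the decoded state sequence of one level serving as the observation sequence of the level above.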

Yet no one has created software that can construct complex knowledge or understanding from simple building blocks, said Kurzweil. “That has so far eluded the AI field,” he said. “We have a model that I believe will solve this key problem of being able to add to the hierarchy automatically.”

Kurzweil’s claims about human intelligence and the neocortex are somewhat controversial. Gary Marcus, a psychology professor at NYU, has said that the theory is simplistic and unsupported by evidence from neuroscience.

Kurzweil said Wednesday that his ideas were backed by evidence and talked of using them to create software with faculties not far removed from those of humans. He has estimated that to functionally emulate the human brain, a computer would need to perform around 100 trillion calculations per second. “It would be hard to provide that to a billion users, although I’ve discussed that with Larry Page and he thinks it’s possible,” he said.

Kurzweil even gave a qualified “yes” when asked if systems built that way might ever become conscious. “Whether or not an entity has consciousness is not a scientific question, because there’s no falsifiable experiment you could run,” he said. “People disagree about animals, and they will disagree about AIs. My leap of faith is that if an entity seems conscious and to be having the experiences it claims, then it is conscious.” 
