Forget AlphaGo—DeepMind Has a More Interesting Step Toward General AI

Researchers are testing algorithms that display human-like ingenuity in learning.
June 14, 2017

AlphaGo and self-driving cars are amazingly clever, but neither represents a very big leap toward general artificial intelligence. Fortunately, some AI researchers are developing ways of broadening machine intelligence.

Researchers at DeepMind, the company that created the champion Go-playing program AlphaGo, are working on an approach that could prove significant in the quest to make machines as intelligent as we are.

In two papers published this week and reported by New Scientist, researchers at the Alphabet subsidiary describe efforts to teach computers about relational reasoning, a cognitive capability that is foundational to human intelligence.

Simply put, relational reasoning is the ability to consider relationships between different mental representations, such as objects, words, or ideas. This kind of reasoning is crucial both to human cognitive development and to solving just about any problem.

Most existing machine-learning systems don’t try to understand the relationship between concepts. A vision system can identify a dog or a cat in a picture, for example, but it doesn’t know that the dog is chasing the cat.

The two systems developed at DeepMind address this gap by modifying existing machine-learning methods so that they can learn about physical relationships between static objects, as well as the behavior of moving objects over time.

The researchers demonstrate the first capability using CLEVR, a data set of rendered scenes containing simple 3D objects, paired with questions about them. After training, the system can answer whether one object is in front of another, or which object is closest. The results are dramatically better than anything achieved before, even exceeding human performance in some cases.
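
The module at the heart of that first system is a relation network: the same small learned function is applied to every pair of objects in a scene, and its outputs are summed before a final network produces an answer. Here is a minimal sketch of that idea in PyTorch; the layer sizes, the answer vocabulary, and the omission of the question embedding are all simplifications for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    """Minimal relation-network module: a shared MLP g scores every
    ordered pair of objects, the pair codes are summed (making the
    module order-invariant), and a second MLP f maps the sum to
    answer logits. All sizes here are illustrative assumptions."""

    def __init__(self, obj_dim=64, hidden=256, n_answers=10):
        super().__init__()
        self.g = nn.Sequential(
            nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.f = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_answers),
        )

    def forward(self, objects):  # objects: (batch, n_objects, obj_dim)
        b, n, d = objects.shape
        # Build every ordered pair (o_i, o_j) -> (batch, n*n, 2*obj_dim).
        o_i = objects.unsqueeze(2).expand(b, n, n, d)
        o_j = objects.unsqueeze(1).expand(b, n, n, d)
        pairs = torch.cat([o_i, o_j], dim=-1).reshape(b, n * n, 2 * d)
        # Shared g over all pairs, summed; the paper also feeds a
        # question embedding into g, omitted here for brevity.
        return self.f(self.g(pairs).sum(dim=1))
```

A toy call such as RelationNetwork()(torch.randn(2, 6, 64)) returns one answer-logit vector per scene; in the paper's setting the object vectors themselves come from a convolutional network's feature map rather than being hand-supplied.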

In the second paper, the researchers show how a similarly modified machine-learning system can learn to predict the behavior of simple objects in two dimensions. We do this sort of thing all the time in three dimensions, when catching a ball or driving a car, for example. In fact, psychology experiments show that humans employ an “intuitive physics” engine when predicting the effects of an action on objects. That is a lot more sophisticated and powerful than simply recognizing the objects in a scene.
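
The same pairwise trick extends to prediction over time: compute an "effect" for each pair of objects, sum the effects arriving at each object, and use that sum to update the object's state. The sketch below shows one such interaction step under assumed state layouts and layer sizes; it illustrates the general idea, not the architecture from the paper, which learns to make these predictions from raw video frames.

```python
import torch
import torch.nn as nn

class InteractionStep(nn.Module):
    """One interaction-network-style dynamics step: a shared MLP turns
    each ordered pair of object states into an effect, incoming effects
    are summed per object, and a second MLP predicts the state change.
    The state layout (x, y, vx, vy) and all sizes are assumptions."""

    def __init__(self, state_dim=4, effect_dim=32):
        super().__init__()
        self.relation = nn.Sequential(
            nn.Linear(2 * state_dim, effect_dim), nn.ReLU(),
            nn.Linear(effect_dim, effect_dim),
        )
        self.update = nn.Sequential(
            nn.Linear(state_dim + effect_dim, effect_dim), nn.ReLU(),
            nn.Linear(effect_dim, state_dim),
        )

    def forward(self, states):  # states: (n_objects, state_dim)
        n, d = states.shape
        receiver = states.unsqueeze(1).expand(n, n, d)
        sender = states.unsqueeze(0).expand(n, n, d)
        effects = self.relation(torch.cat([receiver, sender], dim=-1))
        # Mask out self-interactions, then sum effects per receiver.
        mask = 1.0 - torch.eye(n).unsqueeze(-1)
        agg = (effects * mask).sum(dim=1)
        # Residual update: next state = current state + learned delta.
        return states + self.update(torch.cat([states, agg], dim=-1))
```

Rolled forward in a loop (step = InteractionStep(); states = step(states)), this produces a predicted trajectory for each object, which is the kind of "intuitive physics" forecasting the paragraph above describes.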

While the advances may not be eye-popping breakthroughs, they are exactly the type of research that’s needed. As impressive as today’s AI is, most of it involves having a machine learn to perform an incredibly narrow task. Without new ideas, AI systems will remain incapable of things like holding a real conversation or solving difficult problems on their own.

Sam Gershman, a professor of psychology at Harvard who studies intelligence, says that researchers will need to mimic human cognition more closely if they want artificial intelligence to resemble our own.

“Our brains represent the world in terms of relations between objects, agents, and events,” he told MIT Technology Review via e-mail. “Representing the world in this way massively constrains the kinds of inferences we draw from data, making it harder to learn some things and easier to learn other things. So in that sense this work is a step in the right direction: building in human-like constraints that enable machines to more easily learn tasks that are natural for humans.”

However, Gershman cautioned against overstating the significance of DeepMind's work. “Super-human performance on any particular machine learning task does not imply super-human intelligence,” he said.

Relational reasoning is also just one element of human intelligence. Gershman and others wrote a paper last year that explores the other aspects of human intelligence that are currently missing from AI. Besides reasoning about relationships, for instance, they noted that humans are capable of compositionality, or building new ideas from existing knowledge in order to solve problems. 

“Relational reasoning is a necessary but not sufficient condition for human-like intelligence,” Gershman said.

(Read More: “DeepMind’s Neural Network Teaches AI to Reason About the World”)

