MIT Technology Review

Forget AlphaGo—DeepMind Has a More Interesting Step Toward General AI

Researchers are testing algorithms that display human-like ingenuity in learning.

AlphaGo and self-driving cars are amazingly clever, but neither represents a very big leap toward general artificial intelligence. Fortunately, some AI researchers are developing ways of broadening machine intelligence.

Researchers at DeepMind, which created the champion Go-playing program AlphaGo, are working on an approach that could prove significant in the quest to make machines as intelligent as we are.


In two papers published this week and reported by New Scientist, researchers at the Alphabet subsidiary describe efforts to teach computers about relational reasoning, a cognitive capability that is foundational to human intelligence.


Simply put, relational reasoning is the ability to consider relationships between different mental representations, such as objects, words, or ideas. This kind of reasoning is both crucial to human cognitive development and vital to solving just about any problem.

Most existing machine-learning systems don’t try to understand the relationship between concepts. A vision system can identify a dog or a cat in a picture, for example, but it doesn’t know that the dog is chasing the cat.

The two systems developed at DeepMind address this limitation by modifying existing machine-learning methods so they can learn about physical relationships between static objects, as well as about the behavior of moving objects over time.

The researchers demonstrate the first capability using CLEVR, a data set of images showing simple 3-D objects, paired with questions about them. After training, the system can answer questions such as whether one object is in front of another, or which object is closest to another. Its results are dramatically better than anything achieved before, even exceeding human performance in some cases.
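To give a concrete flavor of the approach, here is a minimal sketch of that pairwise idea in PyTorch. It is not DeepMind's code: the layer sizes, feature dimensions, and answer count are illustrative assumptions, and it presumes that object features and a question embedding have already been extracted by upstream networks.

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    """Sketch of a relation-network-style module: it scores every ordered
    pair of object features (conditioned on the question) with a shared
    network g, sums the scores, and decodes an answer with f.
    All sizes are illustrative, not DeepMind's actual settings."""

    def __init__(self, obj_dim=64, q_dim=32, hidden=256, n_answers=28):
        super().__init__()
        self.g = nn.Sequential(  # scores one (object_i, object_j, question) triple
            nn.Linear(2 * obj_dim + q_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.f = nn.Sequential(  # maps the summed pair scores to answer logits
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_answers),
        )

    def forward(self, objects, question):
        # objects: (batch, n_objects, obj_dim); question: (batch, q_dim)
        b, n, d = objects.shape
        o_i = objects.unsqueeze(2).expand(b, n, n, d)   # object i, repeated
        o_j = objects.unsqueeze(1).expand(b, n, n, d)   # object j, repeated
        q = question[:, None, None, :].expand(b, n, n, question.shape[-1])
        pairs = torch.cat([o_i, o_j, q], dim=-1)        # every ordered pair
        # Summing over all pairs makes the result independent of object order.
        return self.f(self.g(pairs).sum(dim=(1, 2)))
```

Answering a question like "is the cube in front of the sphere?" then reduces to learning which pairwise comparisons matter for a given question.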

In the second paper, the researchers show how a similarly modified machine-learning system can learn to predict the behavior of simple objects in two dimensions. We do this sort of thing all the time in three dimensions, when catching a ball or driving a car, for example. Indeed, psychology experiments suggest that humans employ an "intuitive physics" engine when predicting how an action will affect objects. That kind of prediction is far more sophisticated and powerful than simply recognizing the objects in a scene.
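The second capability can be sketched in a similar spirit: a hypothetical one-step dynamics model that computes an "effect" for every pair of objects, sums the effects acting on each object, and uses them to update that object's state. Again, the architecture and sizes below are assumptions for illustration, not DeepMind's published model.

```python
import torch
import torch.nn as nn

class InteractionStep(nn.Module):
    """Sketch of one step of interaction-style dynamics prediction.
    Each object's state is assumed to be a 4-vector (x, y, vx, vy)."""

    def __init__(self, state_dim=4, hidden=64):
        super().__init__()
        self.pairwise = nn.Sequential(  # effect that object j exerts on object i
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        self.update = nn.Sequential(    # next state from current state + effects
            nn.Linear(state_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, states):
        # states: (batch, n_objects, state_dim) -> predicted next states
        b, n, d = states.shape
        s_i = states.unsqueeze(2).expand(b, n, n, d)
        s_j = states.unsqueeze(1).expand(b, n, n, d)
        # Sum the effects of all other objects on each object i.
        effects = self.pairwise(torch.cat([s_i, s_j], dim=-1)).sum(dim=2)
        return states + self.update(torch.cat([states, effects], dim=-1))

# Rolling the model forward predicts a short trajectory, frame by frame.
model = InteractionStep()
states = torch.randn(1, 5, 4)   # one scene with five objects
for _ in range(10):
    states = model(states)
```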

While the advances may not be eye-popping breakthroughs, they are exactly the type of research that’s needed. As impressive as today’s AI is, most of it involves having a machine learn to perform an incredibly narrow task. Without new ideas, AI systems will remain incapable of things like holding a real conversation or solving difficult problems on their own.

Sam Gershman, a professor of psychology at Harvard who studies intelligence, says researchers will need to model human cognition more closely if they want artificial intelligence to resemble our own.


“Our brains represent the world in terms of relations between objects, agents, and events,” he told MIT Technology Review via e-mail. “Representing the world in this way massively constrains the kinds of inferences we draw from data, making it harder to learn some things and easier to learn other things. So in that sense this work is a step in the right direction: building in human-like constraints that enable machines to more easily learn tasks that are natural for humans.”

However, Gershman cautioned against overstating the significance of DeepMind's work. “Super-human performance on any particular machine learning task does not imply super-human intelligence,” he said.

Relational reasoning is also just one element of human intelligence. Gershman and others wrote a paper last year exploring other aspects of human intelligence that are currently missing from AI. Besides reasoning about relationships, for instance, they noted that humans are capable of compositionality: building new ideas from existing knowledge in order to solve problems.

“Relational reasoning is a necessary but not sufficient condition for human-like intelligence,” Gershman said.

(Read More: “DeepMind’s Neural Network Teaches AI to Reason About the World”)
