Everyone knows the Turing test. But almost no one remembers Alan Turing’s suggestion that to achieve true intelligence, you should design a machine that is like a child. He said the real secret to human intelligence is our ability to learn.
Thirty years of developmental cognitive science have shown that children are the best learners on earth. But how do they learn so much so quickly? For the last 15 years developmental cognitive scientists and computer scientists have been trying to answer this question, and the answers are shaping new kinds of machine learning (see “Can This Man Make AI More Human?”).
Many of the recent advances in AI have come through techniques like deep learning, which can detect complicated statistical regularities in enormous data sets. Computers can suddenly do things that were impossible before, like labeling images on the Internet.
The trouble with this sort of purely statistical machine learning, though, is that it depends on data that’s already been selected by humans. Machines need gigantic human-generated data sets just to be able to look at a new picture and say “kitty-cat!”—something a baby can do after seeing just a few examples.
An alternative in machine learning and cognitive science—the “probabilistic models” framework—takes a different approach. These systems formulate and test abstract hypotheses. Bayesian inference procedures have been particularly important. For example, you can mathematically describe a particular causal hypothesis as a directed graph that systematically generates a particular data pattern, and then calculate just how likely that hypothesis is to be true, given the data you see. Machines have become great at testing hypotheses against the data in this way, with consequences for everything from medical diagnosis to meteorology. We’ve shown that young children use data to evaluate hypotheses in a similar way.
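The kind of hypothesis testing described above can be sketched in a few lines of code. This is a minimal, illustrative example, not any specific system from the research: the hypotheses and data are invented (a toy machine that may or may not light up when a block is placed on it), and each hypothesis simply assigns a probability to that outcome. Bayes' rule then scores the competing hypotheses against the observed trials.

```python
# A toy sketch of Bayesian hypothesis evaluation. The hypotheses and
# data are hypothetical, chosen only to illustrate the idea: each
# hypothesis says how likely a machine is to light up when a block is
# placed on it, and we update our beliefs from observed trials.

def likelihood(hypothesis, data):
    """P(data | hypothesis) for independent yes/no trials."""
    p = hypothesis["p_light"]
    result = 1.0
    for lit in data:
        result *= p if lit else (1 - p)
    return result

def posterior(hypotheses, data):
    """Bayes' rule: P(h | data) is proportional to P(data | h) * P(h)."""
    unnormalized = {name: likelihood(h, data) * h["prior"]
                    for name, h in hypotheses.items()}
    total = sum(unnormalized.values())
    return {name: w / total for name, w in unnormalized.items()}

# Two competing causal hypotheses, equally plausible before any data.
hypotheses = {
    "block causes light":  {"p_light": 0.9, "prior": 0.5},
    "block is irrelevant": {"p_light": 0.1, "prior": 0.5},
}

# Observed trials: the machine lit up on 3 of 4 attempts.
data = [True, True, True, False]

print(posterior(hypotheses, data))
```

After just four observations, the posterior strongly favors the causal hypothesis, which is the essence of how a few examples can be enough when a learner tests explicit hypotheses rather than accumulating raw statistics.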
But there are two things even very young children do that are still far beyond the abilities of current computers. We are trying to understand these abilities both formally and empirically, and these investigations may allow us to design more powerful kinds of AI.
The really hard problem is deciding which hypotheses, out of all the infinite possibilities, are worth testing. Even preschoolers are remarkably good at coming up with brand new concepts and hypotheses in a creative and imaginative way. In fact, our research has shown that they can sometimes do this better than grown-ups.
A second area where children outshine computers is in their ability to go out and explore and experiment with the world around them—we call this “getting into everything.” Developmental cognitive scientists are just beginning to understand and formalize this kind of active learning.
The wildly creative imaginations and ceaseless exploration of young children may be the key to their impressive learning abilities. Studying those children can give us clues about how to design computers that can pass the more profound Turing test and be almost as smart as a three-year-old.
Alison Gopnik is a professor of psychology at the University of California, Berkeley.