The project brings computer scientists and engineers together with neuroscientists and cognitive psychologists to explore research that might lead to fundamental progress in artificial intelligence. Tenenbaum outlined the project, and his vision for advancing AI, at EmTech, a conference held at MIT this week by MIT Technology Review.
"Imagine we could build a machine that starts off like a baby and learns like a child," he said. "If we could do this it’d be the basis for artificial intelligence that is actually intelligent, machine learning that could actually learn.”
Some stunning advances have been made in AI in recent years, but these have largely been built upon a handful of key breakthroughs in machine learning, especially large, or deep, neural networks. Deep learning has, for instance, given computers the ability to recognize words in speech and faces in images as accurately as a person can. Deep learning also underpins spectacular progress in game-playing programs, including DeepMind's AlphaGo, and it has contributed to improvements in self-driving vehicles and robotics. But all these systems are missing something.
"None of these systems are truly intelligent," he said. "None of them have the flexible, common sense, general intelligence of a two year old, or even a one year old. So what’s missing? What’s the gap?"
Tenenbaum's research uses cognitive science to understand human intelligence. His work has, for example, explored how even small children are able to visualize aspects of the world using a kind of innate 3-D model, which gives humans a greater instinctive understanding of the physical world than any computer or robot has. "Children's play is really serious business," he said. "They're experiments. And that's what makes humans the smartest learners in the known universe."
Tenenbaum has also done groundbreaking work developing computer programs capable of mimicking some of the more elusive aspects of the human mind, often using probabilistic techniques. For instance, in 2015 he and two other researchers created computer programs capable of learning to recognize new handwritten characters, as well as certain objects in images, after seeing just a few examples. This is important because the best machine-learning programs typically require huge quantities of training data. iSee, a self-driving-car company that draws inspiration from this research, was spun out of Tenenbaum’s lab last year.
The Quest for Intelligence, announced in February, also seeks to explore the societal impact of artificial intelligence. This means accounting for the technology’s fundamental limitations or shortcomings, as well as issues such as algorithmic bias and explainability.
Tenenbaum notes that the original vision for artificial intelligence, a vision that is now more than 50 years old, sought to draw inspiration from human intelligence, but without much scientific grounding. “The fields of cognitive science and neuroscience are now more mature,” he says. “This should make this project special.”