DeepMind’s latest AI transfers its learning to new tasks
By using insights from one job to help it do another, a successful new artificial intelligence hints at a more versatile future for machine learning.
Backstory: Most algorithms can be trained on only one task and can't apply what they've learned there to a new one. A big hope for AI is to have systems take insights from one setting and apply them elsewhere, a capability known as transfer learning.
What's new: DeepMind built a new AI system called IMPALA that performs multiple tasks at once (in this case, playing 57 Atari games) and attempts to share what it learns among them. It showed signs of transferring knowledge from one game to another.
Why it matters: IMPALA was 10 times more data-efficient than a similar AI and achieved double its final score, a promising sign that transfer learning between tasks is achievable. And because a system like this learns using less processing power, it could help speed up the training of other kinds of AI.
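For readers curious what "sharing learning across tasks" can look like in practice, here is a minimal, hypothetical sketch in PyTorch. It is not DeepMind's actual IMPALA code or its V-trace training algorithm; it only illustrates the underlying idea the brief describes: one shared network is updated by experience from every game, while small per-task heads pick actions for each individual game. All class names, layer sizes, and numbers below are illustrative assumptions.

```python
# Hypothetical sketch of parameter sharing across tasks (not DeepMind's IMPALA).
import torch
import torch.nn as nn

class SharedMultiTaskPolicy(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, n_tasks: int):
        super().__init__()
        # Shared trunk: its parameters are updated by experience from *all*
        # tasks, which is where transfer between games can happen.
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        # One lightweight policy head per task (per game).
        self.heads = nn.ModuleList(
            [nn.Linear(256, n_actions) for _ in range(n_tasks)]
        )

    def forward(self, obs: torch.Tensor, task_id: int) -> torch.Tensor:
        features = self.trunk(obs)             # shared representation
        return self.heads[task_id](features)   # task-specific action logits

# Toy usage: one gradient step mixes (fake) experience from several tasks,
# so the shared trunk is shaped by all of them at once.
policy = SharedMultiTaskPolicy(obs_dim=32, n_actions=6, n_tasks=3)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

loss = 0.0
for task_id in range(3):
    obs = torch.randn(8, 32)                  # dummy observations for this task
    target_actions = torch.randint(0, 6, (8,))
    logits = policy(obs, task_id)
    loss = loss + nn.functional.cross_entropy(logits, target_actions)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the trunk is common to every game, features useful for one task can make learning another task cheaper, which is the data-efficiency benefit the brief points to.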