DeepMind’s latest AI transfers its learning to new tasks
By using insights from one job to help it do another, a successful new artificial intelligence hints at a more versatile future for machine learning.
Backstory: Most algorithms can be trained in only one domain and can’t apply what they’ve learned from one task to a new one. A big hope for AI is to have systems take insights from one setting and use them elsewhere—what’s called transfer learning.
What’s new: DeepMind built a new AI system called IMPALA that simultaneously performs multiple tasks—in this case, playing 57 Atari games—and attempts to share learning between them. It showed signs of transferring what was learned from one game to another.
Why it matters: IMPALA was 10 times more data-efficient than a similar AI and achieved double the final score. That’s a promising sign that transfer learning is within reach. Plus, a system like this that learns using less processing power could help speed up training of other types of AI.
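To make the idea concrete: at its heart, multi-task learning means experience from any one task updates a single, shared set of parameters. The toy sketch below is not DeepMind’s code (IMPALA uses deep neural networks and a far more sophisticated actor-learner setup); it just illustrates the shared-parameter principle, with all task and reward definitions invented for the example.

```python
# Toy illustration of the shared-learning idea behind multi-task systems
# like IMPALA: experience from every task updates one common set of
# weights, so what's learned on one task can carry over to another.
import random

random.seed(0)  # make the toy run reproducible

class SharedPolicy:
    """A toy policy: one weight per action, shared across all tasks."""
    def __init__(self, n_actions):
        self.weights = [0.0] * n_actions

    def act(self):
        # Pick a highest-weighted action (random tie-break).
        best = max(self.weights)
        return random.choice(
            [a for a, w in enumerate(self.weights) if w == best]
        )

    def update(self, action, reward, lr=0.1):
        # Nudge the shared weights toward rewarding actions,
        # no matter which task produced the experience.
        self.weights[action] += lr * (reward - self.weights[action])

def play(task_reward_fn, policy, steps=100):
    # One "actor" gathering experience on a task and feeding
    # updates back into the shared policy.
    for _ in range(steps):
        a = policy.act()
        policy.update(a, task_reward_fn(a))

# Two toy "games" that happen to reward the same action: experience
# from either one improves performance on the other, because both
# write into the same weights.
policy = SharedPolicy(n_actions=4)
play(lambda a: 1.0 if a == 2 else 0.0, policy)  # task 1
play(lambda a: 1.0 if a == 2 else 0.0, policy)  # task 2
```

After training on both toy tasks, the policy reliably picks the rewarded action; the real trick in systems like IMPALA is doing this at scale across 57 genuinely different games without the tasks interfering with one another.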