DeepMind’s latest AI transfers its learning to new tasks
By using insights from one job to help it do another, a successful new artificial intelligence hints at a more versatile future for machine learning.
Backstory: Most algorithms can be trained in only one domain and can't apply what they've learned from one task to a new one. A big hope for AI is systems that take insights from one setting and apply them elsewhere, a capability called transfer learning.
What's new: DeepMind built a new AI system called IMPALA that performs multiple tasks simultaneously—in this case, playing 57 Atari games—and attempts to share learning between them. It showed signs of transferring what it learned from one game to another.
Why it matters: IMPALA was 10 times more data-efficient than a comparable AI and achieved double its final score. That's a promising hint that transfer learning is achievable. Plus, a system that learns using less processing power could help speed up the training of other types of AI.