
DeepMind’s AI has used teamwork to beat humans at a first-person shooter

Capture the Flag game (Image: DeepMind)

Deep-learning algorithms have already mastered games like StarCraft well enough to beat humans, and now they have shown they can team up to beat us too.

The news: In a paper published in Science yesterday, DeepMind showed how it had let AI programs loose in a modified version of the 3D first-person video game Quake III Arena. The team used an algorithm called “For the Win,” which trains a host of agents in parallel using reinforcement learning, the technique that lets AI learn which tactics work and which do not (and that famously enabled DeepMind’s AI to win at Go). This time, AI agents were trained on around 450,000 games of Capture the Flag, the classic game that involves snatching a flag from your opponent’s base while protecting your own.
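To make the idea concrete, here is a toy sketch of what "training a host of agents in parallel using reinforcement learning" can look like. This is not DeepMind's code: the real FTW agents are deep recurrent neural networks that learn from raw pixels and shape their own internal rewards, none of which is reproduced here. The Agent class, the play_match function, and the hidden payoff values are all invented for illustration; only the overall pattern (a population of agents, random 2v2 matchmaking, updates driven by the win/loss signal) reflects the description above.

```python
# Toy sketch of population-based reinforcement learning in the spirit of FTW.
# A population of simple agents plays randomly assembled 2v2 matches, and each
# agent nudges its action preferences toward whatever led to a win.
import random

NUM_ACTIONS = 4          # stand-in for the game's controls
POPULATION_SIZE = 8      # FTW trains a whole population, not a single agent
LEARNING_RATE = 0.1
NUM_MATCHES = 5000

class Agent:
    """A tiny policy: positive preference weights over actions."""
    def __init__(self):
        self.prefs = [random.random() for _ in range(NUM_ACTIONS)]

    def act(self):
        # Sample an action in proportion to its preference weight.
        total = sum(self.prefs)
        r = random.uniform(0, total)
        cum = 0.0
        for action, weight in enumerate(self.prefs):
            cum += weight
            if r <= cum:
                return action
        return NUM_ACTIONS - 1

    def update(self, action, reward):
        # Reinforcement-style update: strengthen actions that preceded a win,
        # weaken those that preceded a loss (clipped so weights stay positive).
        self.prefs[action] = max(0.01, self.prefs[action] + LEARNING_RATE * reward)

def play_match(team_a, team_b):
    """Toy stand-in for a Capture the Flag match: the team whose sampled
    actions score higher on a hidden payoff table wins. Rewards are +1/-1."""
    payoff = [0.1, 0.4, 0.2, 0.3]   # hidden per-action value, unknown to agents
    actions_a = [agent.act() for agent in team_a]
    actions_b = [agent.act() for agent in team_b]
    reward_a = 1.0 if sum(payoff[a] for a in actions_a) > sum(payoff[b] for b in actions_b) else -1.0
    for agent, action in zip(team_a, actions_a):
        agent.update(action, reward_a)
    for agent, action in zip(team_b, actions_b):
        agent.update(action, -reward_a)

population = [Agent() for _ in range(POPULATION_SIZE)]
for _ in range(NUM_MATCHES):
    # Random matchmaking: teammates and opponents are drawn from the population,
    # mirroring how FTW agents were mixed into different teams each game.
    a1, a2, b1, b2 = random.sample(population, 4)
    play_match([a1, a2], [b1, b2])
```

The point of the sketch is the training signal: agents never see an explicit "cooperate" instruction, only whether their team won, which is the same sparse feedback the FTW agents had to learn from over hundreds of thousands of games.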

Each agent could only see a first-person view of the maze-like structure, just as a human player would. The AI agents were mixed up in teams with 40 human players and randomly matched in games—both as opponents and as teammates. To make it even harder, the maps were procedurally generated, meaning no two were the same.
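"Procedurally generated" simply means each match gets a freshly built random layout rather than a fixed, memorizable level. The generator below is a hypothetical toy, not the one used in the paper (which produced both indoor maze-like maps and open outdoor maps); it only illustrates why agents cannot succeed by memorizing a single map.

```python
# Toy illustration of procedural map generation: a different random grid every
# time, with two flag bases. Not the generator used in the Science paper.
import random

def generate_map(width=11, height=11, wall_density=0.25, seed=None):
    """Return a random grid map: '#' walls, '.' floor, 'A'/'B' flag bases."""
    rng = random.Random(seed)
    grid = [['#' if rng.random() < wall_density else '.'
             for _ in range(width)]
            for _ in range(height)]
    # Keep the two base corners open and place the flags there.
    # (A real generator would also verify a path exists between the bases.)
    grid[1][1] = 'A'
    grid[height - 2][width - 2] = 'B'
    return ["".join(row) for row in grid]

# Each seed yields a different layout, so no two training maps are the same.
for line in generate_map(seed=7):
    print(line)
```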

How to win: The AI agent teams consistently outperformed the human pairings and developed teamwork strategies to help them win, including following teammates to outnumber opponents at key moments and waiting near the enemy base to grab a new flag when it appeared. DeepMind has released a new video of the agents in action.

There’s no (A)I in team: The work (which was first posted to the arXiv preprint server last year, before peer review) is interesting because it’s hard to get AI to cooperate: cooperation involves so many variables, and all the AI agents are learning independently. There’s the prospect that something like this could help robots operate more effectively in the real world, both with each other and with humans.

However, we must be careful not to extrapolate too much. The game was very narrowly defined, and it’s likely the same system couldn’t just transfer to another scenario—never mind real life. In any case, the AI agents were not really collaborating (at least not in the way that humans do, by communicating), Georgia Tech’s Mark Riedl told the New York Times.

For more on the world of AI, sign up here for our weekly AI newsletter, The Algorithm.
