
StarCraft Will Become the Next Big Playground for AI

Artificial intelligence will require key advances in order to play a video game filled with planning, guesswork, and bluffing.
November 4, 2016
The new version of StarCraft II includes a range of simplified outputs (shown on the left) to aid machine learning.

Teaching computers to play the board game Go is impressive, but if we really want to push the limits of machine intelligence, perhaps they’ll need to learn to rush a Zerg opponent or set a trap for a horde of invading Protoss ships.

StarCraft, a hugely popular space-fiction-themed strategy computer game, will soon be accessible to advanced AI players. Blizzard Entertainment, the company behind the game, and Google DeepMind, a subsidiary of Alphabet focused on developing general-purpose artificial intelligence, announced the move at a games conference today.

Teaching computers to play StarCraft II expertly would be a significant milestone in artificial-intelligence research. Within the game, players must build bases, mine resources, and attack their opponents’ outposts. Mastering such a complex and sprawling game takes finely honed skills, strategic acumen, and a good dose of cunning. The game is also relatively complex visually, and players often cannot see what their opponents are up to. It should therefore be an ideal place for computers to make the next big leap in mimicking human intelligence.

Modifications to StarCraft II, the latest version of the game, will be released in the first quarter of next year, making it possible for AI researchers to build systems that can use experimentation, observation, and other cutting-edge learning techniques to improve their play. For example, it might be necessary to use machine-learning systems that rely on delayed rewards and are capable of observing and mimicking human players.
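The article does not spell out what "delayed rewards" means in practice. The standard treatment in reinforcement learning is to discount: a reward that arrives only at the end of a game (a win or loss) is spread backwards over the actions that led to it. The function below is a generic illustration of that idea, not code from DeepMind or Blizzard.

```python
def discounted_returns(rewards, gamma=0.99):
    """Return-to-go G_t = r_t + gamma * G_{t+1} for each step.

    With a delayed reward, every reward is zero until the final step,
    so earlier actions receive exponentially discounted credit.
    """
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))


# A four-step episode where only the final step (winning the game) pays off.
print(discounted_returns([0.0, 0.0, 0.0, 1.0], gamma=0.5))
# [0.125, 0.25, 0.5, 1.0]
```

The discount factor `gamma` controls how far back in time credit reaches; a long StarCraft match, where a win may hinge on decisions made many minutes earlier, is exactly the regime where this kind of long-horizon credit assignment is hard.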

The new interface for StarCraft II will limit the capabilities of machine-learning systems, such as the number of commands they can execute per minute, so that they match those of human players. The interface will also present visually simplified versions of the game to aid with machine learning. Blizzard and DeepMind will release tools to get researchers up and running.
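The announced interface was not public at the time this article ran, so the following is only a hypothetical sketch of what a commands-per-minute cap means mechanically: the agent may issue an action only if fewer than a fixed number of actions fell inside the trailing 60-second window. The class and method names here are invented for illustration, not Blizzard's or DeepMind's API.

```python
from collections import deque


class ApmLimiter:
    """Hypothetical cap on an agent's actions per minute (APM),
    mirroring the human-parity restriction described in the article."""

    def __init__(self, max_apm, clock):
        self.max_apm = max_apm
        self.clock = clock     # callable returning the current time in seconds
        self.issued = deque()  # timestamps of actions in the last 60 seconds

    def try_issue(self, action):
        now = self.clock()
        # Discard timestamps that have fallen out of the 60-second window.
        while self.issued and now - self.issued[0] >= 60.0:
            self.issued.popleft()
        if len(self.issued) < self.max_apm:
            self.issued.append(now)
            return True   # action is allowed through to the game
        return False      # over budget: the agent must wait


# Usage with a fake clock so the behavior is deterministic.
t = [0.0]
limiter = ApmLimiter(max_apm=3, clock=lambda: t[0])

results = []
for _ in range(4):
    results.append(limiter.try_issue("move"))
    t[0] += 1.0           # one second passes between attempts
print(results)            # [True, True, True, False]

t[0] = 61.0               # a minute later, most of the window has cleared
print(limiter.try_issue("attack"))  # True
```

A sliding-window limiter like this is one plausible design; an actual interface might instead enforce a fixed per-step action budget tied to the game's simulation rate.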

StarCraft will be an exciting new challenge, says Oriol Vinyals, a research scientist at Google DeepMind who is leading the effort to make StarCraft II a playground for new AI algorithms. “It’s a game I played a long time ago in quite a serious way,” Vinyals says. “And as a player, I can attest that there are many interesting things about StarCraft. For instance, an agent will need to learn planning and utilize memory, which is a hot topic in machine learning.”

StarCraft is not only a hugely popular computer game, it is the most successful e-sport of all time, with players competing, often in front of large live and televised audiences, for significant prizes. The game already features primitive AIs, which lag far behind the skills of human players. Chris Sigaty, a production director at Blizzard who is responsible for StarCraft II, says he hopes the new venture will feed back into the game. “AIs could perhaps become coaches or teachers,” he says.

DeepMind gained attention several years ago for developing AIs capable of playing simple Atari computer games, and last year the company developed a program called AlphaGo that can play the ancient board game Go at an expert level. AlphaGo defeated one of the strongest human players of all time, which was considered a big milestone because it had previously proved impossible to program computers to play Go expertly using explicit rules (see “Google’s AI Masters Go a Decade Earlier Than Expected”).

There is growing interest in using computer games to develop and test AI programs. Researchers at Microsoft have turned a research version of the simple open-world game Minecraft into an experimental environment for AI development (“Minecraft is a Testing Ground for Human-AI Collaboration”). Separately, Facebook researchers published a paper describing how the original StarCraft can be used for machine-learning experimentation without the modifications announced today.


