
What AI Needs to Learn to Master Alien Warfare

AI agents need new ideas to compete in the popular strategy game StarCraft.
August 9, 2017
Gamers play StarCraft at Gamescom in Cologne, Germany, in 2015.

To learn how humans and AI systems can best live together, we may need to kill a whole lot of Zerg.

DeepMind, the AI-focused unit of Alphabet, and the games company Blizzard Entertainment are releasing a set of tools that will let programmers unleash all sorts of AI algorithms inside the space-themed game StarCraft.

The game is more challenging than most of those tackled by AI programs to date. Not only is StarCraft extremely complex, but it also requires planning far ahead and second-guessing what your opponent is up to. This means that developing AI programs capable of matching humans ought to help researchers explore new facets of humanlike intelligence with machines. Another potential benefit, according to those involved, will be exploring ways for humans and artificial agents to play together.

“StarCraft is interesting for many reasons,” says Oriol Vinyals, the DeepMind researcher who is leading the project. The fact that players often get only a glimpse of their opponents’ activities, for instance, means that algorithms will need to develop better ways of storing information in memory. “Memory is critical,” Vinyals says. “What you see now is not what you saw a while ago, and something specific that might have happened a minute ago might make you [want to] act differently.” 
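
A common way to give an agent this kind of memory is a recurrent policy, which carries a hidden state from one observation to the next. The sketch below, written in PyTorch, is purely illustrative; the network shape, dimensions, and names are assumptions made for the example, not a description of DeepMind’s actual architecture.

    import torch
    import torch.nn as nn

    class RecurrentPolicy(nn.Module):
        """Toy policy with an LSTM memory: the action it picks now can depend on
        observations it received long before the current game frame."""

        def __init__(self, obs_dim, num_actions, hidden_dim=128):
            super().__init__()
            self.encoder = nn.Linear(obs_dim, hidden_dim)
            self.memory = nn.LSTMCell(hidden_dim, hidden_dim)
            self.policy_head = nn.Linear(hidden_dim, num_actions)

        def forward(self, obs, state):
            h, c = state
            x = torch.relu(self.encoder(obs))
            h, c = self.memory(x, (h, c))     # fold the new observation into memory
            logits = self.policy_head(h)      # action scores depend on everything seen so far
            return logits, (h, c)

    # One decision step with a placeholder observation.
    policy = RecurrentPolicy(obs_dim=64, num_actions=10)
    state = (torch.zeros(1, 128), torch.zeros(1, 128))
    obs = torch.zeros(1, 64)
    logits, state = policy(obs, state)
    action = torch.distributions.Categorical(logits=logits).sample()

Because the hidden state persists between steps, an enemy unit glimpsed a minute ago can still shift the probabilities of the actions chosen now.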

DeepMind has built an impressive reputation by developing AI programs capable of playing various types of games with superhuman skill. The company began by conquering various Atari games, and more recently it took on the extremely complex and abstract board game Go (see “DeepMind’s AI Masters the Game of Go a Decade Earlier than Expected”).

To master these games DeepMind’s researchers used a machine-learning technique called reinforcement learning. Machine learning lets a computer figure out how to do something for itself, without requiring explicit instructions. Reinforcement learning, which is inspired by the way animals seem to learn, enables learning through experimentation with positive feedback (see “10 Breakthrough Technologies 2017: Reinforcement Learning”). However, Vinyals says applying reinforcement learning to StarCraft will be more difficult because it takes such a long time for each game to unfold. “An action I take now only has a consequence much later,” he says.
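
To see why delayed consequences make this hard, consider the simplest form of the credit-assignment problem. The sketch below is a minimal, illustrative tabular Monte Carlo update, not the method DeepMind uses: the only reward arrives when the game ends, and the discounted return propagates that outcome back to decisions made much earlier.

    GAMMA = 0.99   # discount factor: how strongly future outcomes count now
    ALPHA = 0.1    # learning rate
    values = {}    # (state, action) -> running estimate of long-term value

    def update_from_episode(trajectory, rewards):
        """trajectory: list of (state, action) pairs; rewards: reward after each step.
        Walk the episode backwards, computing the discounted return G for each step
        and nudging that step's value estimate toward it."""
        G = 0.0
        for (state, action), reward in zip(reversed(trajectory), reversed(rewards)):
            G = reward + GAMMA * G
            key = (state, action)
            old = values.get(key, 0.0)
            values[key] = old + ALPHA * (G - old)

    # A 10-step "game" where only the final step is rewarded, like a win/loss signal:
    episode = [(("frame", t), "expand") for t in range(10)]
    update_from_episode(episode, rewards=[0.0] * 9 + [1.0])
    print(values[(("frame", 0), "expand")])   # the very first move gets a small share of the credit

In a real StarCraft game the gap between an action and its consequence can span tens of thousands of frames, which is exactly what makes this credit assignment so much harder than in Atari games.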

Within StarCraft, players compete as one of three races: the humanlike Terrans, the cyborg Protoss, or the insectoid Zerg. Games involve complex strategic actions like mining resources and constructing bases, as well as protracted battle sequences. StarCraft is also among the most popular spectator e-sports; in South Korea especially, tournaments are often played in massive stadiums and shown live on television. Prominent players have welcomed the prospect of matching up against AI programs, but DeepMind hasn’t yet said when this might happen (see “StarCraft Pros Are Ready to Battle AI”).

The tools developed by DeepMind and Blizzard will make it much easier for AI researchers to deploy and test machine-learning algorithms inside StarCraft. The tools will provide AI agents with the same view of the game and interface that human players have. They also make it possible to limit the speed at which a program can execute its actions, which helps ensure that a program can’t win simply by issuing commands faster than any human could and instead has to rely on the same intellectual tools as a person does.
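
A rough sketch of what such a throttled loop might look like is below. All of the class and function names here are hypothetical, not the actual DeepMind/Blizzard interface; the point is the shape of the loop, in which one decision is held for several game frames so the agent’s effective actions-per-minute stays near human levels.

    import random

    class DummyEnv:
        """Stand-in for the game: a 1,000-frame 'match' that ends at the last frame."""
        def __init__(self):
            self.frame = 0
        def reset(self):
            self.frame = 0
            return {"frame": self.frame}          # the agent sees only what a player would
        def step(self, action):
            self.frame += 1
            return {"frame": self.frame}, self.frame >= 1000

    def run_episode(env, choose_action, step_mul=8):
        """Advance the game step_mul frames per agent decision, capping the action rate."""
        obs = env.reset()
        done, decisions = False, 0
        while not done:
            action = choose_action(obs)           # one decision...
            decisions += 1
            for _ in range(step_mul):             # ...held while the game runs on
                obs, done = env.step(action)
                if done:
                    break
        return decisions

    decisions = run_episode(DummyEnv(), lambda obs: random.choice(["move", "attack", "no_op"]))
    print(decisions)   # 125 decisions across 1,000 frames with step_mul=8

Raising or lowering step_mul in a setup like this is the knob that trades raw mechanical speed for the kind of strategic play researchers actually want to study.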

StarCraft has been used as a research platform for some time, but it has been relatively challenging to exploit. Vinyals, an expert StarCraft player himself, did pioneering work building bots for StarCraft as a student at the University of California, Berkeley (see “35 Innovators Under 35, 2016: Oriol Vinyals”). Teams at Facebook and the Chinese company Alibaba have also published StarCraft research. DeepMind is publishing a paper at a major machine-learning conference this week showing how existing algorithms perform in the game.

It’s possible that quite different approaches might be needed to master a game like StarCraft, however. Other researchers have made progress in games of “imperfect information,” where bluffing is important, using approaches drawn from game theory. Earlier this year, Tuomas Sandholm, a professor at Carnegie Mellon University, and one of his students, Noam Brown, built a program called Libratus that beat several professional players at heads-up (two-player) no-limit Texas hold’em. Libratus employed a very sophisticated algorithm to calculate the optimal strategy throughout a game (see “Why Poker Is a Big Deal for Artificial Intelligence”). And, coincidentally, Brown has been interning at DeepMind this summer.
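
For a flavor of what those game-theoretic approaches look like, the sketch below runs regret matching in self-play on rock-paper-scissors, a toy game of simultaneous moves. It is purely illustrative and vastly simpler than Libratus: each player shifts probability toward the actions it regrets not having played, and the players’ average strategies drift toward the equilibrium of playing each move a third of the time.

    import random

    ACTIONS = ["rock", "paper", "scissors"]

    def payoff(a, b):
        """+1 if a beats b, -1 if b beats a, 0 for a tie."""
        wins = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}
        return 1 if (a, b) in wins else (-1 if (b, a) in wins else 0)

    def strategy_from(regrets):
        """Play each action in proportion to its positive accumulated regret."""
        positive = [max(r, 0.0) for r in regrets]
        total = sum(positive)
        return [p / total for p in positive] if total > 0 else [1.0 / len(regrets)] * len(regrets)

    def train(iterations=50000):
        regrets = {"p1": [0.0] * 3, "p2": [0.0] * 3}
        strat_sum = {"p1": [0.0] * 3, "p2": [0.0] * 3}
        for _ in range(iterations):
            strat = {p: strategy_from(regrets[p]) for p in regrets}
            for p in strat:
                for i in range(3):
                    strat_sum[p][i] += strat[p][i]
            a1 = random.choices(range(3), weights=strat["p1"])[0]
            a2 = random.choices(range(3), weights=strat["p2"])[0]
            for p, mine, theirs in (("p1", a1, a2), ("p2", a2, a1)):
                actual = payoff(ACTIONS[mine], ACTIONS[theirs])
                for alt in range(3):
                    # regret = how much better an alternative action would have done
                    regrets[p][alt] += payoff(ACTIONS[alt], ACTIONS[theirs]) - actual
        return {p: [round(s / iterations, 2) for s in strat_sum[p]] for p in strat_sum}

    print(train())   # average strategies approach roughly [0.33, 0.33, 0.33] for both players

Scaling this basic idea up to a game with billions of decision points is what made Libratus such a heavy computational undertaking.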

Jacob Repp, a principal engineer at Blizzard, says his company is interested in seeing whether sophisticated AI agents could make the game more interesting, either by playing against people or by collaborating with them. It is already possible to create agents in the game that follow scripted commands. Repp says it would be interesting to have those agents use machine learning to some degree, too. And he says the company is exploring these sorts of ideas. “We’re finding that these tools are very useful for the process of making games and designing features in games,” he says.
