
DeepMind’s new AI just beat top human pro gamers at StarCraft II for the first time

January 25, 2019

DeepMind, a subsidiary of Alphabet that’s focused on cracking artificial intelligence, has announced a new landmark in that grand quest: beating humans at galactic warfare.
 
The news: AlphaStar, the company’s latest learning algorithm, defeated professional StarCraft II players for the first time, scoring 10 wins and one loss against two pros, TLO and MaNa. The popular real-time strategy game has players compete as one of three races, building structures and engaging in combat across a sprawling battlefield.
 
Practice, practice: AlphaStar learned to play within an environment called the AlphaStar League. A large neural network first observed replays of expert human games; it was then pitted against versions of itself, using a machine-learning technique called reinforcement learning to improve over time. Importantly, the program’s speed of action and its view of the battlefield were limited so that it didn’t have an unfair edge over humans.
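For a sense of what that two-stage pipeline looks like, here is a heavily simplified sketch in Python using PyTorch. The toy network, the play_match function, and all sizes are illustrative assumptions, not DeepMind’s actual implementation.

    # Toy sketch of AlphaStar-style training: imitation learning from human
    # replays, followed by self-play against a league of past versions.
    # Everything here (network sizes, play_match, data) is a stand-in.
    import copy
    import random
    import torch
    import torch.nn as nn

    OBS_DIM, N_ACTIONS = 32, 8   # toy observation/action sizes

    policy = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                           nn.Linear(64, N_ACTIONS))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    # Stage 1: supervised imitation -- predict the human's action in each state.
    def imitation_step(obs, human_actions):
        loss = nn.functional.cross_entropy(policy(obs), human_actions)
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: reinforcement learning against frozen past selves (the "league").
    league = [copy.deepcopy(policy)]

    def self_play_step(play_match, gamma=0.99):
        opponent = random.choice(league)            # sample a past version
        # play_match (assumed) returns [(obs, action), ...] and a +1/-1 outcome
        trajectory, outcome = play_match(policy, opponent)
        loss, ret = 0.0, float(outcome)
        for obs, action in reversed(trajectory):    # REINFORCE-style update
            logp = torch.log_softmax(policy(obs), dim=-1)[action]
            loss = loss - logp * ret
            ret *= gamma                            # discount earlier steps
        opt.zero_grad(); loss.backward(); opt.step()
        league.append(copy.deepcopy(policy))        # the league keeps growing

The real system is vastly larger and uses more sophisticated learning rules, but the shape is the same: imitate humans first, then keep improving through self-play against a growing league of past versions.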
 
Who gives a Zerg? AlphaStar had to display new kinds of intelligence to master the game. The techniques developed for playing it could prove useful in many practical situations where complex strategy is required: think trading or even military planning.
 
Higher score: StarCraft II is not only extremely complex. It is also a game of “imperfect information,” meaning players cannot always see what their opponents are up to. There is also no single best strategy for playing. And it takes time for the results of a player’s actions to become clear, making it harder for an algorithm to learn through experience. DeepMind’s team used a specialized neural-network architecture to address these issues.
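A common way to cope with imperfect information is to give the network memory, so its decisions can draw on things the agent observed earlier but can no longer see. Below is a minimal recurrent-policy sketch in Python with PyTorch; the class and all dimensions are illustrative assumptions, and DeepMind’s published architecture is considerably more elaborate (it combines an LSTM core with transformer and pointer-network components).

    # Minimal recurrent policy: an LSTM carries memory across timesteps, so the
    # agent can act on information it observed earlier but can no longer see.
    import torch
    import torch.nn as nn

    class RecurrentPolicy(nn.Module):
        def __init__(self, obs_dim=32, hidden=64, n_actions=8):
            super().__init__()
            self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_actions)

        def forward(self, obs, state=None):
            # obs: (batch, time, obs_dim); state carries memory between calls
            out, state = self.lstm(obs, state)
            return self.head(out), state

    policy = RecurrentPolicy()
    obs_seq = torch.randn(1, 10, 32)   # one game, ten partial observations
    logits, memory = policy(obs_seq)   # pass memory into the next forward call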
 
Game theory: DeepMind is most famous for developing the software that learned to beat the world’s best Go and chess players. But before that, the company developed several algorithms that learned to play simple Atari games. Playing video games is a neat way to measure progress in artificial intelligence and to compare computers with humans. It is, however, also a very narrow test: AlphaStar, like its predecessors, can do only one task, albeit incredibly well.
 
