
DeepMind’s new AI just beat top human pro-gamers at Starcraft II for the first time

January 25, 2019

DeepMind, a subsidiary of Alphabet that’s focused on cracking artificial intelligence, has announced a new landmark in that grand quest: beating humans at galactic warfare.
 
The news: AlphaStar, the company’s latest learning algorithm, defeated professional Starcraft II players for the first time, scoring 10 wins and one loss against the pros TLO and MaNa. In the popular real-time strategy game, players compete as one of three races, building structures and engaging in combat across a sprawling battlefield.
 
Practice, practice: AlphaStar learned to play within an environment called the AlphaStar League. A large neural network first observed replays of expert human games. It was then pitted against versions of itself, using a machine-learning technique called reinforcement learning to improve over time. Importantly, the program’s speed of action, and its view of the battlefield, were limited so that it didn’t have an unfair edge over humans.
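
The two-stage recipe described here, imitation learning on human replays followed by self-play reinforcement learning, can be sketched in miniature. The snippet below is only an illustration of that idea, not DeepMind’s code: the toy observations, placeholder reward, and tiny network are invented stand-ins, and the real AlphaStar League trains a whole population of agents at vastly larger scale.

```python
# A minimal sketch (not DeepMind's code) of the two-stage recipe the article
# describes: supervised imitation on human replays, then self-play
# reinforcement learning. The toy "game", data, and network are invented.
import copy
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS = 16, 4

policy = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# --- Stage 1: imitation learning from (observation, human action) pairs ---
replay_obs = torch.randn(512, OBS_DIM)                # stand-in for parsed replays
replay_actions = torch.randint(0, N_ACTIONS, (512,))  # stand-in for human choices
for _ in range(100):
    loss = nn.functional.cross_entropy(policy(replay_obs), replay_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Stage 2: self-play reinforcement learning (REINFORCE-style update) ---
def play_episode(agent, opponent, steps=20):
    """Toy self-play episode: both agents act on random observations;
    the outcome is a placeholder so the update rule has a reward signal."""
    logps = []
    for _ in range(steps):
        obs = torch.randn(OBS_DIM)
        dist = torch.distributions.Categorical(logits=agent(obs))
        action = dist.sample()
        logps.append(dist.log_prob(action))
        torch.distributions.Categorical(logits=opponent(obs)).sample()  # opponent move
    reward = 1.0 if torch.rand(1).item() > 0.5 else -1.0  # placeholder win/loss
    return torch.stack(logps), reward

for _ in range(50):
    opponent = copy.deepcopy(policy)       # play against a frozen past version of itself
    logps, reward = play_episode(policy, opponent)
    loss = -(logps * reward).sum()         # push up actions that led to a win
    opt.zero_grad()
    loss.backward()
    opt.step()
```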
 
Who gives a Zerg? AlphaStar had to display new kinds of intelligence in order to master the game. The techniques developed for playing the game could potentially prove useful in many practical situations where complex strategy is required: think trading or even military planning.
 
Higher score: Starcraft II is not only extremely complex; it is also a game of “imperfect information,” meaning players cannot always see what their opponents are up to. There is no single best strategy for playing, and it takes time for the results of a player’s actions to become clear, which makes it harder for an algorithm to learn through experience. DeepMind’s team used a specialized neural network architecture to address these issues.
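
One of those difficulties, the delay between an action and its payoff, is conventionally handled in reinforcement learning by propagating a game’s eventual outcome back to earlier decisions with a discount factor. Below is a minimal sketch of that standard bookkeeping; it is not AlphaStar’s training code, and the reward sequence is invented.

```python
# A minimal sketch of discounted-return credit assignment, the standard way
# reinforcement learning spreads a delayed outcome (such as winning a long
# match) back over the individual actions that produced it.
def discounted_returns(rewards, gamma=0.99):
    returns, running = [], 0.0
    for r in reversed(rewards):          # work backwards from the end of the game
        running = r + gamma * running
        returns.append(running)
    return list(reversed(returns))

# A long game where only the final step carries a reward (+1 for a win):
rewards = [0.0] * 99 + [1.0]
print(discounted_returns(rewards)[:3])   # early actions still receive discounted credit
```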
 
Game theory: DeepMind is most famous for developing the software that learned to beat the world’s best Go and chess players. But before that, the company developed several algorithms that learned to play simple Atari games. Playing video games is a neat way to measure progress in artificial intelligence, and to compare computers with humans. It is, however, also a very narrow test—AlphaStar, like its predecessors, can only do one task, albeit incredibly well.
 

