Artificial intelligence

DeepMind’s new AI just beat top human pro-gamers at Starcraft II for the first time

January 25, 2019

DeepMind, a subsidiary of Alphabet that’s focused on cracking artificial intelligence, has announced a new landmark in that grand quest: beating humans at galactic warfare.
 
The news: AlphaStar, the company’s latest learning algorithm, defeated professional Starcraft II players for the first time, scoring 10 wins and one loss against two pros, known as TLO and MaNa. The popular real-time strategy game has players compete as one of three races, building structures and engaging in combat across a sprawling battlefield.
 
Practice, practice: AlphaStar learned to play within an environment called the AlphaStar League. A large neural network first observed replays of expert human games. It was then pitted against versions of itself, using a machine-learning technique called reinforcement learning to improve over time. Importantly, the program’s speed of action, and its view of the battlefield, were limited so that it didn’t have an unfair edge over humans.
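The self-play setup can be illustrated with a toy sketch (this is a hypothetical, stripped-down illustration, not DeepMind's actual training code, which uses large neural networks): a learner repeatedly plays an invented two-strategy game against frozen snapshots of its past selves, nudging its strategy parameter up after wins and down after losses.

```python
import random

def play(a, b):
    """Toy payoff: strategy 'rush' (0) beats 'macro' (1) 90% of the
    time; mirror matchups are a coin flip. Returns True if a wins."""
    if a == b:
        return random.random() < 0.5
    return random.random() < (0.9 if a == 0 else 0.1)

def train_selfplay(rounds=3000, lr=0.02, snapshot_every=300, seed=0):
    """Self-play against a league of frozen past versions of the learner."""
    random.seed(seed)
    theta = 0.5                # learner's probability of playing 'rush'
    league = [theta]           # frozen snapshots: a toy "league" of past selves
    for t in range(1, rounds + 1):
        opp = random.choice(league)          # sample an opponent from the league
        a = 0 if random.random() < theta else 1
        b = 0 if random.random() < opp else 1
        won = play(a, b)
        # Reinforcement signal: move theta toward the chosen action on a
        # win, away from it on a loss.
        direction = 1.0 if a == 0 else -1.0
        theta += lr * direction * (1.0 if won else -1.0)
        theta = min(0.99, max(0.01, theta))  # keep it a valid probability
        if t % snapshot_every == 0:
            league.append(theta)             # freeze a copy into the league
    return theta

theta = train_selfplay()
# Because 'rush' dominates in this toy game, theta drifts toward 1.
```

Playing against a pool of past versions, rather than only the latest one, keeps the learner from forgetting how to beat older strategies.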
 
Who gives a Zerg? AlphaStar had to display new kinds of intelligence in order to master the game. The techniques developed for playing the game could potentially prove useful in many practical situations where complex strategy is required: think trading or even military planning.
 
Higher score: Starcraft II is not only extremely complex. It is also a game of “imperfect information,” meaning players cannot always see what their opponents are up to. There is also no single best strategy for playing. And it takes time for the results of a player’s actions to become clear, making it harder for an algorithm to learn through experience. DeepMind’s team used a very specialized neural network architecture to address these issues.
 
Game theory: DeepMind is most famous for developing the software that learned to beat the world’s best Go and chess players. But before that, the company developed several algorithms that learned to play simple Atari games. Playing video games is a neat way to measure progress in artificial intelligence, and to compare computers with humans. It is, however, also a very narrow test—AlphaStar, like its predecessors, can only do one task, albeit incredibly well.
 
