
An old-fashioned AI has won a Starcraft shootout

November 16, 2018

The latest results in a long-running contest of video-game-playing AIs reveal how hard it is for machines to master swarming insectoid Zergs or blitzing Protoss. They also show that even old-school approaches can still sometimes win out.

The AIIDE Starcraft Contest, held at Memorial University in Newfoundland, Canada, has been running since 2010. Participating teams submit bots that play an original version of Starcraft, a sprawling sci-fi-themed game, in a series of one-on-one showdowns.

Starcraftiness: Video games are generally useful in AI research because they offer a constrained environment and a clear way to quantify progress. The popular online strategy game Starcraft has emerged as an important benchmark for AI both because it is extremely complicated and because it is very difficult for machines to play well. There are a vast number of possible states and a huge number of potential moves at every moment. And it can be hard to tell whether a strategy is a good one until much later in a battle.

Game theory: DeepMind, a subsidiary of Alphabet, famously used several hot machine-learning techniques to let computers master Atari video games and then the board games Go and chess. The company’s researchers are working on programs capable of playing Starcraft II, a later version of the same game. They have released a platform that makes it easier to develop bots for the game and limits the speed at which they can play, to level the playing field with humans. (The Canadian contest also gets some funding from DeepMind and Facebook.)

Old-school winner: The best-performing bot was SAIDA, developed by a team of researchers at Samsung in South Korea. Interestingly, the researchers say they hand-coded their bot to pursue a particular strategy depending on the opponent’s approach. The researchers are also working on a reinforcement-learning strategy (similar to the one that DeepMind used for Atari, Go, and chess), but it wasn’t ready for the contest.

Runners-up: Second and third place went to a team from Facebook and a group of researchers based in China. Both those teams used more modern machine-learning techniques, and the Chinese researchers built their bot in just six weeks by adopting existing code.

Remaining challenges: Bots can beat some amateur players, but they are still nowhere near as good as experts, says Dave Churchill, who organized the contest with his colleague Richard Kelly. “Even within the pro scene, there are orders-of-magnitude difference between the top pros and the amateurs,” Churchill tells MIT Technology Review. “We aren’t even close.”
