The latest results in a long-running contest of video-game-playing AIs reveal how hard it is for machines to master swarming insectoid Zerg or blitzing Protoss forces. They also show that even old-school approaches can still sometimes win out.
The AIIDE Starcraft Contest, held at Memorial University in Newfoundland, Canada, has been running since 2010. Participating teams submit bots that play an original version of Starcraft, a sprawling sci-fi-themed game, in a series of one-on-one showdowns.
Starcraftiness: Video games are generally useful in AI because they offer a constrained environment and a good way to quantify progress. The popular online strategy game Starcraft has emerged as an important benchmark for AI because it is extremely complicated: there are a vast number of possible states and a huge number of potential moves at every moment, and it can be hard to tell whether a strategy is a good one until much later in a battle.
Game theory: DeepMind, a subsidiary of Alphabet, famously used several hot machine-learning techniques to let computers master Atari video games and then the board games Go and chess. The company’s researchers are now working on programs capable of playing Starcraft II, a later version of the same game. They have released a platform that makes it easier to develop bots for the game and that limits the speed at which bots can play, to level the playing field with humans. (The Canadian contest gets some funding from DeepMind as well as Facebook.)
Old-school winner: The best-performing bot was SAIDA, developed by a team of researchers at Samsung in South Korea. Interestingly, the researchers say they hand-coded their bot to pursue a particular strategy depending on the opponent’s approach. The researchers are also working on a reinforcement-learning strategy (similar to the one that DeepMind used for Atari, Go, and chess), but it wasn’t ready for the contest.
Runners-up: Second and third place went to a team from Facebook and a group of researchers based in China, respectively. Both teams used more modern machine-learning techniques, and the Chinese researchers built their bot in just six weeks by adapting existing code.
Remaining challenges: Bots can beat some amateur players, but they are still nowhere near as good as experts, says Dave Churchill, who organized the contest with his colleague Richard Kelly. “Even within the pro scene, there are orders-of-magnitude difference between the top pros and the amateurs,” Churchill tells MIT Technology Review. “We aren’t even close.”