Artificial intelligence

A team of AI algorithms just crushed humans in a complex computer game

Algorithms capable of collaboration and teamwork can outmaneuver human teams.
June 25, 2018

Five different AI algorithms have teamed up to kick human butt in Dota 2, a popular strategy computer game.

Researchers at OpenAI, a nonprofit based in California, developed the algorithmic A-team, which they call OpenAI Five. Each algorithm uses a neural network to learn not only how to play the game but also how to cooperate with its AI teammates. In testing, the team has started defeating amateur Dota 2 players, OpenAI says.

This is an important and novel direction for AI, since algorithms typically operate independently. Approaches that help algorithms cooperate with each other could prove important for commercial uses of the technology. AI algorithms could, for instance, team up to outmaneuver opponents in online trading or ad bidding. Collaborative algorithms might also cooperate with humans.

OpenAI previously demonstrated an algorithm capable of competing against top humans at single-player Dota 2. The latest work builds on this, using similar algorithms modified to value both individual and team success. The algorithms do not communicate directly except through gameplay.
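The article doesn't spell out how "valuing both individual and team success" is implemented, but a common way to encode it is to blend each agent's own reward with the team's average reward through a single weighting coefficient. Below is a minimal Python sketch of that idea; the function name and the team_spirit parameter are illustrative assumptions, not OpenAI's published training code.

# Minimal sketch of a blended individual/team reward. The weighting
# coefficient (here called team_spirit) is an illustrative assumption.
def blended_rewards(individual_rewards, team_spirit=0.5):
    """Mix each agent's own reward with the team's mean reward.

    team_spirit = 0.0 -> purely selfish agents
    team_spirit = 1.0 -> agents optimize only the team average
    """
    team_mean = sum(individual_rewards) / len(individual_rewards)
    return [(1.0 - team_spirit) * r + team_spirit * team_mean
            for r in individual_rewards]

# Example: one agent earns all the reward; blending shares the credit.
print(blended_rewards([10.0, 0.0, 0.0, 0.0, 0.0]))
# -> [6.0, 1.0, 1.0, 1.0, 1.0]

Tuning a single coefficient like this lets the same training setup sweep from fully selfish to fully cooperative behavior, which is one way coordination can "emerge out of the incentives," as Brockman puts it.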

“What we’ve seen implies that coordination and collaboration can emerge very naturally out of the incentives,” says Greg Brockman, one of the founders of OpenAI, which aims to develop artificial intelligence openly and in a way that benefits humanity. He adds that the team has tried substituting a human player for one of the algorithms and found this to work very well. “He described himself as feeling very well supported,” Brockman says.

Dota 2 is a complex strategy game in which teams of five players compete to control a structure within a sprawling landscape. Players have different strengths, weaknesses, and roles, and the game involves collecting items and planning attacks, as well as engaging in real-time combat.

Pitting AI programs against computer games has become a familiar means of measuring progress. DeepMind, a subsidiary of Alphabet, famously developed a program capable of learning to play the notoriously complex and subtle board game Go with superhuman skill. A related program then taught itself from scratch to master Go and then chess simply by playing against itself.

The strategies required for Dota 2 are more defined than in chess or Go, but the game is still difficult to master. It is also challenging for a machine because it isn’t always possible to see what your opponents are up to, and because teamwork is required.

The OpenAI Five learn by playing against various versions of themselves. Over time, the programs developed strategies much like the ones humans use: figuring out how to acquire gold by “farming” it, for instance, and adopting a particular strategic role or “lane” within the game.
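The article doesn't describe the training loop, but "playing against various versions of themselves" typically means training against a mixture of the current policy and frozen snapshots of earlier ones, so the agents don't overfit to a single opponent. The following is a toy, self-contained sketch of that structure; the agent, the match, and the learning update are deliberately trivial stand-ins, not OpenAI's method.

import random

class ToyAgent:
    """Stand-in agent whose entire 'policy' is one skill number."""
    def __init__(self, skill=0.0):
        self.skill = skill

    def clone(self):
        # Frozen snapshot of the current parameters.
        return ToyAgent(self.skill)

def play_match(a, b):
    """Noisy match: the higher-skill agent wins more often."""
    return a.skill + random.gauss(0, 1) > b.skill + random.gauss(0, 1)

def self_play(iterations=500, snapshot_every=50, p_latest=0.8):
    agent = ToyAgent()
    snapshots = [agent.clone()]
    for step in range(1, iterations + 1):
        # Mostly play the current self; occasionally a past version.
        if random.random() < p_latest:
            opponent = agent
        else:
            opponent = random.choice(snapshots)
        if play_match(agent, opponent):
            agent.skill += 0.01  # crude stand-in for a learning update
        if step % snapshot_every == 0:
            snapshots.append(agent.clone())
    return agent

print(f"final skill after self-play: {self_play().skill:.2f}")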

AI experts say the achievement is significant. “Dota 2 is an extremely complicated game, so even beating strong amateurs is truly impressive,” says Noam Brown, a researcher at Carnegie Mellon University in Pittsburgh. “In particular, dealing with hidden information in a game as large as Dota 2 is a major challenge.”

Brown previously worked on an algorithm capable of playing poker, another imperfect-information game, with superhuman skill (see “Why poker is a big deal in AI”). If the OpenAI Five team can consistently beat humans, Brown says, that would be a major achievement in AI. However, he notes that given enough time, humans might be able to figure out weaknesses in the AI team’s playing style.

Other games could also push AI further, Brown says. “The next major challenge would be games involving communication, like Diplomacy or Settlers of Catan, where balancing between cooperation and competition is vital to success.”

 
