Meta has created an AI that can beat humans at an online version of Diplomacy, a popular strategy game in which seven players compete for control of Europe by moving pieces around on a map. Unlike other board games that AI has mastered, such as chess and Go, Diplomacy requires players to talk to each other—forming alliances, negotiating tactics—and to spot when others are bluffing.
The AI, called Cicero, ranked in the top 10% across 40 online games against 82 human players (who were not aware they were competing against a bot). In one eight-game tournament involving 21 players, Cicero came first. Meta described its work in a paper published in Science.
Learning to play Diplomacy is a big deal for several reasons. Not only does it involve multiple players, who make moves at the same time, but each turn is preceded by a brief negotiation in which players chat in pairs in an attempt to form alliances or gang up on rivals. After this round of negotiation, players then decide what pieces to move—and whether to honor or renege on a deal.
At each point in the game, Cicero models how the other players are likely to act based on the state of the board and its previous conversations with them. It then figures out how players can work together for mutual benefit and generates messages designed to achieve those aims.
To build Cicero, Meta married two different types of AI: a reinforcement learning model that figures out what moves to make, and a large language model that negotiates with other players.
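To make that two-part design concrete, here is a deliberately tiny sketch of the idea: form a belief about what another player will do, pick the move with the best expected payoff under that belief, then draft a message proposing the plan. Everything here is invented for illustration (the payoff table, the function names `predict_intents`, `choose_move`, and `draft_message`); Cicero's real components are large neural networks, not lookup tables.

```python
# Illustrative sketch only: Cicero's actual planner and dialogue model are
# large neural networks. All names and numbers here are hypothetical.

# Toy payoff table: (our move, ally's move) -> value to us.
PAYOFF = {
    ("support_ally", "attack_rival"): 3,   # coordinated attack works best
    ("support_ally", "hold"): 1,
    ("attack_rival", "attack_rival"): 2,
    ("attack_rival", "hold"): -1,          # unsupported attack bounces
}

def predict_intents(history):
    """Stand-in for the belief model: probability of the ally's next move."""
    # e.g. past conversations suggest the ally attacks 70% of the time
    return {"attack_rival": 0.7, "hold": 0.3}

def choose_move(beliefs):
    """Pick our move with the highest expected payoff under the beliefs."""
    our_moves = {ours for ours, _ in PAYOFF}
    def expected(move):
        return sum(p * PAYOFF[(move, theirs)] for theirs, p in beliefs.items())
    return max(our_moves, key=expected)

def draft_message(move):
    """Stand-in for the language model: propose the plan to the ally."""
    templates = {
        "support_ally": "I'll support your attack this turn.",
        "attack_rival": "Let's both move against them together.",
    }
    return templates[move]

beliefs = predict_intents(history=[])
move = choose_move(beliefs)
print(move, "->", draft_message(move))
```

With these toy numbers, supporting the ally has expected value 0.7 × 3 + 0.3 × 1 = 2.4, beating an unsupported attack (1.1), so the sketch both commits to the support move and generates a message designed to bring it about, mirroring the plan-then-talk loop described above.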
Cicero isn’t perfect. It still sent messages that contained errors, sometimes contradicting its own plans or making strategic blunders. But Meta claims that humans often chose to collaborate with it over other players.
And it’s significant because while games like chess or Go end with a winner and a loser, real-world problems typically do not have such straightforward resolutions. Finding trade-offs and workarounds is often more valuable than winning. Meta claims that Cicero is a step toward AI that can help with a range of complex problems requiring compromise, from planning routes around busy traffic to negotiating contracts.