If you’ve ever played the card game Hanabi, you’ll understand when I say it’s unlike any other. It’s a collaborative game in which you have full view of everyone else’s hands but not your own.
To win the game, each player must give the others hints about their hands over a limited number of rounds to arrange all the cards in a specific order. It’s an intense exercise in strategy, inference, and cooperation. That’s why researchers at Google Brain and DeepMind think it’s the perfect game for AI to tackle next.
In a new paper, they argue that unlike the other games AI has mastered, such as chess, Go, and poker, Hanabi requires theory of mind and a higher level of reasoning. Theory of mind is about understanding the mental states of others—and understanding that they may not be the same as your own. It’s a foundational skill that humans use to operate efficiently in the world, and one that we usually pick up when we are very young.
Information in Hanabi is limited both by the number of hints afforded to the players in each game and by what can be communicated in each hint. As a result, an AI agent must also pick up implicit information from the other players’ actions to win the game—a challenge it hasn’t had to face before.
Additionally, it has to learn how to provide the maximum possible information in its own hints and actions to help the other players succeed. If an AI agent can successfully navigate such an imperfect-information environment, the researchers believe, it will be one step closer to cooperating effectively with humans.
These are all novel challenges for the research community and will require new algorithmic advancements that link together the work of several subfields of AI, including reinforcement learning, game theory, and emergent communication—the study of how communication arises between multiple AI agents in collaborative settings.
To confirm this hypothesis, the Google team tested all the current state-of-the-art reinforcement-learning algorithms and found that they perform poorly. In response, they released an open-source Hanabi environment to spur further work within the research community.
“As a researcher I have been fascinated by how AI agents can learn to communicate and cooperate with each other and ultimately also humans,” says Jakob Foerster, one of the paper’s coauthors. “Hanabi presents a unique opportunity for a grand challenge in this area.”