
Could AlphaGo Bluff Its Way through Poker?

One of the brains behind Google’s Go-winning software says a similar learning approach makes it as good as a human expert at Texas hold ’em poker.
March 30, 2016

One of the scientists responsible for AlphaGo, the Google DeepMind software that recently trounced one of the world’s best Go players, says the same approach can produce a surprisingly competent poker bot.

Unlike board games such as Go or chess, poker is a game of “imperfect information,” since players cannot see one another’s cards, and for this reason it has proved even more resistant to computerization than Go.

Gameplay in poker involves devising a strategy based on the cards you have in your hand and a guess as to what’s in your opponents’ hands. Poker players try to read the behavior of others at the table using a combination of statistics and more subtle behavioral cues.


Because of this, building an effective poker bot through machine learning could prove significant for real-world applications of AI. Poker is also closely tied to game theory, which deals with situations involving negotiation and cooperation.

Although Go is incredibly complex and its strategic principles cannot be encoded easily, AlphaGo could at least observe the entire state of the game on the board. AlphaGo used a combination of two AI techniques, deep reinforcement learning and tree search, to come up with winning Go moves. Deep reinforcement learning involves training a large neural network with positive and negative rewards, and tree search is a mathematical strategy for looking ahead in a game.
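To give a flavor of that reward-driven training, here is a minimal sketch of tabular Q-learning on a made-up five-state corridor. This toy stands in for the general idea only: AlphaGo replaces the lookup table with a deep neural network and combines the result with tree search.

```python
import random

# Tabular Q-learning on a five-state corridor: reach the right end for a
# +1 reward, fall off the left end for -1. The same positive/negative
# reward signal drives the updates in deep reinforcement learning, where
# a large neural network takes the place of this lookup table.

N_STATES = 5          # states 0..4; 0 and 4 are terminal
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for _ in range(2000):                      # training episodes
    s = 2                                  # start in the middle
    while s not in (0, N_STATES - 1):
        if random.random() < EPSILON:      # explore occasionally...
            a = random.choice(ACTIONS)
        else:                              # ...otherwise act greedily
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = s + a
        r = 1.0 if s2 == N_STATES - 1 else (-1.0 if s2 == 0 else 0.0)
        done = s2 in (0, N_STATES - 1)
        target = r if done else r + GAMMA * max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (target - q[(s, a)])  # reward-driven update
        s = s2

# The learned greedy policy steps right (+1) from every interior state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in (1, 2, 3)}
print(policy)  # {1: 1, 2: 1, 3: 1}
```

After training, the agent always heads toward the positive reward, even though nothing told it the rule directly; it was inferred from rewards alone.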

David Silver, the lead researcher behind AlphaGo and a lecturer at University College London, posted a paper earlier this month describing efforts to build a poker bot using similar techniques.

Together with Johannes Heinrich, a research student at UCL, Silver used deep reinforcement learning to produce an effective playing strategy in both Leduc, a simplified version of poker involving a deck of just six cards, and Texas hold ’em, the most popular form of the game. With Leduc, the software reached a Nash equilibrium, meaning an optimal, unexploitable strategy as defined by game theory. In Texas hold ’em, it achieved the performance of an expert human player.
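Silver and Heinrich’s method builds on fictitious self-play, in which each player repeatedly best-responds to the other’s average strategy. A minimal sketch of classical fictitious play on rock-paper-scissors, a toy zero-sum game (not Leduc or hold ’em), illustrates the convergence to a Nash equilibrium that the work relies on:

```python
import numpy as np

# Classical fictitious play on rock-paper-scissors. Each round, both
# players best-respond to the opponent's empirical (time-averaged) mix of
# moves; in zero-sum games those averages converge to a Nash equilibrium.

# Payoff matrix for the row player: rows/cols are rock, paper, scissors.
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

counts1 = np.ones(3)  # how often each player has played each move
counts2 = np.ones(3)  # (start at 1 so the first mix is uniform)

for _ in range(50_000):
    a1 = np.argmax(A @ (counts2 / counts2.sum()))     # best response to player 2's mix
    a2 = np.argmax(-A.T @ (counts1 / counts1.sum()))  # player 2's payoffs are -A
    counts1[a1] += 1
    counts2[a2] += 1

print(counts1 / counts1.sum())  # close to [1/3, 1/3, 1/3], the Nash equilibrium
```

Here the equilibrium is simply to play each move a third of the time; in Leduc and hold ’em the equilibria are far more intricate, which is why a neural network is needed to approximate the best responses.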

Meanwhile, a team of researchers at the University of Oxford and Google DeepMind has turned its attention to two fantasy-inspired card games—Magic: The Gathering and Hearthstone.

These games involve playing cards representing different spells, weapons, or creatures against opponents. This work is much more preliminary: it simply involved training a neural network to interpret the information shown on each card, which may be either structured, such as a particular color or number, or unstructured, such as text describing what happens when the card is played.
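To give a sense of what interpreting mixed card information might involve, here is a hypothetical sketch (not the researchers’ actual architecture, and with made-up card fields) that encodes a card’s structured attributes and free text as one numeric vector a neural network could consume:

```python
import numpy as np

# Hypothetical encoding of a game card's mixed information: structured
# fields (color, cost) become one-hot/numeric features, and the free text
# becomes a bag-of-words count vector. The pieces are concatenated into a
# single feature vector.

COLORS = ["red", "green", "blue"]
VOCAB = ["deal", "damage", "draw", "card", "destroy"]

def encode_card(color: str, cost: int, text: str) -> np.ndarray:
    color_vec = np.array([1.0 if color == c else 0.0 for c in COLORS])
    cost_vec = np.array([float(cost)])
    words = text.lower().split()
    text_vec = np.array([float(words.count(w)) for w in VOCAB])
    return np.concatenate([color_vec, cost_vec, text_vec])

v = encode_card("red", 2, "Deal 3 damage")
print(v)  # [1. 0. 0. 2. 1. 1. 0. 0. 0.]
```

Real systems would use learned word embeddings rather than word counts, but the core task is the same: mapping heterogeneous card information into a form a network can learn from.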

Even so, Google’s AI team clearly isn’t finished with building superhuman game bots.

(Read more: Kotaku, The Guardian, “Five Lessons from AlphaGo’s Historic Victory”)
