Google’s AI Is Battering One of the World’s Top Go Players in Style

A Go-playing AI is thrashing the world’s best player, but what’s most interesting is how creatively and eccentrically it plays.
March 10, 2016

The game of Go is much loved by geeks for its simplicity and subtlety. So it’s a little tragic to see AlphaGo, an AI developed by the alpha geeks at Google DeepMind, go 2-0 up against one of the best Go players in human history, Lee Se-dol.

The second game in the best-of-five match not only demonstrated the program’s extraordinary strength as a Go player but also highlighted its ability to produce some surprisingly creative moves. These moves reflect the remarkable progress AI is making, as well as the gaps that remain.

AlphaGo’s match against Se-dol is reminiscent of the battle between IBM’s Deep Blue and Garry Kasparov, then the world chess champion, in 1997. But Go is far more challenging for computers than chess, for two reasons: the number of possible moves at each turn is far larger, and there is no simple way to measure material advantage.
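A rough back-of-the-envelope calculation shows the size of that gap. Using commonly cited averages (around 35 legal moves per position over roughly 80 plies in chess, versus around 250 moves over roughly 150 plies in Go; the exact figures vary by source), the short Python sketch below estimates how many game sequences each game allows.

```python
# Rough, commonly cited estimates only -- not exact values. With an average
# branching factor b and a typical game length of d plies, the number of
# possible game sequences is roughly b**d.
import math

games = {
    "chess": {"branching": 35, "plies": 80},    # ~35 legal moves, ~80 plies
    "go":    {"branching": 250, "plies": 150},  # ~250 legal moves, ~150 plies
}

for name, g in games.items():
    log10_size = g["plies"] * math.log10(g["branching"])
    print(f"{name}: roughly 10^{log10_size:.0f} possible game sequences")
```

The output makes the point plainly: Go’s game tree is hundreds of orders of magnitude larger than chess’s, which is why brute-force search alone cannot cope with it.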

It usually takes years of practice for accomplished Go players to appreciate why a particular board arrangement may be advantageous, and even then they may struggle to explain to a beginner why a position works or doesn’t. The fact that AlphaGo could also learn to recognize these patterns suggests that more subtle human skills could perhaps be automated sooner than we might expect.

AlphaGo’s brilliance comes from the way it is designed. It combines deep neural networks with a tree-search algorithm in a clever new way, enabling it to study records of previous games and then improve by playing against itself.
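To give a flavor of the search idea underneath that design, here is a toy Python sketch that picks moves in tic-tac-toe by running random playouts. This is the basic Monte Carlo rollout idea that AlphaGo refines with neural networks that suggest promising moves and judge who is ahead; it is not DeepMind’s code, just a minimal stand-in for the concept, and all the function names are illustrative.

```python
# Toy illustration (not DeepMind's code): choosing a move by Monte Carlo
# rollouts -- play each candidate move, finish the game with random moves
# many times, and keep the move with the best average result.
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows of a tic-tac-toe board
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def rollout(board, to_move, perspective):
    """Play random moves to the end; return 1, 0.5, or 0 from `perspective`'s view."""
    board = board[:]
    while True:
        w = winner(board)
        if w is not None:
            return 1.0 if w == perspective else 0.0
        moves = legal_moves(board)
        if not moves:
            return 0.5  # draw
        board[random.choice(moves)] = to_move
        to_move = "O" if to_move == "X" else "X"

def choose_move(board, player, n_rollouts=200):
    """Pick the move whose random playouts score best for `player`."""
    opponent = "O" if player == "X" else "X"
    best_move, best_score = None, -1.0
    for move in legal_moves(board):
        trial = board[:]
        trial[move] = player
        score = sum(rollout(trial, opponent, player)
                    for _ in range(n_rollouts)) / n_rollouts
        if score > best_score:
            best_move, best_score = move, score
    return best_move

if __name__ == "__main__":
    empty = ["."] * 9
    print("Opening move chosen by rollouts:", choose_move(empty, "X"))
```

In Go the move space is far too large for blind random playouts to work well; AlphaGo’s networks, trained first on human games and then on millions of games against itself, steer the search toward promising moves and evaluate positions directly, which is what makes the search tractable.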

Michael Redmond, an American professional Go player who provided commentary on the match, complimented AlphaGo for some creative and elegant early play. “There was a great beauty to the opening,” Redmond said after the game. “Based on what I had seen from its other games, AlphaGo was always strong in the end and middle game, but that was extended to the beginning game this time. It was a beautiful, innovative game.”

Even more interesting, however, was a moment when AlphaGo looked to have blundered midgame, only to demonstrate that its seemingly weak position would develop into dominance over the board.

“Today I really feel that AlphaGo played a near-perfect game,” a stunned and sad-looking Se-dol said after the match. “Yesterday I was surprised, but today it’s more than that—I am speechless. I admit that it was a very clear loss on my part. From the very beginning of the game I did not feel like there was a point that I was leading.”

(Sources: Google Blog, “Google’s AI Masters the Game of Go a Decade Earlier than Expected”)
