Google’s AI Masters the Game of Go a Decade Earlier Than Expected

Google achieves one of the long-standing “grand challenges” of AI by building a computer that can beat expert players at the board game Go.
January 27, 2016

Google has taken a brilliant and unexpected step toward building an AI with more humanlike intuition, developing a computer capable of beating even expert human players at the fiendishly complicated board game Go.

The objective of Go, a game invented in China more than 2,500 years ago, is fairly simple: players alternately place black and white “stones” on a grid of 19 horizontal and 19 vertical lines, aiming to surround the opponent’s pieces while avoiding having their own surrounded. Mastering Go, however, requires endless practice, as well as a finely tuned knack for recognizing subtle patterns in the arrangement of the pieces spread across the board.

Google’s team has shown that the skills needed to master Go are not so uniquely human after all. Their computer program, called AlphaGo, beat the European Go champion, Fan Hui, five games to zero. And this March it will take on one of the world’s best players, Lee Sedol, in a five-game match to be held in Seoul, South Korea.

“Go is the most complex and beautiful game ever devised by humans,” Demis Hassabis, head of the Google team, and himself an avid Go player, said at a press briefing. By beating Fan Hui, he added, “our program achieved one of the long-standing grand challenges of AI.”

Hassabis also said the techniques used to create AlphaGo would lend themselves to his team’s effort to develop a general AI. “Ultimately we want to apply these techniques to important real-world problems,” he said. “Because the methods we used were general purpose, our hope is that one day they could be extended to help address some of society’s most pressing problems, from medical diagnostics to climate modeling” (see “Could AI Solve the World’s Biggest Problems?”).

Hassabis said the first way the technology might be applied at Google would involve the development of better software personal assistants. Such an assistant might learn a user’s preferences from their online behavior, and make more intuitive recommendations about products or events, he suggested.

Go is far more challenging for computers than, say, chess, for two reasons: the number of potential moves each turn is far higher, and there is no simple way to measure material advantage. A player must therefore learn to recognize abstract patterns formed by hundreds of pieces placed across the board. And even experts often struggle to explain why a particular position seems advantageous or problematic.
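
A back-of-the-envelope calculation makes the gap concrete. Using rough figures commonly cited for the two games (about 35 legal moves per turn over roughly 80 plies in chess, versus about 250 moves over roughly 150 turns in Go), a few lines of Python show why brute-force search is hopeless:

```python
# Back-of-the-envelope game-tree sizes, using commonly cited rough
# figures: ~35 legal moves per turn over ~80 plies for chess, versus
# ~250 moves over ~150 turns for Go. Not exact counts, just scale.

chess_positions = 35 ** 80
go_positions = 250 ** 150

print(f"chess: roughly 10^{len(str(chess_positions)) - 1} lines of play")
print(f"go:    roughly 10^{len(str(go_positions)) - 1} lines of play")
```

The script reports roughly 10^123 lines of play for chess and 10^359 for Go: even examining a billion positions per second, an exhaustive search of either tree would outlast the universe, and the Go figure exceeds the chess one by more than 200 orders of magnitude.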

Just a couple of years ago, in fact, most Go players and game programmers believed the game was so complex that it would take several decades before computers might reach the standard of a human expert player.

AlphaGo was developed by a team known as Google DeepMind, a group created after Google acquired a small U.K. AI startup called DeepMind in 2014. The researchers built AlphaGo by combining an extremely popular and successful machine-learning method known as deep learning with a technique for simulating potential moves. Deep learning involves training a large simulated neural network to respond to patterns in data. It has proven very useful for image and audio processing, and many large tech companies are exploring new ways to apply the technique.
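
To give a feel for what “training a network to respond to patterns in data” means, here is a deliberately tiny, self-contained sketch: a single artificial neuron (not AlphaGo’s architecture, which stacks millions of such units in many layers) that learns the logical-AND pattern by repeatedly nudging its weights to shrink its errors:

```python
import math

def neuron(weights, bias, inputs):
    """One artificial neuron: weighted sum squashed to a 0-1 output."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# The pattern to learn: output 1 only when both inputs are 1 (AND).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights, bias, rate = [0.0, 0.0], 0.0, 1.0
for _ in range(5000):                    # many passes of small corrections
    for inputs, target in data:
        out = neuron(weights, bias, inputs)
        # Nudge each weight in the direction that shrinks the error
        # (gradient descent on squared error through the sigmoid).
        grad = (out - target) * out * (1 - out)
        for i, x in enumerate(inputs):
            weights[i] -= rate * grad * x
        bias -= rate * grad

for inputs, target in data:
    print(inputs, "->", round(neuron(weights, bias, inputs), 2))
```

After training, the neuron’s outputs sit near 0 for the first three cases and near 1 for the last. AlphaGo’s networks work the same way in principle, but with vastly more weights and with whole board positions, rather than pairs of bits, as input.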

Two deep-learning networks were used in AlphaGo: one network learned to predict the next move, and the other learned to predict the outcome from different arrangements on the board. The two networks were combined using a more conventional AI algorithm that searches ahead through possible moves. A scientific paper by the Google researchers describing the work appears in the journal Nature today.
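
The Nature paper pairs the two networks with Monte Carlo tree search. The sketch below illustrates that division of labor but is not DeepMind’s code: it uses a simpler depth-limited look-ahead in place of AlphaGo’s actual search, and toy stand-ins for the board and the trained networks. The “policy” function narrows each turn’s choices to a few candidates, and the “value” function scores the resulting positions instead of playing games out to the end:

```python
# Illustrative sketch only -- not DeepMind's code. The demo "game" and
# both prediction functions below are hypothetical stand-ins.

def lookahead(state, legal_moves, play, policy, value, depth):
    """Score `state` for the side to move, in [-1, 1]."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return value(state)            # value net stands in for a full playout
    candidates = policy(state, moves)  # policy net prunes the move list
    # Negamax convention: the opponent's best outcome is our worst.
    return max(-lookahead(play(state, m), legal_moves, play, policy, value, depth - 1)
               for m in candidates)

# Toy demo: take 1-3 sticks per turn; taking the last stick wins.
def demo_legal(n):
    return [m for m in (1, 2, 3) if m <= n]

def demo_play(n, m):
    return n - m

def demo_policy(n, moves):             # "most promising moves" stand-in
    return moves[:2]

def demo_value(n):                     # no sticks left: side to move has lost
    return -1 if n == 0 else 0

print(lookahead(7, demo_legal, demo_play, demo_policy, demo_value, depth=7))
```

Pruning the move list with a learned predictor and cutting the search off with a learned evaluator is what keeps the look-ahead tractable; this is the sense in which, as Silver explains below, AlphaGo reduces the search space to something more manageable.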

“The game of Go has an enormous search space, which is intractable to brute-force search,” says David Silver, another Google researcher who led the effort. “The key to AlphaGo is to reduce that search space to something more manageable. This approach makes AlphaGo much more humanlike than previous approaches.”

When IBM’s Deep Blue computer mastered chess in 1997, it did so using hand-coded rules and by exhaustively searching through potential moves. AlphaGo, by contrast, essentially learned over time to recognize potentially advantageous patterns, and then simulated only a limited number of potential outcomes.

Google’s achievement has been met with congratulations and some astonishment from other researchers in the field.

“On the technical side, this work is a monumental contribution to AI,” says Ilya Sutskever, a leading AI researcher and the director of a new nonprofit called OpenAI (see “Innovators Under 35: Ilya Sutskever”). Sutskever says the work was especially important because AlphaGo essentially taught itself how to win. “The same technique can be used to achieve extremely high performance on many other games as well,” he says.

Michael Bowling, a professor of computer science at the University of Alberta in Canada who recently developed a program capable of beating anyone at heads-up limit poker, was also excited by the achievement. He believes that the approach should indeed prove useful in many areas where machine learning is applied. “A lot of what we would traditionally think of as human intelligence is built around pattern matching,” he says. “And a lot of what we would think of as learning is having seen these patterns in the past, and being able to realize how they connect to a current situation.”

One aspect of the result worth noting is that it combines deep learning with other techniques, says Gary Marcus, a professor of psychology at New York University and the cofounder and CEO of Geometric Intelligence, an AI startup that is also combining deep learning with other methods (see “Can This Man Make AI More Human?”).

“This is not a so-called end-to-end deep-learning system,” Marcus says. “It’s a carefully structured, modular system with some thoughtful hand-engineering on the front end. Which is, when you think about it, quite parallel to the human mind: rich, modular, with a bit of tweaking by evolution, rather than just a bunch of neurons randomly interconnected and tuned entirely by experience.”

Google isn’t the only company using deep learning to develop a Go-playing AI. Facebook has previously said that it has a researcher working on such a system, and last night both Yann LeCun, director of AI research at Facebook, and CEO Mark Zuckerberg posted updates on the effort. Facebook’s effort is at an earlier stage, but it also combines deep learning with another technique.

Seeing AI master Go may also lead to some existential angst. During the press briefing announcing the news, Hassabis was faced with questions about the long-term risks of the AI systems Google is developing. He said that the company was taking steps to mitigate those risks by collaborating with academics, by organizing conferences, and by working with an internal ethics board.
