
Don’t Despair if Google’s AI Beats the World’s Best Go Player

You’re still special: Google’s Go-playing AI might be capable of subtle tactical insights, but it’s a long way from truly intelligent.
March 8, 2016

We may be about to witness a remarkable demonstration of the advancing capabilities of AI programs. AlphaGo, a program developed by AI researchers at Google, is getting set to take on one of the world's strongest Go players, Lee Se-dol.

The contest will take place this week in South Korea, where Go is hugely popular. It will be fascinating because Go is such a complex and subtle game that many experts thought it would take years, if not decades, for computers to be able to compete with the best human players. Successful players must learn through years of practice to recognize promising moves, and they will often struggle to explain why a particular position seems promising.

And yet earlier this year, a team at Google DeepMind, a subsidiary created when Google acquired a British AI company in 2014, published details of a computer program that was able to beat Fan Hui, the European Go champion and a professional player, in a match played behind closed doors.

Lee Se-dol (right), a legendary South Korean player of Go, poses with Google researcher Demis Hassabis before the Google DeepMind Challenge Match in Seoul.

Developing AlphaGo involved combining several simulated neural networks with other AI techniques so that the program could learn by studying thousands of previous games and could also practice against itself.
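The "practice against itself" idea can be shown in miniature. The sketch below is purely illustrative, not AlphaGo's method: AlphaGo combined deep neural networks with Monte Carlo tree search, whereas this toy uses tabular Q-learning via self-play on the simple game of Nim (players alternately take 1–3 stones; whoever takes the last stone wins). All names here are assumptions for the example.

```python
import random

# Toy illustration of self-play learning: tabular Q-learning on Nim.
# This is NOT AlphaGo's algorithm, only the self-play training loop in
# miniature, under the assumptions stated in the text above.

random.seed(0)

MAX_PILE = 10
ACTIONS = (1, 2, 3)
Q = {(s, a): 0.0 for s in range(1, MAX_PILE + 1) for a in ACTIONS if a <= s}

def legal(s):
    return [a for a in ACTIONS if a <= s]

def best(s):
    return max(legal(s), key=lambda a: Q[(s, a)])

ALPHA, EPSILON = 0.5, 0.3
for _ in range(20000):
    s = random.randint(1, MAX_PILE)   # start a fresh self-play game
    while s > 0:
        # epsilon-greedy: mostly play the best known move, sometimes explore
        a = random.choice(legal(s)) if random.random() < EPSILON else best(s)
        s_next = s - a
        if s_next == 0:
            target = 1.0              # taking the last stone wins
        else:
            # the opponent moves next, so our value is the negation of theirs
            target = -max(Q[(s_next, b)] for b in legal(s_next))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s_next
```

With no hand-coded strategy, the self-play loop rediscovers the classic Nim rule: leave your opponent a pile that is a multiple of four (for example, from a pile of 5 the learned policy takes 1 stone).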

If AlphaGo defeats Lee Se-dol, the feat will probably be heralded as a sad moment for humankind and another sign that computers could soon start encroaching on more human turf by mastering other skills that we have long considered beyond automation.

That may be true to some extent, but don't panic just yet. As subtle as it is, Go is still a very narrow area of expertise, and its rules are tightly constrained. What's more, AlphaGo cannot do anything else (even if the techniques used to build it could be applied to other board games). Some argue that a better way to gauge progress toward general AI is to ask computers to take on much broader and more complex challenges, like passing an elementary science exam. And thankfully, that's the sort of thing that AI programs are still pretty terrible at.

(Sources: Nature, "Google's AI Masters the Game of Go a Decade Earlier Than Expected")
