Don’t Despair if Google’s AI Beats the World’s Best Go Player

You’re still special: Google’s Go-playing AI might be capable of subtle tactical insights, but it’s a long way from truly intelligent.
March 8, 2016

We may be about to witness a remarkable demonstration of the advancing capabilities of AI programs. AlphaGo, a program developed by AI researchers at Google, is getting set to take on the world’s most successful Go player, Lee Se-dol.

The contest will take place this week in South Korea, where Go is hugely popular. It will be fascinating because Go is such a complex and subtle game that many experts thought it would take years, if not decades, for computers to be able to compete with the best human players. Successful players must learn through years of practice to recognize promising moves, and they will often struggle to explain why a particular position seems promising.

And yet earlier this year, a team at Google DeepMind, the subsidiary formed when Google acquired the British AI company DeepMind in 2014, published details of a computer program that had beaten Fan Hui, the European Go champion and a professional player, in a match played behind closed doors.

Lee Se-dol (right), a legendary South Korean player of Go, poses with Google DeepMind cofounder Demis Hassabis before the Google DeepMind Challenge Match in Seoul.

Developing AlphaGo involved combining several simulated neural networks with other AI techniques so that the program could learn by studying thousands of previous games and could also practice against itself.
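That broad recipe, first imitating human games and then improving through self-play, can be sketched in miniature. The toy below is purely illustrative (the three-move game, its win rule, and the update amounts are all invented for this sketch and bear no relation to AlphaGo's actual networks or training):

```python
import random

# A "policy" here is just a table of move -> preference weight
# for an invented three-move game. AlphaGo used deep neural
# networks instead; this is only the shape of the training loop.
policy = {"a": 1.0, "b": 1.0, "c": 1.0}

def choose(policy):
    """Sample a move in proportion to its current weight."""
    moves, weights = zip(*policy.items())
    return random.choices(moves, weights=weights)[0]

# Stage 1: supervised learning -- nudge weights toward moves
# seen in recorded (here, made-up) expert games.
expert_moves = ["a", "a", "b", "a", "c", "a"]
for move in expert_moves:
    policy[move] += 0.5

# Stage 2: self-play -- the policy plays itself and reinforces
# whichever move won under the toy rule "a beats b beats c beats a".
beats = {"a": "b", "b": "c", "c": "a"}
for _ in range(100):
    p1, p2 = choose(policy), choose(policy)
    if beats[p1] == p2:
        policy[p1] += 0.1
    elif beats[p2] == p1:
        policy[p2] += 0.1

print(max(policy, key=policy.get))
```

The real system layered Monte Carlo tree search on top of its learned networks at play time, but the two-stage learning loop above is the core idea the DeepMind team described.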

If AlphaGo defeats Lee, the feat will probably be heralded as a sad moment for humankind and another sign that computers could soon start encroaching on more human turf by mastering other skills that we have long considered beyond automation.

That may be true to some extent, but don't panic just yet. As subtle as it is, Go is still a very narrow area of expertise, and its rules are tightly constrained. What's more, AlphaGo cannot do anything else (even if the techniques used to build it could be applied to other board games). Some argue that a better way to gauge progress toward general AI is to ask computers to take on much broader and more complex challenges, like passing an elementary science exam. And thankfully, that's the sort of thing that AI programs are still pretty terrible at.

(Sources: Nature, “Google’s AI Masters the Game of Go a Decade Earlier Than Expected”)
