Five Lessons from AlphaGo’s Historic Victory

As Google’s computer crushed one of humanity’s best Go players, we learned a lot about the software’s inner workings, and what it means for AI.
March 18, 2016

AlphaGo handily beat 18-time world Go champion Lee Sedol 4-1, and in doing so taught us several interesting lessons about where AI research is today, and where it is headed.

There’s life in old AI approaches

One fascinating thing about AlphaGo is the unusual way it was designed. The software combined deep learning—the hottest AI technique out there today—with a much older, and far less fashionable, approach. Deep learning involves using very large simulated neural networks, and it usually eschews logic and symbol manipulation of the kind pioneered by the likes of Marvin Minsky and John McCarthy. But AlphaGo paired deep learning with tree search, a technique for game-playing programs pioneered by one of Minsky’s contemporaries and colleagues, Claude Shannon. Perhaps, then, we will increasingly see connectionist and symbolic AI coming together in the future.
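To make that pairing concrete, here is a minimal sketch in Python of the division of labor described above: a learned policy proposes moves, a learned value function scores positions, and a classic tree search ties the two together. The `policy_net` and `value_net` stubs and the toy game are illustrative stand-ins, not DeepMind’s code, and AlphaGo used deep neural networks and Monte Carlo tree search rather than the fixed-depth negamax shown here.

```python
import math
import random

def policy_net(state, moves):
    """Stand-in for a policy network: a prior probability per move.
    Uniform here; AlphaGo's policy was trained on expert games."""
    return {m: 1.0 / len(moves) for m in moves}

def value_net(state):
    """Stand-in for a value network: an estimate in [-1, 1] of how good
    `state` is for the player to move. Random noise here; AlphaGo's value
    network was trained by self-play (and a real system would score
    finished games exactly)."""
    return random.uniform(-1.0, 1.0)

def search(state, legal_moves, apply_move, depth):
    """Depth-limited negamax search guided by the two 'networks': the policy
    prunes the branching factor, and the value function stands in for
    playing each line out to the end of the game."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return value_net(state), None
    priors = policy_net(state, moves)
    candidates = sorted(moves, key=lambda m: -priors[m])[:3]  # expand only the top moves
    best_score, best_move = -math.inf, None
    for m in candidates:
        score, _ = search(apply_move(state, m), legal_moves, apply_move, depth - 1)
        if -score > best_score:  # the opponent's gain is our loss
            best_score, best_move = -score, m
    return best_score, best_move

# Toy demo: players alternately add 1, 2, or 3 to a running total, and the
# game ends at 10. Purely to exercise the search loop.
legal = lambda total: [] if total >= 10 else [1, 2, 3]
print(search(0, legal, lambda total, m: total + m, depth=4))
```

With trained networks in place of the stubs, and Monte Carlo rollouts in place of the fixed depth, this is roughly the shape of AlphaGo’s hybrid.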

Polanyi’s paradox isn’t a problem

The game of Go, in which players surround territory and capture each other’s stones across a large board, is a neat example of Polanyi’s famous paradox: “We know more than we can tell.”

Unlike chess, Go offers no straightforward guidelines for choosing a good move or measuring who is ahead, which is one reason the game has historically been so difficult for computers to play. Machine learning, in which a computer isn’t programmed in the conventional sense but instead builds its own procedure from examples, offers a way for computers to navigate Polanyi’s paradox. Plenty of things we do, like driving a car or recognizing a face, are similar in that we cannot fully articulate how we do them. Some economists have pointed to Polanyi’s paradox as a key obstacle to automating many jobs. And, as an article in the New York Times shows, some even see AlphaGo’s triumph as compelling evidence that computers will take over more tasks (and jobs) as machine learning is used ever more widely.
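As a toy illustration of that idea (not taken from the article), the snippet below fits a classifier from labeled examples instead of encoding any rule by hand. The data and labels are made up, and it assumes scikit-learn is installed; any learning algorithm would make the same point.

```python
# No rule is ever written down: the model induces one from examples.
from sklearn.tree import DecisionTreeClassifier

# Made-up examples: (game rating, hours studied) -> skill label.
X = [[900, 10], [1100, 50], [2300, 900], [2700, 1500]]
y = ["novice", "novice", "expert", "expert"]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[2000, 700]]))  # the learned rule generalizes to a new case
```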

AlphaGo isn’t really AI

Not so fast, though. Amazing as AlphaGo is, it’s still a long way from true intelligence. As AI expert and robotics entrepreneur Jean-Christophe Baillie points out, real intelligence will require not just more sophisticated learning but also things like embodiment and the ability to communicate. Indeed, driving a car on a busy city street or interacting with someone you recognize is far more complex than we tend to realize. So while machine learning may let computers take on more tasks, it will be a long time before they can take over everything people do.

AlphaGo is pretty inefficient

Compared with a human, AlphaGo learns quickly in one sense, consuming data on previous games and playing against itself at silicon speed. But it is far less efficient than a person at learning, in that it needs vastly more example games to pick up effective techniques. This sample inefficiency is one of the key problems with deep learning, and many researchers are trying to solve it by finding ways to learn either from new kinds of data or from less data altogether.

Commercialization isn’t obvious

The skills demonstrated by AlphaGo—subtle pattern recognition, planning, and decision making—are obviously important. But it’s less obvious how they might be turned into a commercially viable product. Demis Hassabis, the cofounder of Google DeepMind, has said that the techniques developed for AlphaGo could be used to build a personal assistant that learns its user’s preferences and habits more effectively than today’s software does. But human language is far more complex than a board game, and far harder to learn from. In other words, it might be tricky to apply AlphaGo’s specific skill set in the messy real world.

(Read more: New York Times, IEEE Spectrum, Nature, “The Missing Link of Artificial Intelligence,” “Can This Man Make AI More Human?”)
