Artificial intelligence

DeepMind’s Groundbreaking AlphaGo Zero AI Is Now a Versatile Gamer

December 6, 2017

Don’t challenge this algorithm to a board game: chances are it can learn to outsmart you inside a day.

Earlier this year, we reported that Alphabet’s machine-learning subsidiary, DeepMind, had made a huge advance. Using an artificial-intelligence approach known as reinforcement learning, it had enabled its AlphaGo software to develop superhuman skill at the game of Go without any human data. Armed with just the rules of the game, the AI made random plays until it developed champion-beating strategies. The new software was dubbed AlphaGo Zero because it didn’t need any human input.
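To make the self-play idea concrete, here is a minimal, hypothetical sketch, not DeepMind’s code, of learning a game from zero human data. It uses a tiny tabular value function and epsilon-greedy play on tic-tac-toe; AlphaGo Zero itself pairs a deep neural network with Monte Carlo tree search, but the core loop is the same: play against yourself, observe the outcome, update your estimates. All names and parameters here (choose, self_play_game, EPSILON, ALPHA) are invented for illustration.

```python
# Toy self-play reinforcement learning on tic-tac-toe (illustrative only).
import random
from collections import defaultdict

# The eight winning lines on a 3x3 board, indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

# Value of each board state from X's point of view, learned from scratch.
values = defaultdict(float)
EPSILON, ALPHA = 0.1, 0.5   # exploration rate and learning rate (illustrative)

def choose(board, player):
    """Mostly pick the move with the best learned value; sometimes explore."""
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < EPSILON:
        return random.choice(moves)
    def score(m):
        nxt = board[:m] + player + board[m + 1:]
        return values[nxt] if player == "X" else -values[nxt]
    return max(moves, key=score)

def self_play_game():
    """Play one game against itself; nudge visited states toward the result."""
    board, player, history = "." * 9, "X", []
    while True:
        move = choose(board, player)
        board = board[:move] + player + board[move + 1:]
        history.append(board)
        win = winner(board)
        if win or "." not in board:
            reward = 1.0 if win == "X" else -1.0 if win == "O" else 0.0
            for state in history:
                values[state] += ALPHA * (reward - values[state])
            return
        player = "O" if player == "X" else "X"

# Start from random play and improve purely through self-play.
for _ in range(20000):
    self_play_game()
print("learned value estimates for", len(values), "positions")
```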

Now, in a paper published on arXiv, the DeepMind team reports that the software has been generalized so that it can learn other games. It describes two new examples in which AlphaGo Zero was unleashed on the games of chess and shogi, a Japanese game that’s similar to chess. In both cases the software was able to develop superhuman skills within 24 hours, and then “convincingly defeated a world-champion program.”

It’s perhaps not too surprising that the AI was able to pick up killer skills for the two games so quickly: both chess and shogi are less complex than Go. But DeepMind’s ability to generalize the software, so that it can master different games, hints at increasingly adaptable kinds of machine intelligence.

That said, there are still games that AI hasn’t mastered. Perhaps the biggest challenge, and one DeepMind is already working on, lies in massively complex online strategy games like StarCraft, where humans remain superior. As we’ve explained in the past, machines will need to develop new skills, such as memory and planning, in order to steal away that crown. But don’t expect it to take too long.
