Artificial intelligence

Alpha Zero’s “Alien” Chess Shows the Power, and the Peculiarity, of AI

The latest advance from DeepMind behaves in a very surprising way. Expect other AI systems to be just as odd.
December 8, 2017

The latest AI program developed by DeepMind is not only brilliant and remarkably flexible—it’s also quite weird.

DeepMind published a paper this week describing a game-playing program it developed that proved capable of mastering chess and the Japanese game shogi, having already mastered the game of Go.

Demis Hassabis, the founder and CEO of DeepMind and an expert chess player himself, presented further details of the system, called Alpha Zero, at an AI conference in California on Thursday. The program often made moves that would seem unthinkable to a human chess player.

“It doesn’t play like a human, and it doesn’t play like a program,” Hassabis said at the Neural Information Processing Systems (NIPS) conference in Long Beach. “It plays in a third, almost alien, way.”

Besides demonstrating how brilliant machine-learning programs can be at a specific task, the result reveals that artificial intelligence can be quite different from the human kind. As AI becomes more commonplace, we may need to be conscious of such "alien" behavior.

Alpha Zero is a more general version of AlphaGo, the program developed by DeepMind to play the board game Go. In 24 hours, Alpha Zero taught itself to play chess well enough to beat one of the world's best chess programs.

What's also remarkable, though, Hassabis explained, is that it sometimes makes seemingly crazy sacrifices, such as offering up a bishop and queen to exploit a positional advantage that led to victory. Sacrifices of such high-value pieces are normally rare. In another case the program moved its queen to the corner of the board, a very bizarre move with a surprising positional value. "It's like chess from another dimension," Hassabis said.

Hassabis speculates that because Alpha Zero teaches itself, it benefits from not following the usual approach of assigning value to pieces and trying to minimize losses. “Maybe our conception of chess has been too limited,” he said. “It could be an important moment for chess. We can graft it into our own play.”

The game of chess has a long history in artificial intelligence. The best programs, developed and refined over decades, incorporate huge amounts of human intelligence. Although IBM's Deep Blue beat the world champion at the time, Garry Kasparov, in 1997, that program, like other conventional chess programs, required careful hand-programming.

The original AlphaGo, designed specifically for Go, was a big deal because it was capable of learning to play a game that is enormously complex and is difficult to teach, requiring an instinctive sense of board positions. AlphaGo mastered Go by ingesting thousands of example games and then practicing against another version of itself. It did this partially by training a large neural network using an approach known as reinforcement learning, which is modeled on the way animals seem to learn (see “Google’s AI Masters Go a Decade Earlier Than Expected”).
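To give a flavor of the self-play reinforcement learning the article describes, here is a deliberately tiny sketch, not DeepMind's actual method: a tabular learner that masters the game of Nim (one pile of stones, take one to three per turn, taking the last stone wins) purely by playing against itself. Both sides share the same value table, so any improvement on one side is immediately tested by the other. Alpha Zero applies the same idea at vastly greater scale, using a deep neural network and tree search instead of a lookup table.

```python
import random
from collections import defaultdict

def train(pile_size=10, episodes=20000, alpha=0.1, epsilon=0.1, seed=0):
    """Learn Nim values by self-play: both players read and update
    the same table, so the policy trains against its own best self."""
    rng = random.Random(seed)
    q = defaultdict(float)  # (stones_left, move) -> estimated value
    for _ in range(episodes):
        pile = pile_size
        history = []  # (state, move) in the order they were played
        while pile > 0:
            moves = [m for m in (1, 2, 3) if m <= pile]
            if rng.random() < epsilon:   # occasional exploration
                move = rng.choice(moves)
            else:                        # otherwise play greedily
                move = max(moves, key=lambda m: q[(pile, m)])
            history.append((pile, move))
            pile -= move
        # Whoever made the final move won; walking backwards, the
        # reward alternates between winner (+1) and loser (-1).
        reward = 1.0
        for state, move in reversed(history):
            q[(state, move)] += alpha * (reward - q[(state, move)])
            reward = -reward
    return q

def best_move(q, pile):
    moves = [m for m in (1, 2, 3) if m <= pile]
    return max(moves, key=lambda m: q[(pile, m)])

q = train()
# Optimal Nim play leaves the opponent a multiple of 4 stones.
print(best_move(q, 10))
```

Notice that nothing here encodes Nim strategy; like Alpha Zero, the program is given only the rules and the game's outcome, and the "leave a multiple of four" principle emerges from self-play alone.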

DeepMind has since demonstrated a version of the program, called AlphaGo Zero, that learns without any example games, relying purely on self-play (see "AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help"). Alpha Zero goes further still by showing that a single program can master three different board games.

Alpha Zero’s achievements are impressive, but it still needs to play many more practice games than a human chess master. Hassabis says this may be because humans benefit from other forms of learning, such as reading about how to play the game and watching other people play.

Still, some experts caution that the program's capabilities, while remarkable, should be taken in context. Speaking after Hassabis, Gary Marcus, a professor at NYU, said that a great deal of human knowledge went into building Alpha Zero. And he suggested that human intelligence seems to involve some innate capabilities, such as an intuitive ability to develop language.

Josh Tenenbaum, a professor at MIT who studies human intelligence, said that if we want to develop real, human-level artificial intelligence, we should study the flexibility and creativity that humans exhibit. He pointed, among other examples, to the intelligence of Hassabis and his colleagues in devising, designing, and building the program in the first place. “That’s almost as impressive as a queen in the corner,” he quipped.
