Artificial intelligence

Alpha Zero’s “Alien” Chess Shows the Power, and the Peculiarity, of AI

The latest advance from DeepMind behaves in a very surprising way. Expect other AI systems to be just as odd.
December 8, 2017

The latest AI program developed by DeepMind is not only brilliant and remarkably flexible—it’s also quite weird.

DeepMind published a paper this week describing a game-playing program it developed that proved capable of mastering chess and the Japanese game shogi, having already mastered the game of Go.

Demis Hassabis, the founder and CEO of DeepMind and an expert chess player himself, presented further details of the system, called Alpha Zero, at an AI conference in California on Thursday. The program often made moves that would seem unthinkable to a human chess player.

“It doesn’t play like a human, and it doesn’t play like a program,” Hassabis said at the Neural Information Processing Systems (NIPS) conference in Long Beach. “It plays in a third, almost alien, way.”

Besides demonstrating how brilliant machine-learning programs can be at a specific task, this shows that artificial intelligence can be quite different from the human kind. As AI becomes more commonplace, we might need to become conscious of such "alien" behavior.

Alpha Zero is a more general version of AlphaGo, the program developed by DeepMind to play the board game Go. In 24 hours, Alpha Zero taught itself to play chess well enough to beat one of the best existing chess programs.

What’s also remarkable, though, Hassabis explained, is that it sometimes makes seemingly crazy sacrifices, like offering up a bishop and queen to exploit a positional advantage that led to victory. Human players rarely give up pieces of such high value. In another game, the program moved its queen to the corner of the board, a bizarre maneuver that turned out to have surprising positional value. “It’s like chess from another dimension,” Hassabis said.

Hassabis speculates that because Alpha Zero teaches itself, it benefits from not following the usual approach of assigning value to pieces and trying to minimize losses. “Maybe our conception of chess has been too limited,” he said. “It could be an important moment for chess. We can graft it into our own play.”

The game of chess has a long history in artificial intelligence. The best programs, developed and refined over decades, incorporate huge amounts of human intelligence. Although in 1997 IBM’s Deep Blue beat the world champion at the time, Garry Kasparov, that program, like other conventional chess programs, required careful hand-programming.

The original AlphaGo, designed specifically for Go, was a big deal because it was capable of learning to play a game that is enormously complex and is difficult to teach, requiring an instinctive sense of board positions. AlphaGo mastered Go by ingesting thousands of example games and then practicing against another version of itself. It did this partially by training a large neural network using an approach known as reinforcement learning, which is modeled on the way animals seem to learn (see “Google’s AI Masters Go a Decade Earlier Than Expected”).

DeepMind has since demonstrated a version of the program, called AlphaGo Zero, that learns without any example games, instead relying purely on self-play (see “AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help”). Alpha Zero goes a step further still, showing that a single program can master three different board games.

Alpha Zero’s achievements are impressive, but it still needs to play many more practice games than a human chess master. Hassabis says this may be because humans benefit from other forms of learning, such as reading about how to play the game and watching other people play.

Still, some experts caution that the program’s capabilities, while remarkable, should be taken in context. Speaking after Hassabis, Gary Marcus, a professor at NYU, said that a great deal of human knowledge went into building Alpha Zero. And he suggested that human intelligence seems to involve some innate capabilities, such as an intuitive ability to develop language.

Josh Tenenbaum, a professor at MIT who studies human intelligence, said that if we want to develop real, human-level artificial intelligence, we should study the flexibility and creativity that humans exhibit. He pointed, among other examples, to the intelligence of Hassabis and his colleagues in devising, designing, and building the program in the first place. “That’s almost as impressive as a queen in the corner,” he quipped.

