Skip to Content
Artificial intelligence

How DeepMind plans to stop AI from behaving badly

September 28, 2018

Researchers at the Alphabet subsidiary DeepMind have spelled out how they will ensure that AI is developed safely.

The guidelines aim to make certain that powerful systems capable of learning and figuring out their own solutions to problems don’t start to behave in unexpected and unwanted ways.

The big issues: The researchers say the key challenges are specifying the intended behavior of a system in a way that avoids unwanted consequences; making it robust even in the face of unpredictability; and providing assurances, or ways to override behavior if necessary.

Erratic behavior: This is a growing area of academic research. There are plenty of often amusing examples of machine-learning systems that have started behaving oddly. Take, for example, the AI agent that taught itself a rather bizarre way to rack up points in the game CoastRunners. The AI learned it could accumulate more points not by finishing a race, as was intended, but by hitting certain obstacles around the course instead (as in the gif above). DeepMind’s AI Safety team has also shown ways to have an AI agent shut itself off if it starts behaving in ways that might prove risky.

Far out: We shouldn’t worry unduly about AI systems becoming dangerously autonomous. In any case, there are far greater issues to worry about right now, including the bias that may lurk in AI algorithms or the fact that many machine-learning systems are difficult to understand.

Deep Dive

Artificial intelligence

Why Meta’s latest large language model survived only three days online

Galactica was supposed to help scientists. Instead, it mindlessly spat out biased and incorrect nonsense.

DeepMind’s game-playing AI has beaten a 50-year-old record in computer science

The new version of AlphaZero discovered a faster way to do matrix multiplication, a core problem in computing that affects thousands of everyday computer tasks.

Google’s new AI can hear a snippet of song—and then keep on playing

The technique, called AudioLM, generates naturalistic sounds without the need for human annotation.

Stay connected

Illustration by Rose Wong

Get the latest updates from
MIT Technology Review

Discover special offers, top stories, upcoming events, and more.

Thank you for submitting your email!

Explore more newsletters

It looks like something went wrong.

We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at customer-service@technologyreview.com with a list of newsletters you’d like to receive.