Skip to Content
Artificial intelligence

How DeepMind plans to stop AI from behaving badly

September 28, 2018

Researchers at the Alphabet subsidiary DeepMind have spelled out how they will ensure that AI is developed safely.

The guidelines aim to make certain that powerful systems capable of learning and figuring out their own solutions to problems don’t start to behave in unexpected and unwanted ways.

The big issues: The researchers say the key challenges are specifying the intended behavior of a system in a way that avoids unwanted consequences; making it robust even in the face of unpredictability; and providing assurances, or ways to override behavior if necessary.

Erratic behavior: This is a growing area of academic research. There are plenty of often amusing examples of machine-learning systems that have started behaving oddly. Take, for example, the AI agent that taught itself a rather bizarre way to rack up points in the game CoastRunners. The AI learned it could accumulate more points not by finishing a race, as was intended, but by hitting certain obstacles around the course instead (as in the gif above). DeepMind’s AI Safety team has also shown ways to have an AI agent shut itself off if it starts behaving in ways that might prove risky.

Far out: We shouldn’t worry unduly about AI systems becoming dangerously autonomous. In any case, there are far greater issues to worry about right now, including the bias that may lurk in AI algorithms or the fact that many machine-learning systems are difficult to understand.

Deep Dive

Artificial intelligence

The inside story of how ChatGPT was built from the people who made it

Exclusive conversations that take us behind the scenes of a cultural phenomenon.

AI is dreaming up drugs that no one has ever seen. Now we’ve got to see if they work.

AI automation throughout the drug development pipeline is opening up the possibility of faster, cheaper pharmaceuticals.

GPT-4 is bigger and better than ChatGPT—but OpenAI won’t say why

We got a first look at the much-anticipated big new language model from OpenAI. But this time how it works is even more deeply under wraps.

The original startup behind Stable Diffusion has launched a generative AI for video

Runway’s new model, called Gen-1, can change the visual style of existing videos and movies.

Stay connected

Illustration by Rose Wong

Get the latest updates from
MIT Technology Review

Discover special offers, top stories, upcoming events, and more.

Thank you for submitting your email!

Explore more newsletters

It looks like something went wrong.

We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at customer-service@technologyreview.com with a list of newsletters you’d like to receive.