Skip to Content

Google Gets Practical about the Dangers of AI

The company lays out five unsolved challenges that need to be addressed if smart machines such as domestic robots are to be safe.
June 22, 2016

Could machines become so intelligent and powerful they pose a threat to human life, or even humanity as a whole?

It’s a question that has become fashionable in some parts of Silicon Valley in recent years, despite being more or less irreconcilable with the simple robots and glitchy virtual assistants of today (see “AI Doomsayer Says His Ideas Are Catching On”). Some experts in artificial intelligence believe speculation about the dangers of future, super-intelligent software is harming the field.

Now Google, a company heavily invested in artificial intelligence, is trying to carve out a middle way. A new paper released today describes five problems that researchers should investigate to help make future smart software safer. In a blog post on the paper, Google researcher Chris Olah says they show how the debate over AI safety can be made more concrete and productive.

“Most previous discussion has been very hypothetical and speculative,” he writes. “We believe it’s essential to ground concerns in real machine-learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.”

Olah uses a cleaning robot to illustrate some of his five points.  One area of concern is in preventing systems from achieving their objectives by cheating. For example, the cleaning robot might discover it can satisfy its programming to clean up stains by hiding them instead of actually removing them.

Another of the problems posed is how to make robots able to explore new environments safely. For example, a cleaning robot should be able to experiment with new ways to use cleaning tools, but not try using a wet mop on an electrical outlet.

Olah describes the five problems in a new paper coauthored with Google colleague Dario Amodei, with contributions from others at Google, Stanford University, the University of California, Berkeley, and OpenAI, a research institute cofounded and partially funded by Tesla CEO and serial entrepreneur Elon Musk.

Musk, who once likened working on artificial intelligence to “summoning the demon,” made creating “safe AI” one of OpenAI’s founding goals (see “What Will It Take to Build a Virtuous AI?”).

Google has also spoken of a commitment to ensuring that artificial intelligence software doesn’t have unintended consequences. The company’s first research paper on the topic was released this month by its DeepMind group in London. DeepMind’s leader, Demis Hassabis, has also convened an ethics board to consider the possible downsides of AI, although its members have not been disclosed (see “How Google Plans to Solve Artificial Intelligence”).

Oren Etzioni, CEO of the Allen Institute for AI, welcomes the approach outlined in Google’s new paper. He has previously criticized discussions about the dangers of AI as being too vague for scientists or engineers to engage productively. But the scenarios laid out by Google are specific enough to allow real research, even if it’s still unclear whether such experiments will be practically useful, he says.

“It’s the right people asking the right questions,” says Etzioni. “As for the right answers—time will tell.”

Keep Reading

Most Popular

Scientists are finding signals of long covid in blood. They could lead to new treatments.

Faults in a certain part of the immune system might be at the root of some long covid cases, new research suggests.

Large language models can do jaw-dropping things. But nobody knows exactly why.

And that's a problem. Figuring it out is one of the biggest scientific puzzles of our time and a crucial step towards controlling more powerful future models.

OpenAI teases an amazing new generative video model called Sora

The firm is sharing Sora with a small group of safety testers but the rest of us will have to wait to learn more.

Google’s Gemini is now in everything. Here’s how you can try it out.

Gmail, Docs, and more will now come with Gemini baked in. But Europeans will have to wait before they can download the app.

Stay connected

Illustration by Rose Wong

Get the latest updates from
MIT Technology Review

Discover special offers, top stories, upcoming events, and more.

Thank you for submitting your email!

Explore more newsletters

It looks like something went wrong.

We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at customer-service@technologyreview.com with a list of newsletters you’d like to receive.