
AI Doomsayer Says His Ideas Are Catching On

Philosopher Nick Bostrom says major tech companies are listening to his warnings about investing in “AI safety” research.
April 7, 2015

Over the past year, Oxford University philosophy professor Nick Bostrom has gained visibility for warning about the potential risks posed by more advanced forms of artificial intelligence. He now says that his warnings are earning the attention of companies pushing the boundaries of artificial intelligence research.

Nick Bostrom

Many people working on AI remain skeptical of or even hostile to Bostrom’s ideas. But since his book on the subject, Superintelligence, appeared last summer, some prominent technologists and scientists—including Elon Musk, Stephen Hawking, and Bill Gates—have echoed some of his concerns. Google is even assembling an ethics committee to oversee its artificial intelligence work.

Bostrom met last week with MIT Technology Review’s San Francisco bureau chief, Tom Simonite, to discuss his effort to get artificial intelligence researchers to consider the dangers of their work (see “Our Fear of Artificial Intelligence”).

How did you come to believe that artificial intelligence was a more pressing problem for the world than, say, nuclear holocaust or a major pandemic?

A lot of things could cause catastrophes, but relatively few could actually threaten the entire future of Earth-inhabiting intelligent life. I think artificial intelligence is one of the biggest, and it seems to be one where the efforts of a small number of people, or one extra unit of resources, might make a nontrivial difference. With nuclear war, a lot of big, powerful groups are already interested in that.

What about climate change, which is widely seen as the biggest threat facing humanity at the moment?

It’s a very, very small existential risk. For it to be one, our current models would have to be wrong—even the worst scenarios [only] mean the climate in some parts of the world would be a bit more unfavorable. Then we would have to be incapable of remediating that through some geoengineering, which also looks unlikely.

Certain ethical theories imply that existential risk is just way more important. All things considered, existential risk mitigation should be much bigger than it is today. The world spends way more on developing new forms of lipstick than on existential risk.

If someone came to you and said, “My company has developed technology that looks like it could make artificial intelligence much more powerful,” what would you advise?

The first suggestion would be to build up some competency in-house in AI safety research. One crucial thing is to establish a good working relationship between the people working on AI and the people who are thinking about safety issues. The only way there can be a good scenario is if these ideas are created and then implemented.

Google, Facebook, and other major tech companies seem to be leading progress in artificial intelligence. Are you already talking with them?

I don’t want to mention names. It’s the ones that have significant interest in this area. I think there’s recognition it makes sense to have some people thinking about [AI safety] now, and then maybe, as and when things move forward, to ramp that up. It’s good to have a seed in there, somebody who’s keeping an eye on these things.

What kind of research is possible on something so far from being real today?

What’s been produced to date is a clearer understanding of what the problem is and some concepts that can be used to think about these things. These may not look like much on paper, but before, it wasn’t possible to go to the next stage, which is developing a technical research agenda.

Can you give an example of a technical project that might be on that agenda?

For example, could you design an AI motivation system [so] that the AI doesn’t resist the programmer coming in to change its goal? There is a whole set of things that could be practically useful, like boxing methods—tools that can contain an AI before it is ready to be released.

But those are a long way from the kinds of systems that researchers at, say, Google are actually building today.

Yes. One of the challenges is to do useful work in this area far ahead of when you actually have systems to which such methods could or should be applied.

Couldn’t an artificial intelligence that is well short of human-level still cause problems? For example, if used by a government as a weapon, or by accident if let loose on financial markets?

There’s a whole class of more imminent and smaller problems that some people say are more real. There’s algorithmic trading, or drones, or automation and its impact on labor markets, or whether systems could discriminate wittingly or unwittingly on the basis of race. I don’t deny that those issues exist—I just think there is this additional issue that the world might not address because it only really becomes serious once AI reaches a certain very high level.

You are careful to say that no one can really predict how close that level is. But are there particular advances that would signal we’re getting somewhere?

Very likely a number of big breakthroughs would have to occur, to give more common-sense reasoning abilities, general learning abilities in different domains, [and] more flexible planning capabilities.

There have been at least two periods where hype bubbles within the field were then followed by periods of disillusionment. It might well be that we’re now in a third hype bubble and that we reach the limit of what can be done with these new techniques and it will take a long time before the next big frontier. But it could also be the wave that goes all the way there.

Do you remain optimistic about our chances if it does?

My long-term view is that it’s most likely we either end up in a bad or very good place, not somewhere that’s so-so. Hopefully, it will turn out to be great.
