AI Doomsayer Says His Ideas Are Catching On
Philosopher Nick Bostrom says major tech companies are listening to his warnings about investing in “AI safety” research.
There are big financial incentives to develop more powerful forms of artificial intelligence software.
Over the past year, Oxford University philosophy professor Nick Bostrom has gained visibility for warning about the potential risks posed by more advanced forms of artificial intelligence. He now says that his warnings are earning the attention of companies pushing the boundaries of artificial intelligence research.
Many people working on AI remain skeptical of or even hostile to Bostrom’s ideas. But since his book on the subject, Superintelligence, appeared last summer, some prominent technologists and scientists—including Elon Musk, Stephen Hawking, and Bill Gates—have echoed some of his concerns. Google is even assembling an ethics committee to oversee its artificial intelligence work.
Bostrom met last week with MIT Technology Review’s San Francisco bureau chief, Tom Simonite, to discuss his effort to get artificial intelligence researchers to consider the dangers of their work (see “Our Fear of Artificial Intelligence”).
How did you come to believe that artificial intelligence was a more pressing problem for the world than, say, nuclear holocaust or a major pandemic?
A lot of things could cause catastrophes, but relatively few could actually threaten the entire future of Earth-inhabiting intelligent life. I think artificial intelligence is one of the biggest, and it seems to be one where the efforts of a small number of people, or one extra unit of resources, might make a nontrivial difference. With nuclear war, a lot of big, powerful groups are already interested in that.
What about climate change, which is widely seen as the biggest threat facing humanity at the moment?
It’s a very, very small existential risk. For climate change to be one, our current models would have to be wrong—even the worst scenarios [only] mean the climate in some parts of the world would be a bit more unfavorable. Then we would also have to be incapable of remediating that through some geoengineering, which looks unlikely.
Certain ethical theories imply that existential risk is just way more important. All things considered, existential risk mitigation should be much bigger than it is today. The world spends way more on developing new forms of lipstick than on existential risk.
If someone came to you and said, “My company has developed technology that looks like it could make artificial intelligence much more powerful,” what would you advise?
The first suggestion would be to build up some competency in-house in AI safety research. One crucial thing is to establish a good working relationship between the people working on AI and the people thinking about safety issues. The only way to get a good scenario is if these safety ideas are both developed and then implemented.
Google, Facebook, and other major tech companies seem to be leading progress in artificial intelligence. Are you already talking with them?
I don’t want to mention names. It’s the ones that have significant interest in this area. I think there’s recognition it makes sense to have some people thinking about [AI safety] now, and then maybe, as and when things move forward, to ramp that up. It’s good to have a seed in there, somebody who’s keeping an eye on these things.
What kind of research is possible on something so far from being real today?
What’s been produced to date is a clearer understanding of what the problem is and some concepts that can be used to think about these things. These may not look like much on paper, but before, it wasn’t possible to go to the next stage, which is developing a technical research agenda.
Can you give an example of a technical project that might be on that agenda?
For example, could you design an AI motivation system [such] that the AI doesn’t resist the programmer coming in to change its goal? There is a whole set of things that could be practically useful, like boxing methods—tools that can contain an AI before it is ready to be released.
But those are a long way from the kinds of systems that researchers at, say, Google are actually building today.
Yes. One of the challenges is to do useful work in this area far ahead of having a system to which it could or should be applied.
Couldn’t an artificial intelligence that is well short of human-level still cause problems? For example, if used by a government as a weapon, or by accident if let loose on financial markets?
There’s a whole class of more imminent and smaller problems that some people say are more real. There’s algorithmic trading, or drones, or automation and its impact on labor markets, or whether systems could discriminate wittingly or unwittingly on the basis of race. I don’t deny that those issues exist—I just think there is this additional issue that the world might not address because it only really becomes serious once AI reaches a certain very high level.
You are careful to say that no one can really predict how close that level is. But are there particular advances that would signal we’re getting somewhere?
Very likely a number of big breakthroughs would have to occur, to give more common-sense reasoning abilities, general learning abilities in different domains, [and] more flexible planning capabilities.
There have been at least two periods where hype bubbles within the field were then followed by periods of disillusionment. It might well be that we’re now in a third hype bubble and that we reach the limit of what can be done with these new techniques and it will take a long time before the next big frontier. But it could also be the wave that goes all the way there.
Do you remain optimistic about our chances if it does?
My long-term view is that we most likely end up either in a bad place or in a very good one—not somewhere that’s so-so. Hopefully, it will turn out to be great.