
No, the Experts Don’t Think Superintelligent AI is a Threat to Humanity

Ask the people who should really know.
September 20, 2016

If you believe everything you read, you are probably quite worried about the prospect of a superintelligent, killer AI. The Guardian, a British newspaper, warned recently that “we’re like children playing with a bomb,” and a recent Newsweek headline reads, “Artificial Intelligence Is Coming, and It Could Wipe Us Out.”

Numerous such headlines, fueled by comments from the likes of Elon Musk and Stephen Hawking, are strongly influenced by the work of one man: professor Nick Bostrom, author of the philosophical treatise Superintelligence: Paths, Dangers, Strategies.

Bostrom is an Oxford philosopher, but quantitative assessment of risks is the province of actuarial science. He may be dubbed the world’s first prominent “actuarial philosopher,” though the term seems an oxymoron given that philosophy is an arena for conceptual arguments, and risk assessment is a data-driven statistical exercise.

So what do the data say? Bostrom aggregates the results of four different surveys of groups such as participants in a conference called “Philosophy and Theory of AI,” held in 2011 in Thessaloniki, Greece, and members of the Greek Association for Artificial Intelligence (he does not provide response rates or the phrasing of questions, and he does not account for the reliance on data collected in Greece).

His findings are presented as probabilities that human-level AI will be attained by a certain time:

By 2022: 10 percent.

By 2040: 50 percent.

By 2075: 90 percent.

This aggregate of four surveys is the main source of data on the advent of human-level intelligence in over 300 pages of philosophical arguments, fables, and metaphors. 

To get a more accurate assessment of the opinion of leading researchers in the field, I turned to the Fellows of the Association for the Advancement of Artificial Intelligence (AAAI), a group of researchers who are recognized as having made significant, sustained contributions to the field.

In early March 2016, AAAI sent out an anonymous survey on my behalf, posing the following question to 193 fellows:

“In his book, Nick Bostrom has defined Superintelligence as ‘an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.’ When do you think we will achieve Superintelligence?”

Over the next week or so, 80 fellows responded, a 41 percent response rate.

In essence, according to 92.5 percent of the respondents, superintelligence is beyond the foreseeable horizon. This interpretation is also supported by written comments shared by the fellows.

Even though the survey was anonymous, 44 fellows chose to identify themselves, including Geoff Hinton (deep-learning luminary), Ed Feigenbaum (Stanford, Turing Award winner), Rodney Brooks (leading roboticist), and Peter Norvig (Google).

The respondents also shared several comments, including the following:

“Way, way, way more than 25 years. Centuries most likely. But not never.”

“We’re competing with millions of years’ evolution of the human brain. We can write single-purpose programs that can compete with humans, and sometimes excel, but the world is not neatly compartmentalized into single-problem questions.”

“Nick Bostrom is a professional scare monger. His Institute’s role is to find existential threats to humanity. He sees them everywhere. I am tempted to refer to him as the ‘Donald Trump’ of AI.”

Surveys do, of course, have limited scientific value. They are notoriously sensitive to question phrasing, the selection of respondents, and so on. Still, surveys are the one source of data that Bostrom himself turned to.

Another methodology would be to extrapolate from the current state of AI to the future. However, this is difficult because we have no quantitative measure of how far today’s AI is from human-level intelligence. We have achieved superhuman performance in board games like chess and Go (see “Google’s AI Masters Go a Decade Earlier than Expected”), and yet our programs fail to score above 60 percent on eighth-grade science tests, as the Allen Institute’s research has shown (see “The Best AI Program Still Flunks an Eighth Grade Science Test”), or above 48 percent at disambiguating simple sentences (see “Tougher Turing Test Exposes Chatbots’ Stupidity”).

There are many valid concerns about AI, from its impact on jobs to its uses in autonomous weapons systems and even to the potential risk of superintelligence. However, predictions that superintelligence is on the foreseeable horizon are not supported by the available data. Moreover, doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more.

Finally, it’s possible that AI systems could collaborate with people to create a symbiotic superintelligence. That would be very different from the pernicious and autonomous kind envisioned by Professor Bostrom.

Oren Etzioni is CEO of the Allen Institute for Artificial Intelligence and Professor of Computer Science at the University of Washington.


Updated November 2, 2016:

I’m delighted that Professors Dafoe and Russell, who responded to my article here, and I seem to be in agreement on three critical matters. One, we should refrain from ad hominem attacks. Here, I have to offer an apology: I should not have quoted the anonymous AAAI Fellow who likened Dr. Bostrom to Donald Trump. I didn’t mean to lend my voice to that comparison; I sincerely apologized to Bostrom for this misstep via e-mail, an apology that he graciously accepted. Two, as scientists, we need to assess statements about the risk of AI based on data. That was the key point of my brief article, and the article offered unique data on this topic. Three, we also concur that the media has misrepresented the implications of Bostrom’s work, a concern that was allayed to some extent by the White House report on AI. Of course, we do differ on many of the details and intricacies of the arguments, but time will tell.
