In 2018, several high-profile controversies involving AI served as a wake-up call for technologists, policymakers, and the public. The technology may have brought us welcome advances in many fields, but it can also fail catastrophically when built shoddily or applied carelessly.
It’s hardly a surprise, then, that Americans have mixed support for the continued development of AI and overwhelmingly agree that it should be regulated, according to a new study from the Center for the Governance of AI and Oxford University’s Future of Humanity Institute.
These are important lessons for policymakers and technologists to consider in the discussion on how best to advance and regulate AI, says Allan Dafoe, director of the center and coauthor of the report. “There isn’t currently a consensus in favor of developing advanced AI, or that it’s going to be good for humanity,” he says. “That kind of perception could lead to the development of AI being perceived as illegitimate or cause political backlashes against the development of AI.”
It’s clear that decision makers in the US and around the world need to have a better understanding of the public’s concerns—and how they should be addressed. Here are some of the key takeaways from the report.
Americans aren’t sure AI is a good thing
While more Americans support than oppose AI development, there isn’t a strong consensus either way.
Respondents were also more likely to believe that high-level machine intelligence would do more harm than good for humanity.
When asked to rank their specific concerns, they listed a weakening of data privacy and the increased sophistication of cyber-attacks at the top—both as issues of high importance and as those highly likely to affect many Americans within the next 10 years. Autonomous weapons closely followed in importance but were ranked with a lower likelihood of wide-scale impact.
Americans want better AI governance
More than 8 in 10 Americans believe that AI and robotics should be managed carefully.
That is easier said than done, because they also don’t trust any one entity to pick up that mantle. Of the options presented, including federal and international agencies, companies, nonprofits, and universities, none received more than 50% of respondents’ trust to develop and manage AI responsibly. The US military and university researchers did, however, receive the most trust for developing the technology, while tech companies and nonprofits received more trust than government actors for regulating it.
“I believe AI could be a tremendous benefit,” Dafoe says. But the report shows a major obstacle to getting there: “You have to make sure that you have a broad legitimate consensus around what society is going to undertake.”
Correction: An earlier version of this story had a typo in the first chart, showing that 31% of Americans "somewhat oppose" AI development. It has been corrected to 13%.