A New AI Ethics Center Shows Growing Angst About Machine Smarts

The pace of progress in artificial intelligence has unsettled many experts, but the biggest risk of all may be inscrutability.
November 3, 2016

Let's face it: the pace of progress in artificial intelligence can sometimes seem unsettling. Terminator-style machines remain science fiction, but AI could have a huge impact on employment, introduce bias into algorithms, and contribute to the development of autonomous weapons. Perhaps the biggest looming threat, though, is our inability to understand how these increasingly complex systems work when they go awry.

The latest evidence that even the experts are concerned about this is the creation of a new AI ethics research center at Carnegie Mellon University. The new center, called the K&L Gates Endowment for Ethics and Computational Technologies, is funded with $10 million from K&L Gates, an international law firm based in Pittsburgh.

Anxiety over machine intelligence has been gaining momentum. Last month the White House released a report assessing the potential effects of AI. And several of the world's largest tech companies recently joined forces to create an organization, called Partnership on AI, to study the technology and its potential impacts.

In a statement, CMU's president, Subra Suresh, said it will be important to consider the human side of all AI systems. "It is not just technology that will determine how this century unfolds," he said. "Our future will also be influenced strongly by how humans interact with technology, how we foresee and respond to the unintended consequences of our work, and how we ensure that technology is used to benefit humanity, individually and as a society."

CMU itself is experiencing some teething pains due to advances in AI. Last year Uber hired away many of its robotics researchers to staff a nearby research center dedicated to automated driving. At the same time, the university is spinning out AI-powered startups and consulting with big companies on various AI projects.

Besides unemployment, algorithmic bias, and autonomous weapons, one of the most significant—and least appreciated—consequences of AI could be the way we come to rely on systems that are inscrutable because their behavior was never explicitly programmed. This issue is already surfacing in some situations, and some experts are trying to devise machine-learning systems that can explain their own workings.

(Read more: Carnegie Mellon University, Nature, MIT News, "Obama: My Successor Will Govern a Country Being Transformed by AI," "Tech Titans Join Forces to Stop AI from Behaving Badly")
