Policy

Establishing an AI code of ethics will be harder than people think

Ethics are too subjective to guide the use of AI, argue some legal scholars.
October 21, 2018

Over the past six years, the New York City police department has compiled a massive database containing the names and personal details of at least 17,500 individuals it believes to be involved in criminal gangs. The effort has already been criticized by civil rights activists who say it is inaccurate and racially discriminatory.

"Now imagine marrying facial recognition technology to the development of a database that theoretically presumes you’re in a gang," Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense fund, said at the AI Now Symposium in New York last Tuesday.

Lawyers, activists, and researchers emphasize the need for ethics and accountability in the design and implementation of AI systems. But those calls often gloss over a couple of tricky questions: who gets to define those ethics, and who should enforce them?

Sherrilyn Ifill (NAACP Legal Defense Fund), Timnit Gebru (Google), and Nicole Ozer (ACLU) in conversation at the AI Now 2018 Symposium.
Andrew Federman for AI Now Institute

Facial recognition is not only imperfect; studies have shown that the leading software is less accurate for dark-skinned individuals and for women. By Ifill’s estimation, the police database is between 95 and 99 percent African American, Latino, and Asian American. "We are talking about creating a class of […] people who are branded with a kind of criminal tag," Ifill said.

Meanwhile, police departments across the US, the UK, and China have begun adopting facial recognition as a tool for finding known criminals. In June, the South Wales Police released a statement justifying their use of the technology by citing the "public benefit" it provides.

Indeed, technology often highlights people’s differing ethical standards—whether it is censoring hate speech or using risk assessment tools to improve public safety.

In an attempt to highlight how divergent people’s principles can be, researchers at MIT created a platform called the Moral Machine to crowdsource human opinion on the moral decisions self-driving cars should make. They asked millions of people from around the world to weigh in on variations of the classic "trolley problem" by choosing whom a car should prioritize in an unavoidable accident. The results show huge variation across different cultures.

Establishing ethical standards also doesn’t necessarily change behavior. In June, for example, after Google agreed to discontinue its work on Project Maven with the Pentagon, it established a fresh set of ethical principles to guide its involvement in future AI projects. Only months later, many Google employees feel those principles have fallen by the wayside amid the company’s bid for a $10 billion Department of Defense contract. A recent study out of North Carolina State University also found that asking software engineers to read a code of ethics does nothing to change their behavior.

Philip Alston, an international legal scholar at NYU’s School of Law, proposes a solution to the ambiguous and unaccountable nature of ethics: reframing AI-driven consequences in terms of human rights. "[Human rights are] in the Constitution," Alston said at the same conference. "They’re in the Bill of Rights; they’ve been interpreted by courts." If an AI system takes away people’s basic rights, he said, it should not be acceptable.

Philip Alston (NYU School of Law), Virginia Eubanks (University at Albany, SUNY), and Vincent Southerland (Center on Race, Inequality, and the Law at NYU) on stage at the symposium.
Andrew Federman for AI Now Institute

Alston isn’t the only one who has come up with this solution. Less than a week before the symposium, the Data & Society Research Institute published a proposal for using international human rights to govern AI. The report includes recommendations for tech companies to engage with civil rights groups and researchers, and to conduct human rights impact assessments throughout the life cycles of their AI systems.

"Until we start bringing [human rights] into the AI discussion," added Alston, "there’s no hard anchor."
