
    A timeline of tech events that raise ethical questions, presented at the opening of the AI Now 2018 Symposium.
    Varoon Mathur/AI Now Institute

    Intelligent Machines

    Establishing an AI code of ethics will be harder than people think

    Ethics are too subjective to guide the use of AI, argue some legal scholars.

    Over the past six years, the New York City Police Department has compiled a massive database containing the names and personal details of at least 17,500 individuals it believes to be involved in criminal gangs. The effort has already been criticized by civil rights activists, who say it is inaccurate and racially discriminatory.

    "Now imagine marrying facial recognition technology to the development of a database that theoretically presumes you’re in a gang," Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense Fund, said at the AI Now Symposium in New York last Tuesday.

    Lawyers, activists, and researchers emphasize the need for ethics and accountability in the design and implementation of AI systems. But these calls often skirt a couple of tricky questions: who gets to define those ethics, and who should enforce them?

    Sherrilyn Ifill (NAACP Legal Defense Fund), Timnit Gebru (Google), and Nicole Ozer (ACLU) in conversation at the AI Now 2018 Symposium.
    Andrew Federman for AI Now Institute

    Not only is facial recognition imperfect; studies have shown that the leading software is less accurate for dark-skinned individuals and for women. By Ifill’s estimation, the police database is between 95 and 99 percent African American, Latino, and Asian American. "We are talking about creating a class of […] people who are branded with a kind of criminal tag," Ifill said.

    Meanwhile, police departments across the US, the UK, and China have begun adopting facial recognition as a tool for finding known criminals. In June, the South Wales Police released a statement justifying their use of the technology because of the "public benefit" it provides.

    Indeed, technology often highlights people’s differing ethical standards—whether it is censoring hate speech or using risk assessment tools to improve public safety.

    The AI Now 2018 Symposium
    AI Now Institute

    In an attempt to highlight how divergent people’s principles can be, researchers at MIT created a platform called the Moral Machine to crowd-source human opinion on the moral decisions that should be followed by self-driving cars. They asked millions of people from around the world to weigh in on variations of the classic "trolley problem" by choosing who a car should try to prioritize in an accident. The results show huge variation across different cultures.

    Establishing ethical standards also doesn’t necessarily change behavior. In June, for example, after Google agreed to discontinue its work on Project Maven with the Pentagon, it established a fresh set of ethical principles to guide its involvement in future AI projects. Only months later, many Google employees feel those principles have fallen by the wayside in the company’s bid for a $10 billion Department of Defense contract. A recent study out of North Carolina State University also found that asking software engineers to read a code of ethics does nothing to change their behavior.

    Philip Alston, an international legal scholar at NYU’s School of Law, proposes a solution to the ambiguous and unaccountable nature of ethics: reframing AI-driven consequences in terms of human rights. "[Human rights are] in the Constitution," Alston said at the same conference. "They’re in the Bill of Rights; they’ve been interpreted by courts." If an AI system takes away people’s basic rights, he argued, then it should not be acceptable.

    Philip Alston (NYU School of Law), Virginia Eubanks (University at Albany, SUNY), and Vincent Southerland (Center on Race, Inequality, and the Law at NYU) on stage at the symposium.
    Andrew Federman for AI Now Institute

    Alston isn’t the only one who has come up with this solution. Less than a week before the Symposium, the Data & Society Research Institute published a proposal for using international human rights to govern AI. The report includes recommendations for tech companies to engage with civil rights groups and researchers, and to conduct human rights impact assessments on the life cycles of their AI systems.

    "Until we start bringing [human rights] into the AI discussion," added Alston, "there’s no hard anchor."
