
Photo of Harvard Square (Chensiyuan/Wikimedia Commons)

  • Intelligent Machines

    Harvard researchers want to school Congress about AI

    A tech boot camp will teach US politicians and policymakers about the potential, and the risks, of artificial intelligence.

    When Facebook chief executive Mark Zuckerberg testified in front of the US Congress, technology experts quickly realized how poorly informed his questioners were. As much as Congress wanted to regulate the tech behemoth, it was clear its members had no idea how. Instead, they let him get away with grandiose claims about how AI would solve all the company’s problems.

    For Dipayan Ghosh, a research fellow at the Harvard Kennedy School (HKS), the hearing emphasized the pressing need to bring US policymakers up to speed on major technology issues—and AI in particular.

    “AI is a tremendous technology, but there are really salient problems that we’ve seen take on a life of their own in society,” says Ghosh, who was a technology policy advisor in the Obama administration. “We need to inform people in positions of power about how these systems actually work, so the next time they launch a regulatory effort, they won’t be ill-informed.”

    Ghosh is co-directing a new AI policy initiative, launched today, with Tom Wheeler, a senior research fellow at HKS and the chairman of the US Federal Communications Commission under Obama.

    Funded by HKS’s Shorenstein Center on Media, Politics, and Public Policy, the initiative will focus on expanding the legal and academic scholarship around AI ethics and regulation. It will also host a boot camp for US Congress members to help them learn more about the technology. The hope is that with these combined efforts, Congress and other policymakers will be better equipped to effectively regulate and shepherd the growing impact of AI on society.

    Over the past year, a series of high-profile tech scandals has made the consequences of poorly implemented AI increasingly clear. They include the use of machine learning to spread disinformation through social media and the automation of biased and discriminatory practices through facial recognition and other automated systems.

    In October, at the annual AI Now Symposium, technologists, human rights activists, and legal experts repeatedly emphasized the need for systems to hold AI accountable. 

    “The government has the long view,” said Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense Fund. “They hold, in many ways, the responsibility of communicating history to the corporations and other companies that are developing these technologies.”

    But the government lacks the knowledge to bear this responsibility, says Ghosh. “If you ask a member of Congress if AI is part of the disinformation problem, they might say ‘I don’t think so’ or ‘I don’t know,’” he says.

    As part of the initiative, Ghosh and Wheeler asked roughly 30 leading experts in computer science, philosophy, economics, and other fields to weigh in on issues including discrimination, fairness, transparency, and accountability.

    Those perspectives will be published over the coming months, beginning with three this week: Catherine Tucker, a professor at the MIT Sloan School of Management, on the economic context of algorithmic bias; M.C. Elish and Danah Boyd, research lead and founder of the Data & Society Research Institute, respectively, on the ethics of when and how to use AI systems without exacerbating existing injustice; and Joseph Turow, a professor at the Annenberg School for Communication, on the discriminatory consequences of hyper-personalized marketing.

    The initiative will host a boot camp in Washington, DC, next February for members of Congress and their technology policy staff to help translate the articles into productive policy discussions. The camp will explore what it means to design AI ethically and what regulatory measures could be taken to mitigate its harm and foster its benefits.

    Ghosh recognizes that the current political climate makes it difficult to align both parties toward any goal, but he is hopeful they will find common ground over the urgency of the issue.
