When Facebook chief executive Mark Zuckerberg testified in front of the US Congress, technology experts quickly realized how poorly informed his questioners were. As much as Congress wanted to regulate the tech behemoth, it was clear its members had no idea how. Instead, they let him get away with grandiose claims about how AI would solve all the company’s problems.
For Dipayan Ghosh, a research fellow at the Harvard Kennedy School (HKS), the hearing emphasized the pressing need to bring US policymakers up to speed on major technology issues—and AI in particular.
“AI is a tremendous technology, but there are really salient problems that we’ve seen take on a life of their own in society,” says Ghosh, who was a technology policy advisor in the Obama administration. “We need to inform people in positions of power about how these systems actually work, so the next time they launch a regulatory effort, they won’t be ill-informed.”
Ghosh is co-directing a new AI policy initiative, launched today, with Tom Wheeler, a senior research fellow at HKS and former chairman of the US Federal Communications Commission under Obama.
Funded by HKS’s Shorenstein Center on Media, Politics, and Public Policy, the initiative will focus on expanding the legal and academic scholarship around AI ethics and regulation. It will also host a boot camp for US Congress members to help them learn more about the technology. The hope is that with these combined efforts, Congress and other policymakers will be better equipped to effectively regulate and shepherd the growing impact of AI on society.
Over the past year, a series of high-profile tech scandals have made increasingly clear the consequences of poorly implemented AI. This includes the use of machine learning to spread disinformation through social media and the automation of biased and discriminatory practices through facial recognition and other automated systems.
In October, at the annual AI Now Symposium, technologists, human rights activists, and legal experts repeatedly emphasized the need for systems to hold AI accountable.
“The government has the long view,” said Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense Fund. “They hold, in many ways, the responsibility of communicating history to the corporations and other companies that are developing these technologies.”
But the government lacks the knowledge to bear this responsibility, says Ghosh. “If you ask a member of Congress if AI is part of the disinformation problem, they might say ‘I don’t think so’ or ‘I don’t know,’” he says.
As part of the initiative, Ghosh and Wheeler asked roughly 30 leading experts in computer science, philosophy, economics, and other fields to weigh in on issues including discrimination, fairness, transparency, and accountability.
Those perspectives will be published in the coming months, beginning with three this week: Catherine Tucker, a professor at the MIT Sloan School of Management, on the economic context of algorithmic bias; M.C. Elish and Danah Boyd, research lead and founder of the Data & Society Research Institute, respectively, on the ethics of when and how to use AI systems without exacerbating existing injustice; and Joseph Turow, a professor at the Annenberg School for Communication, on the discriminatory consequences of hyper-personalized marketing.
The initiative will host a boot camp in Washington, DC, next February for members of Congress and their technology policy staff to help translate the articles into productive policy discussions. The camp will explore what it means to design AI ethically and what regulatory measures could be taken to mitigate its harm and foster its benefits.
Ghosh recognizes that the current political climate makes it difficult for the two parties to align on almost any goal, but he is hopeful they will find common ground in the urgency of the issue.