
Harvard researchers want to school Congress about AI

A tech boot camp will teach US politicians and policymakers about the potential, and the risks, of artificial intelligence.
November 14, 2018
Photo of Harvard Square. Credit: Chensiyuan/Wikimedia Commons

When Facebook chief executive Mark Zuckerberg testified before the US Congress, technology experts quickly realized how poorly informed his questioners were. As much as Congress wanted to regulate the tech behemoth, it was clear its members had no idea how. Instead, they let Zuckerberg get away with grandiose claims about how AI would solve all the company’s problems.

For Dipayan Ghosh, a research fellow at the Harvard Kennedy School (HKS), the hearing emphasized the pressing need to bring US policymakers up to speed on major technology issues—and AI in particular.

“AI is a tremendous technology, but there are really salient problems that we’ve seen take a life of their own in society,” says Ghosh, who was a technology policy advisor in the Obama administration. “We need to inform people in positions of power about how these systems actually work, so the next time they launch a regulatory effort, they won’t be ill-informed.”

Ghosh is co-directing a new AI policy initiative, launched today, with Tom Wheeler, a senior research fellow at HKS and former chairman of the US Federal Communications Commission under Obama.

Funded by HKS’s Shorenstein Center on Media, Politics, and Public Policy, the initiative will focus on expanding the legal and academic scholarship around AI ethics and regulation. It will also host a boot camp for US Congress members to help them learn more about the technology. The hope is that these combined efforts will leave Congress and other policymakers better equipped to regulate AI effectively and to shepherd its growing impact on society.

Over the past year, a series of high-profile tech scandals have made the consequences of poorly implemented AI increasingly clear. These include the use of machine learning to spread disinformation through social media and the automation of biased and discriminatory practices through facial recognition and other automated systems.

In October, at the annual AI Now Symposium, technologists, human rights activists, and legal experts repeatedly emphasized the need for systems to hold AI accountable. 

“The government has the long view,” said Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense Fund. “They hold, in many ways, the responsibility of communicating history to the corporations and other companies that are developing these technologies.”

But the government lacks the knowledge to bear this responsibility, says Ghosh. “If you ask a member of Congress if AI is part of the disinformation problem, they might say ‘I don’t think so’ or ‘I don’t know,’” he says.

As part of the initiative, Ghosh and Wheeler asked roughly 30 leading experts in computer science, philosophy, economics, and other fields to weigh in on issues including discrimination, fairness, transparency, and accountability.

Those perspectives will be published over the coming months, beginning with three this week: Catherine Tucker, a professor at the MIT Sloan School of Management, on the economic context of algorithmic bias; M.C. Elish and Danah Boyd, research lead and founder of the Data & Society Research Institute, respectively, on the ethics of when and how to use AI systems without exacerbating existing injustice; and Joseph Turow, a professor at the Annenberg School for Communication, on the discriminatory consequences of hyper-personalized marketing.

The initiative will host a boot camp in Washington, DC, next February for members of Congress and their technology policy staff to help translate the articles into productive policy discussions. The camp will explore what it means to design AI ethically and what regulatory measures could be taken to mitigate its harm and foster its benefits.

Ghosh recognizes that the current political climate makes it difficult to align both parties toward any goal, but he is hopeful they will find common ground over the urgency of the issue.

