Artificial intelligence

Harvard researchers want to school Congress about AI

A tech boot camp will teach US politicians and policymakers about the potential, and the risks, of artificial intelligence.
November 14, 2018
Photo of Harvard Square. Chensiyuan / Wikimedia Commons

When Facebook chief executive Mark Zuckerberg testified before the US Congress, technology experts quickly realized how poorly informed his questioners were. As much as Congress wanted to regulate the tech behemoth, it was clear its members had no idea how. Instead, they let Zuckerberg get away with grandiose claims about how AI would solve all the company’s problems.

For Dipayan Ghosh, a research fellow at the Harvard Kennedy School (HKS), the hearing emphasized the pressing need to bring US policymakers up to speed on major technology issues—and AI in particular.

“AI is a tremendous technology, but there are really salient problems that we’ve seen take on a life of their own in society,” says Ghosh, who was a technology policy advisor in the Obama administration. “We need to inform people in positions of power about how these systems actually work, so the next time they launch a regulatory effort, they won’t be ill-informed.”

Ghosh is co-directing a new AI policy initiative, launched today, with Tom Wheeler, a senior research fellow at HKS and former chairman of the US Federal Communications Commission under Obama.

Funded by HKS’s Shorenstein Center on Media, Politics, and Public Policy, the initiative will focus on expanding the legal and academic scholarship around AI ethics and regulation. It will also host a boot camp for US Congress members to help them learn more about the technology. The hope is that, with these combined efforts, Congress and other policymakers will be better equipped to regulate AI effectively and shepherd its growing impact on society.

Over the past year, a series of high-profile tech scandals has made the consequences of poorly implemented AI increasingly clear. These include the use of machine learning to spread disinformation through social media and the automation of biased and discriminatory practices through facial recognition and other automated systems.

In October, at the annual AI Now Symposium, technologists, human rights activists, and legal experts repeatedly emphasized the need for systems to hold AI accountable. 

“The government has the long view,” said Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense Fund. “They hold, in many ways, the responsibility of communicating history to the corporations and other companies that are developing these technologies.”

But the government lacks the knowledge to bear this responsibility, says Ghosh. “If you ask a member of Congress if AI is part of the disinformation problem, they might say ‘I don’t think so’ or ‘I don’t know,’” he says.

As part of the initiative, Ghosh and Wheeler asked roughly 30 leading experts in computer science, philosophy, economics, and other fields to weigh in on issues including discrimination, fairness, transparency, and accountability.

Those perspectives will be published over the coming months, beginning with three this week: Catherine Tucker, a professor at the MIT Sloan School of Management, on the economic context of algorithmic bias; M.C. Elish and Danah Boyd, research lead and founder of the Data & Society Research Institute, respectively, on the ethics of when and how to use AI systems without exacerbating existing injustice; and Joseph Turow, a professor at the Annenberg School for Communication, on the discriminatory consequences of hyper-personalized marketing.

The initiative will host a boot camp in Washington, DC, next February for members of Congress and their technology policy staff to help translate the articles into productive policy discussions. The camp will explore what it means to design AI ethically and what regulatory measures could be taken to mitigate its harm and foster its benefits.

Ghosh recognizes that the current political climate makes it difficult to unite the two parties behind any goal, but he is hopeful they will find common ground on the urgency of the issue.
