Policy

Congress wants answers from Google about Timnit Gebru’s firing

The letter, signed by nine members of Congress, sends an important signal about how regulators will scrutinize tech giants.
December 17, 2020
Representative Yvette Clarke (AP Photo/Kathy Willens)

Nine members of the US Congress have sent a letter to Google asking it to clarify the circumstances around its former ethical AI co-lead Timnit Gebru’s forced departure. Led by Representative Yvette Clarke and Senator Ron Wyden, and co-signed by Senators Elizabeth Warren and Cory Booker, the letter sends an important signal about how Congress is scrutinizing tech giants and thinking about forthcoming regulation.

Gebru, a leading voice in AI ethics and one of a small handful of Black women at Google, was unceremoniously dismissed two weeks ago, after a protracted disagreement over a research paper. The paper detailed the risks of large AI language models trained on enormous amounts of text data, which are a core line of Google’s research, powering various products including its lucrative Google Search. 

Citing MIT Technology Review’s coverage, the letter raises three issues: the potential for bias in large language models, the growing corporate influence over AI research, and Google’s lack of diversity. It asks Google CEO Sundar Pichai for a concrete plan on how it will address each of these, as well as for its current policy on reviewing research and details on its ongoing investigation into Gebru’s exit (Pichai committed to this investigation in an internal memo, first published by Axios). “As Members of Congress actively seeking to enhance AI research, accountability, and diversity through legislation and oversight, we respectfully request your response to the following inquiries,” the letter states.

In April 2019, Clarke and Wyden introduced a bill, the Algorithmic Accountability Act, that would require big companies to audit their machine-learning systems for bias and take corrective action in a timely manner if such issues were identified. It would also require those companies to audit all processes involving sensitive data—including personally identifiable, biometric, and genetic information—for privacy and security risks. At the time, many legal and technology experts praised the bill for its nuanced understanding of AI and data-driven technologies. “Great first step,” wrote Andrew Selbst, an assistant professor at the University of California, Los Angeles School of Law, on Twitter. “Would require documentation, assessment, and attempts to address foreseen impacts. That’s new, exciting & incredibly necessary.”

The latest letter doesn’t tie directly to the Algorithmic Accountability Act, but it is part of the same move by certain congressional members to craft legislation that would mitigate AI bias and the other harms of data-driven, automated systems. Notably, it comes amid mounting pressure for antitrust regulation. Earlier this month, the US Federal Trade Commission filed an antitrust lawsuit against Facebook for its “anticompetitive conduct and unfair methods of competition.” Over the summer, House Democrats published a 449-page report on Big Tech’s monopolistic practices.

The letter also comes in the context of rising geopolitical tensions. As US-China relations have reached an all-time low during the pandemic, US officials have underscored the strategic importance of emerging technologies like AI and 5G. The letter raises this dimension as well, acknowledging Google’s role in maintaining US leadership in AI. But it makes clear that such competitive concerns should not be used to forestall regulation, a line of argument popularized by Facebook CEO Mark Zuckerberg. “To ensure America wins the AI race,” the letter says, “American technology companies must not only lead the world in innovation; they must also ensure such innovation reflects our nation’s values.”

“Our letter should put everyone in the technology sector, not just Google, on notice that we are paying attention,” said Clarke in a statement to MIT Technology Review. “Ethical AI is the battleground for the future of civil rights. Our concerns about recent developments aren’t just about one person; they are about what the 21st century will look like if academic freedom and inclusion take a back seat to other priorities. We can’t mitigate algorithmic bias if we impede those who seek to research and study it.”

