A leading AI ethics researcher says she’s been fired from Google

Timnit Gebru says she’s facing retaliation for conducting research that was critical of Google and sending an email “inconsistent with the expectations of a Google manager.”
December 3, 2020
Timnit Gebru | Kimberly White / Stringer

On December 2, the AI research community was shocked to learn that Timnit Gebru had been fired from her post at Google. Gebru, one of the leading voices in responsible AI research, is known among other things for coauthoring groundbreaking work that revealed the discriminatory nature of facial recognition, cofounding the Black in AI affinity group, and relentlessly advocating for diversity in the tech industry.

But on Wednesday evening, she announced on Twitter that she had been terminated from her position as Google’s ethical AI co-lead. “Apparently my manager’s manager sent an email [to] my direct reports saying she accepted my resignation. I hadn’t resigned,” she said.

In an interview with Bloomberg on Thursday, Gebru said that the firing happened after a protracted fight with her superiors over the publication of an AI ethics research paper. One of Gebru’s tweets and a later internal email from Jeff Dean, head of Google AI, suggest that the paper was critical of the environmental costs and embedded biases of large language models.

Gebru, who had written the paper with four Google colleagues and two external collaborators, had submitted it to a research conference being held next year. After an internal review, she was asked to retract the paper or remove the names of the Google employees. She responded that she would do so if her superiors met a series of conditions. If they could not, she would “work on a last date,” she said.

She also sent a frustrated email to an internal listserv, Google Brain Women and Allies, detailing the repeated hardships she’d experienced as a Black female researcher. “We just had a Black research all hands with such an emotional show of exasperation,” she wrote. “Do you know what happened since? Silencing in the most fundamental way possible.”

Gebru then went on a vacation and received a termination email from Megan Kacholia, the VP of engineering at Google Research, before her return. “Thanks for making your conditions clear,” the email stated, as tweeted by Gebru. “We cannot agree to #1 and #2 as you are requesting. We respect your decision to leave Google as a result, and we are accepting your resignation.” Her email to the listserv was “inconsistent with the expectations of a Google manager,” it continued. “As a result, we are accepting your resignation immediately, effective today.”

On Thursday morning, after an outpouring of support for Gebru on social media, Dean sent an internal email to Google’s AI group with his account of the situation. He said that Gebru’s paper “didn’t meet our bar for publication” because “it ignored too much relevant research.” He also said that Gebru’s conditions included “revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback.”

“Given Timnit's role as a respected researcher and a manager in our Ethical AI team, I feel badly that Timnit has gotten to a place where she feels this way about the work we’re doing,” he wrote. “I know we all genuinely share Timnit’s passion to make AI more equitable and inclusive.”

Gebru, Dean, and Google’s communications team did not respond to requests for comment, and many details surrounding the exact progression of events and the cause of her termination remain unclear. As they continue to emerge, many observers have drawn renewed attention to a November 30 tweet that Gebru pinned to the top of her profile. “Is there anyone working on regulation protecting Ethical AI researchers, similar to whistleblower protection?” it reads. “Because with the amount of censorship & intimidation that goes on towards people in specific groups, how does anyone trust any real research in this area can take place?”
