
AI researchers say scientific publishers help perpetuate racist algorithms

June 23, 2020
Image: a security CCTV camera, with police officers blurred in the background. pixinoo / Getty

The news: An open letter from a growing coalition of AI researchers is calling out scientific publisher Springer Nature over a conference paper it reportedly planned to include in its forthcoming book Transactions on Computational Science & Computational Intelligence. The paper, titled “A Deep Neural Network Model to Predict Criminality Using Image Processing,” presents a face recognition system purportedly capable of predicting whether someone is a criminal, according to the original press release. It was developed by researchers at Harrisburg University and was due to be presented at an upcoming conference.

The demands: Citing the work of leading Black AI scholars, the letter debunks the scientific basis of the paper and asserts that crime-prediction technologies are racist. It also lists three demands: 1) for Springer Nature to rescind its offer to publish the study; 2) for it to issue a statement condemning the use of statistical techniques such as machine learning to predict criminality and acknowledging its role in incentivizing such research; and 3) for all scientific publishers to commit to not publishing similar papers in the future. The letter, which was sent to Springer Nature on Monday, was originally written by five researchers at MIT, Rensselaer Polytechnic Institute, McGill University, and the AI Now Institute. Within days it gained more than 600 signatures across the AI ethics and academic communities, including from leading figures like Meredith Whittaker, cofounder of the AI Now Institute, and Ethan Zuckerman, former director of the Center for Civic Media at the MIT Media Lab.

Why it matters: While the letter highlights a specific paper, the authors’ goal is to demonstrate a systemic issue with the way scientific publishing incentivizes researchers to perpetuate unethical norms. “This is why we keep seeing race science emerging time and again,” said Chelsea Barabas, a PhD student at MIT and one of the letter’s coauthors. “It’s because publishers publish it.” “The real significance of this Springer piece is that it’s not unique whatsoever,” echoed Theodora Dryer, a postdoctoral researcher at AI Now and another coauthor. “It’s emblematic of a problem and a critique that has gone on for so, so long.”

Springer’s response: In response to the letter, Springer said that it would not be publishing the paper. “The paper you are referring to was submitted to a forthcoming conference for which Springer had planned to publish the proceedings,” it said. “After a thorough peer review process the paper was rejected.” Harrisburg University also took down its press release, stating that “the faculty are updating the paper to address concerns raised.” Harrisburg University and a coauthor of the paper declined a request for comment as well as a request for a copy of the original paper. The letter’s signatories said they will continue to push for the fulfillment of their second and third demands.

The bigger picture: Since George Floyd’s death sparked an international movement for racial justice, the AI field and the tech industry at large have faced a reckoning about the role they have played in reinforcing structural racism. During the week of June 8, for example, IBM, Microsoft, and Amazon all announced the end or partial suspension of their face recognition products. The moves were the culmination of two years of advocacy by researchers and activists demonstrating the link between these technologies and the overpolicing of minority communities. The open letter is the latest development in this movement toward greater ethical accountability in AI.

“We really wanted to contribute to this growing movement,” said Sonja Solomun, the research director of the Centre for Media, Technology, and Democracy at McGill University. “Particularly when we look outside our windows and see what’s going on right now in the US and globally, the stakes are just so high.”

Update: After publication, Springer Nature issued a statement clarifying that “at no time was [the paper] accepted for publication... The series editor’s decision to reject the final paper was made on Tuesday 16th June and was officially communicated to the authors on Monday 22nd June.”
