AI researchers say scientific publishers help perpetuate racist algorithms
The news: An open letter from a growing coalition of AI researchers is calling out scientific publisher Springer Nature for a conference paper it reportedly planned to include in its forthcoming book Transactions on Computational Science & Computational Intelligence. The paper, titled “A Deep Neural Network Model to Predict Criminality Using Image Processing,” presents a face recognition system purportedly capable of predicting whether someone is a criminal, according to the original press release. It was developed by researchers at Harrisburg University and was due to be presented at an upcoming conference.
The demands: Citing the work of leading Black AI scholars, the letter debunks the scientific basis of the paper and asserts that crime-prediction technologies are racist. It also lists three demands: 1) for Springer Nature to rescind its offer to publish the study; 2) for it to issue a statement condemning the use of statistical techniques such as machine learning to predict criminality and acknowledging its role in incentivizing such research; and 3) for all scientific publishers to commit to not publishing similar papers in the future. The letter, which was sent to Springer Nature on Monday, was originally written by five researchers at MIT, Rensselaer Polytechnic Institute, McGill University, and the AI Now Institute. In a matter of days, it gained more than 600 signatures and counting across the AI ethics and academic communities, including from leading figures like Meredith Whittaker, cofounder of the AI Now Institute, and Ethan Zuckerman, former director of the Center for Civic Media at the MIT Media Lab.
Why it matters: While the letter highlights a specific paper, the authors’ goal is to demonstrate a systematic issue with the way scientific publishing incentivizes researchers to perpetuate unethical norms. “This is why we keep seeing race science emerging time and again,” said Chelsea Barabas, a PhD student at MIT and one of the letter’s coauthors. “It’s because publishers publish it.” “The real significance of this Springer piece is that it’s not unique whatsoever,” echoed Theodora Dryer, a postdoctoral researcher at AI Now and another coauthor. “It’s emblematic of a problem and a critique that has gone on for so, so long.”
Springer’s response: In response to the letter, Springer said that it would not be publishing the paper. “The paper you are referring to was submitted to a forthcoming conference for which Springer had planned to publish the proceedings,” it said. “After a thorough peer review process the paper was rejected.” Harrisburg University also took down its press release, stating that “the faculty are updating the paper to address concerns raised.” Harrisburg University and a coauthor of the paper declined a request for comment, as well as a request for a copy of the original paper. The letter’s signatories said they will continue to push for fulfillment of their second and third demands.
The bigger picture: Since George Floyd’s death sparked an international movement for racial justice, the AI field and the tech industry at large have faced a reckoning about the role they have played in reinforcing structural racism. During the week of June 8, for example, IBM, Microsoft, and Amazon all announced the end or partial suspension of their face recognition products. The move was a culmination of two years of advocacy from researchers and activists to demonstrate a link between these technologies and the overpolicing of minority communities. The open letter is the latest development in this movement toward greater ethical accountability in AI.
“We really wanted to contribute to this growing movement,” said Sonja Solomun, the research director of the Centre for Media, Technology, and Democracy at McGill University. “Particularly when we look outside our windows and see what’s going on right now in the US and globally, the stakes are just so high.”
Update: After publication, Springer Nature issued a statement clarifying that “at no time was [the paper] accepted for publication … The series editor’s decision to reject the final paper was made on Tuesday 16th June and was officially communicated to the authors on Monday 22nd June.”