Artificial intelligence

Female black journalists and politicians get sent an abusive tweet every 30 seconds

Machine learning reveals a disturbing level of harassment, abuse, and trolling aimed at women and minorities on Twitter.
December 18, 2018

Twitter can be a toxic place. In recent years, trolling and harassment on the site have made it an extremely unpleasant and upsetting experience for many people, particularly women and minorities. But automatically identifying and stopping such abuse is difficult to do accurately and reliably. This is because, for all the recent progress in AI, machines generally still struggle to respond meaningfully to human communication. For example, AI usually finds it hard to pick up on abusive messages that may be sarcastic or disguised with a sprinkling of positive keywords.

A new study has used cutting-edge machine learning to get a more accurate snapshot of the scale of harassment on Twitter. Its analysis confirms what many people will already suspect: female and minority journalists and politicians face a shocking amount of abuse on the platform. 

The study, carried out by Amnesty International in collaboration with the Canadian firm Element AI, shows that black women politicians and journalists are 84% more likely than white women in the same professions to be mentioned in abusive or “problematic” tweets.

“It’s just maddening,” says Julien Cornebise, director of research at Element AI’s London office, which focuses on humanitarian applications of machine learning. “These women are a big part of how society works.”

To identify abusive tweets, Element AI researchers first used a machine-learning tool similar to those used to classify spam. They then gave volunteers a mix of pre-classified and previously unseen tweets to label, and the tweets identified as abusive were used to train a deep-learning network. The result, according to Cornebise, is a system that can classify abuse with impressive accuracy.
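The article doesn’t include the system itself, but the basic recipe it describes (human-labeled examples used to train a text classifier that scores new tweets) can be sketched in a few lines. The following is a toy illustration in Python using scikit-learn and invented example tweets; it is not the study’s model, which was a deep-learning network trained on a far larger crowdsourced label set.

    # A minimal sketch of the general approach: train a text classifier on
    # human-labeled tweets, then use it to score unseen ones. Illustration
    # only; the study used a deep-learning network and a much larger
    # crowdsourced label set.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled examples: 1 = abusive, 0 = not abusive.
    tweets = [
        "Great reporting this morning, thank you!",
        "Get out of politics, nobody wants you here.",
        "Loved your piece on the election.",
        "You people don't belong in journalism.",
    ]
    labels = [0, 1, 0, 1]

    # Bag-of-words features plus a linear classifier: the same family of
    # model long used for spam filtering, as the article notes.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(tweets, labels)

    # Score a previously unseen tweet. Borderline cases, such as sarcasm
    # or abuse padded with positive keywords, are exactly where such
    # models fail and human review is still needed.
    print(model.predict_proba(["Nobody wants to read what you write."])[0, 1])

The linear model here stands in for the spam-style first pass the article mentions; in the study, the volunteers’ judgments on tweets like these then supplied training labels for the deep network.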

The project focused on tweets sent to politicians and journalists: 6,500 volunteers from 150 countries helped classify abuse in 228,000 tweets sent to 778 women in those roles in the UK and US in 2017.

The study examined tweets sent to female members of the UK Parliament and the US Congress and Senate, as well as women journalists from publications like the Daily Mail, Gal Dem, the Guardian, Pink News, and the Sun in the UK and Breitbart and the New York Times in the US.

It found that 1.1 million abusive tweets were sent to the 778 women in this period, the equivalent of one every 30 seconds. It also found that 7.1% of all tweets sent to women in these roles were abusive. The researchers have also released a tool, called Troll Patrol, that tests whether a given tweet constitutes abuse or harassment.
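As a quick sanity check on that figure: a year contains about 31.5 million seconds, so 1.1 million tweets spread over the year works out to roughly one every 29 seconds.

    31,536,000 seconds ÷ 1,100,000 tweets ≈ 28.7 seconds per tweet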

While the deep-learning approach was a big improvement on existing methods for spotting abuse, the researchers warn that machine learning alone will not be enough to identify trolling every time. Cornebise says the tool is often as accurate as human moderators but remains prone to error. “Some human judgment will be required for the foreseeable future,” he says.

Twitter has been widely criticized for not doing more to police its platform. Milena Marin, who worked on the project at Amnesty International, says the company should at least be more transparent about its policing methods.

“Troll Patrol isn’t about policing Twitter or forcing it to remove content,” says Marin. But she warned, “Twitter must start being transparent about how exactly it is using machine learning to detect abuse, and publish technical information about the algorithms it relies on.”

In response to the report, Twitter legal officer Vijaya Gadde pointed to the problem of defining abuse. “I would note that the concept of ‘problematic’ content for the purposes of classifying content is one that warrants further discussion,” Gadde said in a statement. “We work hard to build globally enforceable rules and have begun consulting the public as part of the process.”

 
