Twitter can be a toxic place. In recent years, trolling and harassment on the site have made it an extremely unpleasant and upsetting experience for many people, particularly women and minorities. But automatically identifying and stopping such abuse is difficult to do accurately and reliably. This is because, for all the recent progress in AI, machines generally still struggle to respond meaningfully to human communication. For example, AI usually finds it hard to pick up on abusive messages that may be sarcastic or disguised with a sprinkling of positive keywords.
A new study has used cutting-edge machine learning to get a more accurate snapshot of the scale of harassment on Twitter. Its analysis confirms what many people will already suspect: female and minority journalists and politicians face a shocking amount of abuse on the platform.
The study, carried out by Amnesty International in collaboration with the Canadian firm ElementAI, shows that black women politicians and journalists are 84% more likely to be mentioned in abusive or “problematic” tweets than white women in the same professions.
“It’s just maddening,” says Julien Cornebise, director of research at ElementAI’s London office, which focuses on humanitarian applications of machine learning. “These women are a big part of how society works.”
ElementAI researchers first used a machine-learning tool, similar to those used to filter spam, to flag potentially abusive tweets. They then gave volunteers a mix of pre-classified and previously unseen tweets to label. The tweets identified as abusive were used to train a deep-learning network. The result is a system that can classify abuse with impressive accuracy, according to Cornebise.
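The comparison to spam filtering is apt: both start from human-labeled examples and learn which word patterns distinguish the two classes. As a rough illustration of that idea only — the study’s actual model was a deep neural network trained on hundreds of thousands of crowdsourced labels, and the toy data below is entirely made up — here is a minimal naive Bayes text classifier of the kind long used for spam:

```python
import math
from collections import Counter

# Toy labeled examples, purely illustrative -- not data from the study.
TRAIN = [
    ("you are brilliant and inspiring", "ok"),
    ("great interview thank you", "ok"),
    ("loved your article today", "ok"),
    ("you are a disgusting idiot", "abusive"),
    ("shut up you stupid woman", "abusive"),
    ("go away idiot nobody wants you", "abusive"),
]

def tokenize(text):
    return text.lower().split()

def train(examples):
    """Count word frequencies per label (multinomial naive Bayes)."""
    counts = {"ok": Counter(), "abusive": Counter()}
    label_totals = Counter()
    for text, label in examples:
        label_totals[label] += 1
        counts[label].update(tokenize(text))
    return counts, label_totals

def classify(text, counts, label_totals):
    """Return the label with the highest log-probability, with Laplace smoothing."""
    vocab = set(w for c in counts.values() for w in c)
    best, best_score = None, float("-inf")
    for label in counts:
        # Prior: fraction of training examples with this label.
        score = math.log(label_totals[label] / sum(label_totals.values()))
        total = sum(counts[label].values())
        # Likelihood of each word under this label, smoothed so unseen
        # words do not zero out the probability.
        for word in tokenize(text):
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

counts, totals = train(TRAIN)
```

A model like this is easy to fool — which is exactly the weakness of keyword-level methods that the article notes, and part of why the study moved to deep learning and still kept humans in the loop.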
The project focused on tweets sent to politicians and journalists. In the study, 6,500 volunteers from 150 countries helped classify abuse in 228,000 tweets sent to 778 women politicians and journalists in the UK and US in 2017.
The study examined tweets sent to female members of the UK Parliament and the US Congress and Senate, as well as women journalists from publications like the Daily Mail, Gal Dem, the Guardian, Pink News, and the Sun in the UK and Breitbart and the New York Times in the US.
It found that 1.1 million abusive tweets were sent to the 778 women over this period, the equivalent of one every 30 seconds. It also found that 7.1% of all tweets sent to women in these roles were abusive. The researchers behind the study have also released a tool, called Troll Patrol, to test whether a tweet constitutes abuse or harassment.
While the deep-learning approach was a big improvement on existing methods for spotting abuse, the researchers warn that machine learning or AI will not be enough to identify trolling all the time. Cornebise says the tool is often as good as human moderators but is also prone to error. “Some human judgment will be required for the foreseeable future,” he says.
Twitter has been widely criticized for not doing more to police its platform. Milena Marin, who worked on the project at Amnesty International, says the company should at least be more transparent about its policing methods.
“Troll Patrol isn’t about policing Twitter or forcing it to remove content,” says Marin. But she warned, “Twitter must start being transparent about how exactly it is using machine learning to detect abuse, and publish technical information about the algorithms it relies on.”
In response to the report, Twitter legal officer Vijaya Gadde pointed to the problem of defining abuse. “I would note that the concept of ‘problematic’ content for the purposes of classifying content is one that warrants further discussion,” Gadde said in a statement. “We work hard to build globally enforceable rules and have begun consulting the public as part of the process.”