Yahoo Has a Tool that Can Catch Online Abuse Surprisingly Well

Computers are getting better at spotting trolls, but they often can’t understand the meaning of messages.
July 26, 2016

Trolls seem to lurk in every corner of the Internet, and they delight in ruining your day. But if our e-mail inboxes can be kept relatively spam-free, why can’t machines automatically purge abusive messages from tweets or comments?

It’s a question that seems relevant to the very fabric of Internet culture today. Last week, Twitter banned a journalist whom it accused of orchestrating a campaign of abuse aimed at one of the stars of the all-female Ghostbusters reboot. Twitter said it would introduce new guidelines and tools for reporting abuse through its service. Certainly, countless other incidents on Twitter and elsewhere go unnoticed every day.

Researchers are, in fact, making some progress toward technology that can help stop the abuse. A team at Yahoo recently developed an algorithm capable of catching abusive messages better than any other automated system to date. The researchers created a data set of abuse by collecting messages on Yahoo articles that were flagged as offensive by the company’s own comment editors.

The Yahoo team used a number of conventional techniques, including looking for abusive keywords, punctuation that often seemed to accompany abusive messages, and syntactic clues as to the meaning of a sentence.
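
To make those signals concrete, here is a rough sketch in Python of the kind of surface features such a system might extract. This is not Yahoo’s actual code; the function name surface_features, the tiny ABUSIVE_KEYWORDS list, and the specific feature choices are illustrative assumptions.

```python
import re
from collections import Counter

# Placeholder lexicon for illustration only; a real system would rely on a
# much larger, curated list of abusive terms.
ABUSIVE_KEYWORDS = {"idiot", "moron", "scum"}

def surface_features(comment: str) -> dict:
    """Extract simple keyword, punctuation, and character n-gram features."""
    tokens = re.findall(r"[a-z']+", comment.lower())
    features = Counter()
    features["abusive_keyword_count"] = sum(t in ABUSIVE_KEYWORDS for t in tokens)
    features["exclamation_runs"] = len(re.findall(r"!{2,}", comment))
    features["all_caps_words"] = sum(w.isupper() and len(w) > 2 for w in comment.split())
    # Character trigrams help catch obfuscated spellings such as "1diot".
    padded = f" {comment.lower()} "
    for i in range(len(padded) - 2):
        features[f"char3={padded[i:i + 3]}"] += 1
    return dict(features)

print(surface_features("You are such an IDIOT!!!"))
```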

But the researchers also applied a more advanced approach to automated language understanding, using a way of representing the meaning of words as vectors with many dimensions. This approach, known as “word embedding,” lets a system compare words and phrases by meaning rather than by exact wording. For instance, even if a comment contains a string of words that has not been identified as abusive, the representation of that string in vector space may be enough to identify it as such.
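
Here, as a hedged illustration rather than the researchers’ model, is one common way the idea plays out in code: map each word to a vector and represent a whole comment as the average of its word vectors, so that comments whose words have related meanings land near one another in vector space. The embed_comment function and the tiny four-dimensional TOY_EMBEDDINGS table are invented for the example; real embeddings are learned from large text corpora and have hundreds of dimensions.

```python
import numpy as np

# Invented toy vectors; pretrained embeddings (e.g., from word2vec or GloVe)
# would cover a large vocabulary with hundreds of dimensions.
TOY_EMBEDDINGS = {
    "you":    np.array([0.1, 0.0, 0.2, 0.1]),
    "are":    np.array([0.0, 0.1, 0.1, 0.0]),
    "awful":  np.array([0.9, 0.8, 0.1, 0.0]),
    "lovely": np.array([0.0, 0.1, 0.9, 0.8]),
}

def embed_comment(comment: str, dim: int = 4) -> np.ndarray:
    """Represent a comment as the average of its known word vectors."""
    vectors = [TOY_EMBEDDINGS[w] for w in comment.lower().split() if w in TOY_EMBEDDINGS]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

a = embed_comment("you are awful")
b = embed_comment("you are lovely")
# Cosine similarity: comments with different sentiment point in different directions.
print(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
```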

When everything was combined, the team was able to identify abusive messages (from its own data set) with roughly 90 percent accuracy.
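
One minimal way to picture that combination step, again as a sketch built on toy data and an off-the-shelf logistic-regression classifier rather than the model the Yahoo team actually used, is to stack the surface features and the averaged embedding into a single vector per comment and train on examples labeled by human editors. The featurize function, the toy comments, and their labels below are all assumptions made for illustration.

```python
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

ABUSIVE_KEYWORDS = {"idiot", "awful"}                         # placeholder lexicon
rng = np.random.default_rng(0)
VOCAB = "you are awful lovely such an idiot thanks great article".split()
TOY_EMBEDDINGS = {w: rng.normal(size=8) for w in VOCAB}       # stand-in word vectors

def featurize(comment: str) -> np.ndarray:
    """Concatenate simple surface features with an averaged word embedding."""
    tokens = re.findall(r"[a-z']+", comment.lower())
    surface = np.array([
        sum(t in ABUSIVE_KEYWORDS for t in tokens),                # keyword hits
        len(re.findall(r"!{2,}", comment)),                        # exclamation runs
        sum(w.isupper() and len(w) > 2 for w in comment.split()),  # shouting
    ], dtype=float)
    vectors = [TOY_EMBEDDINGS[t] for t in tokens if t in TOY_EMBEDDINGS]
    embedding = np.mean(vectors, axis=0) if vectors else np.zeros(8)
    return np.concatenate([surface, embedding])

comments = ["you are awful", "you are lovely", "such an IDIOT!!!", "thanks, great article"]
labels = [1, 0, 1, 0]                                         # 1 = flagged as abusive

X = np.stack([featurize(c) for c in comments])
clf = LogisticRegression().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

The 90 percent figure, of course, refers to the researchers’ own editor-labeled data set, not to anything a toy example like this can demonstrate.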

Catching the remaining 10 percent may prove tricky. Although AI researchers are making significant progress in training machines to parse language, artificial intelligence has yet to equip computers with the brainpower needed to untangle meaning. As a contest held at a recent AI conference shows, computers cannot yet disentangle even the simplest ambiguities in sentences.

Many tech companies, including Twitter, have AI researchers dedicated to advancing the state of the art in areas such as image recognition and text comprehension. But so far, surprisingly little effort seems to have been put into catching abuse or harassment systematically. Twitter declined to say whether its AI team is actively working on the problem (although it seems likely). Even so, the company is unlikely to introduce a magic bullet for filtering out malicious messages. The problem with automated hate filtering is that words are packed with meaning that can only be unpacked with real intelligence.

“Automatically identifying abuse is surprisingly difficult,” says Alex Krasodomski-Jones, who tracks online abuse as a researcher with the U.K.-based Centre for Analysis of Social Media. “The language of abuse is amorphous—changing frequently and often used in ways that do not connote abuse, such as when racially or sexually charged terms are appropriated by the groups they once denigrated. Given 10 tweets, a group of humans will rarely all agree on which ones should be classed as abusive, so you can imagine how difficult it would be for a computer.”

Until machines gain that kind of real intelligence, reliably filtering out hateful messages will remain impossible. But Krasodomski-Jones offers another, more human, reason why we might not want an automated solution: “In a world where what we read is increasingly dictated by algorithms and filters, we ought to be careful about demanding more computer interference.”