Quarantine for Cyberbullies: The Latest Strategy in the Fight Against Offensive Social Media Content

If people post offensive content, cutting off their contact with the network can prevent their messages from spreading, say network theorists.

Cyberbullying and hate mail are among the scourges of the modern age. There are numerous well-documented cases in which this kind of behavior has made people’s lives a misery and even led to the victim committing suicide.

Clearly, an important question is how to deal with this kind of behavior. On many social networks, it is possible to report offensive material and to block those responsible for it. On Twitter, for example, it is straightforward to block tweets from another user. But that does not stop these messages from reaching a broader community.

Today, Krystal Blanco at Boston University and a few pals say there is another way. These folks have developed a mathematical model of the way messages spread through networks like Twitter and say that a quarantine, in which offensive individuals are banned from contact with the community, is an effective way of preventing their messages from spreading.

Blanco and co begin by constructing a mathematical model of the way messages spread, based on a 1965 model of rumor-mongering. In this model, individuals are ignorants, spreaders, or stiflers of information. To these categories, the team adds a fourth: quarantined users, who are unable to contact other members of the population and so cannot spread their messages.

There are a number of free parameters in this model, such as the number of links between individuals, the percentage of people who are stiflers at the outset, and the probability that a user who is not yet a stifler becomes one.
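The paper fits its own equations and parameter values, but the flavor of such a four-compartment model is easy to convey. Here is a minimal sketch, with made-up rates and simple Euler integration rather than anything taken from the paper:

```python
# A minimal sketch of a four-compartment rumor model with quarantine.
# All rates below are illustrative assumptions, not the paper's fitted values.
# i: ignorants, s: spreaders, r: stiflers, quar: quarantined users.

def simulate(beta=0.4, gamma=0.2, q=0.0, days=60, dt=0.01):
    """Return the spreader fraction over time for a well-mixed population.

    beta  -- rate at which spreaders turn ignorants into new spreaders
    gamma -- rate at which spreaders meet non-ignorants and become stiflers
    q     -- rate at which spreaders are quarantined (q=0 means no policy)
    """
    i, s, r, quar = 0.99, 0.01, 0.0, 0.0
    spreaders_over_time = []
    for _ in range(int(days / dt)):
        spreaders_over_time.append(s)
        new_spreaders = beta * i * s      # ignorant meets spreader
        stifled = gamma * s * (s + r)     # spreader meets spreader or stifler
        removed = q * s                   # quarantined users lose all contact
        i -= new_spreaders * dt
        s += (new_spreaders - stifled - removed) * dt
        r += stifled * dt
        quar += removed * dt
    return spreaders_over_time
```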

To measure these parameters in the real world, the team searched Twitter for offensive homophobic tweets containing the words “gay” and “disgusting,” then extracted the parameters from the network associated with the spread of these messages.

They then go on to show that the rate of retweeting in a population of users is lower when some are quarantined than when they are not. “The tweet on average dies out more quickly,” they say.
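That qualitative result is straightforward to reproduce with the toy model above: switching on the quarantine rate lowers the peak and drains the spreader population faster. The quarantine rate of 0.3 here is, again, an illustrative assumption rather than a value from the paper:

```python
# Compare the spread with the quarantine switched off and on.
dt = 0.01
for label, q in [("no quarantine", 0.0), ("with quarantine", 0.3)]:
    history = simulate(q=q, dt=dt)
    print(f"{label:>15}: peak spreader fraction {max(history):.3f}, "
          f"spreaders left at day 30: {history[int(30 / dt)]:.4f}")
```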

In some ways, that is exactly what you might expect. A more difficult problem is how to enforce the quarantine effectively in the first place, something that this study sweeps under the carpet.

Possibilities include using natural-language filters to pick out tweets that are likely to be offensive and then quarantining their authors. Another is a peer-review model in which people rate the offensiveness of tweets, and those responsible for the content deemed most offensive are quarantined.
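Neither mechanism is spelled out, but the crudest version of the first would be a keyword screen along the lines of the search the team itself used to gather its data. The sketch below is purely hypothetical; the marker set, threshold, and function name are illustrative, and a real filter would need a trained classifier rather than literal word matching:

```python
# Hypothetical keyword screen; everything here is illustrative, not a
# mechanism proposed in the paper. Literal matching would also flag
# innocuous tweets, which is exactly the cut-off problem discussed below.
OFFENSIVE_MARKERS = {"gay", "disgusting"}  # the search terms the team used

def should_quarantine(recent_tweets, threshold=3):
    """Flag an account whose recent tweets repeatedly pair both markers."""
    hits = sum(
        1 for tweet in recent_tweets
        if OFFENSIVE_MARKERS <= set(tweet.lower().split())
    )
    return hits >= threshold
```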

But these approaches raise all kinds of practical questions: where should the cut-off lie between those who should be quarantined and those who should not; how long should individuals be quarantined; and so on. And what if users simply reregister under another name?

Whether such a system would be workable in practice is by no means clear. What is clear, though, is that the organizations running social networks need to take active steps toward minimizing, and even preventing, the kind of tweets that can lead to some unfortunate users paying the ultimate price.

Ref: arxiv.org/abs/1408.0694: The Dynamics of Offensive Messages in the World of Social Media: The Control of Cyberbullying on Twitter
