Governments around the world have used targeted online hate and harassment campaigns to intimidate or silence people.
The news: A report out today from the Institute for the Future, a California-based public policy group, details just how widespread this practice has become. Hate mobs and anonymous threats are now key tools for suppressing dissent in dictatorships and democracies alike.
How they do it: Using fake accounts, bots, and coordinated attacks by legions of followers, governments make it extremely difficult to distinguish genuine public opinion from sponsored trolling. The attacks often include threats of violence or sexual assault, especially when women are the targets.
For example: The Indian government reportedly paid throngs of people to make coordinated posts supporting Prime Minister Narendra Modi and attacking his opponents. The report details similar instances in Ecuador, Malta, and Mexico. The result has been self-censorship by journalists, arrests, and even assassinations of some of the people targeted. According to the report, the US isn’t blameless either. “The strategy of inciting or fueling trolling campaigns has been witnessed in the United States,” it says, “where hyperpartisan news outlets such as Breitbart and sources close to Trump signal to trolls who to target.”
How can we stop it? The report recommends three main avenues for creating policies that could limit state-sponsored trolling: international human rights law, US law (since most social-media companies are based in the US), and the content policies of major tech companies. But making change happen through any of these channels will require agreement from many parties, some of which have a lot to lose by upsetting the status quo.