
Evidence is piling up that Facebook can incite violence

August 21, 2018

Higher use of the world’s dominant social network has now been strongly linked with more attacks on refugees in Germany.

Greater use, greater violence: Specifically, in towns where “per-person Facebook use rose to one standard deviation above the national average,” attacks on refugees “increased by about 50 percent,” the New York Times reported today, citing a University of Warwick study.

Researchers there carried out a detailed analysis of more than 3,000 incidents in Germany over a two-year period. Crucially, the link held true regardless of the city’s size, political leanings, or economic status—and didn’t correlate with general patterns of internet use. Those findings strengthen the case that using Facebook in particular can be a driving mechanism of greater violence.

Greater scrutiny: That’s more bad news for the embattled social network, which has long portrayed itself as a benevolent enterprise driven by a mission to draw the world closer together. But researchers recently found that coordinated hate speech and propaganda on the site helped fuel violence in Myanmar. And last year, Facebook itself eventually acknowledged that Russian agents had posted tens of thousands of inflammatory posts—which reached tens of millions of people—before and after the 2016 presidential election, in a massive campaign to deepen divisions in the United States.

Like-minded bubbles: Researchers told the Times that Facebook’s algorithm tends to funnel users into like-minded bubbles, isolating them from moderating influences and leading them to believe that support for violence is more widely shared and accepted than it truly is. Facebook, which declined to comment directly on the study, has been slow to acknowledge the severity of the challenges it faces. And it continues to struggle with what steps it can, or should, take to prevent the wide dissemination of hate speech and misinformation across the site.

Update: After the Times published this story, critics highlighted several potential issues with the study’s methodology, including the researchers’ inability to directly measure Facebook use or track data in real time, as The Interface newsletter explains in greater detail. The study also wasn’t peer reviewed, and the 50 percent figure noted above was subsequently revised down to 35 percent.

Illustration by Rose Wong
