Heavier use of the world’s dominant social network has now been strongly linked to an increase in attacks on refugees in Germany.
Greater use, greater violence: Specifically, in towns where “per-person Facebook use rose to one standard deviation above the national average,” attacks on refugees “increased by about 50 percent,” the New York Times reported today, citing a University of Warwick study.
Researchers there carried out a detailed analysis of more than 3,000 incidents in Germany over a two-year period. Crucially, the link held true regardless of a town’s size, political leanings, or economic status—and didn’t correlate with general patterns of internet use. Those findings strengthen the case that Facebook use in particular can itself drive violence.
Greater scrutiny: That’s more bad news for the embattled social network, which has long portrayed itself as a benevolent enterprise driven by a mission to draw the world closer together. Researchers recently found that coordinated hate speech and propaganda on the site helped fuel violence in Myanmar. And last year, Facebook itself acknowledged that Russian agents had published tens of thousands of inflammatory posts—which reached tens of millions of people—before and after the 2016 US presidential election, in a massive campaign to deepen divisions in the United States.
Like-minded bubbles: Researchers told the Times that Facebook’s algorithm tends to funnel users into like-minded bubbles where they are isolated from moderating influences, leading them to believe that support for violence is more widely shared and accepted than it truly is. Facebook, which declined to comment directly on the study, has been slow to acknowledge the severity of the challenges it is facing. And it continues to struggle with what steps it can, or should, take to prevent the wide dissemination of hate speech and misinformation across the site.
Update: After the Times published this story, critics highlighted several potential issues with the methodology of the study, including the researchers' inability to directly measure Facebook use or track data in real time, as The Interface newsletter explains in greater detail. The study also wasn't peer reviewed, and the 50 percent figure noted above was subsequently revised down to 35 percent.