The mass shooting in New Zealand shows how broken social media is

A gunman live-streamed the murder of dozens of innocents in two mosques in Christchurch, New Zealand, on Friday—and the world got a terrible reminder of how poorly existing social-media policies and algorithms police violent and offensive content.

In the days before the shooting, the perpetrator apparently boasted of his plans and posted an online manifesto. He then broadcast the horrific act live on Facebook. The attack left 49 people dead and dozens more injured.

Live stream: Over the past 18 months, following harassment and fake-news scandals, social-media companies have invested heavily in content moderators. But this did little to stop video of the shooting from spreading. Not only was the live stream reportedly up for 20 minutes, but the resulting video was then reposted on YouTube, with some clips remaining up for over an hour.

Several factors contributed to letting the footage slip through the filters, according to experts.

Real-time challenge: It’s vital to catch a video quickly, so that it doesn’t spread onto other platforms. But social-media moderation simply isn’t geared toward catching content in real time. It is impossible to automate the process effectively, and identifying live streams that need to be shut down manually is “like finding a needle in the haystack of data that’s flowing over the network all the time,” says Charles Seife, a professor at NYU’s School of Journalism. He adds that Facebook could require users to build up a reputation before letting them live-stream content, to reduce the risks.

Whack-a-mole: Moderators are overwhelmed at the best of times. Video of the shooting hosted on YouTube most likely spread so quickly that the humans employed to check for inappropriate content didn’t have time to catch everything. These workers typically have a few seconds to make a call. The process can be partly automated, but those who reposted the footage apparently clipped it and introduced distortions to avoid these algorithms.
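The evasion tactic described above works because automated duplicate detection typically relies on perceptual fingerprints of the footage. Here is a minimal sketch, assuming an average-hash scheme (the article does not name the platforms' actual algorithms, and real systems are far more robust): a frame is reduced to a tiny grayscale grid, each cell is compared with the mean, and the resulting bits form a fingerprint. A lightly re-encoded copy still matches, but a cropped or shifted copy flips enough bits to slip past a simple distance threshold.

```python
def average_hash(pixels):
    """Fingerprint an 8x8 grayscale grid (a list of 64 ints, 0-255)."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical frame: a bright band on a dark background.
frame = [200 if 16 <= i < 48 else 20 for i in range(64)]
reupload = [p + 5 for p in frame]    # mild re-encoding noise
distorted = frame[8:] + frame[:8]    # cropped/shifted copy

original = average_hash(frame)
print(hamming(original, average_hash(reupload)))   # → 0: caught as a duplicate
print(hamming(original, average_hash(distorted)))  # → 16: may evade a tight threshold
```

This is only an illustration of the cat-and-mouse dynamic: small pixel-level noise leaves the fingerprint intact, while the kind of clipping and distortion the article describes changes it substantially.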

Algorithmic failure: Social-media companies also use algorithmic tweaks to de-prioritize suspicious content. But Mike Ananny, an associate professor at the University of Southern California, says these algorithms were probably thrown by the popularity of the offending videos.

Not our problem: These factors reflect a deeper systemic issue—Facebook, YouTube, and other big social platforms do not see themselves as arbiters of content in the first place. Research suggests that platforms could police far-right sources of information more proactively to keep violent or hateful material from spreading. “They have this attitude of being post hoc,” says Ananny. “It’s a deep cultural thing.”