MIT Technology Review

Snapchat Has a Plan to Fight Fake News: Ripping the ‘Social’ from the ‘Media’

The messaging platform has a pragmatic take on how to solve our misinformation problem—but will it work?

Time was, Snapchat was effectively a messaging app. But since it added the Stories feature, which allows publishers to push content to users, it’s increasingly been dealing with media content, too. Now, Axios reports that Snapchat has redesigned its app in an attempt to pull the two back apart. In a separate post on Axios, Evan Spiegel, the CEO of Snapchat parent company Snap, explains that the move comes loaded with lofty ambitions:


The personalized newsfeed revolutionized the way people share and consume content. But let’s be honest: this came at a huge cost to facts, our minds, and the entire media industry … We believe that the best path forward is disentangling the [combination of social and media] by providing a personalized content feed based on what you want to watch, not what your friends post.


To make that a reality, Spiegel says, Snapchat will start using machine-learning tricks, similar to those employed by Netflix, to generate suggested content for users. The idea is to understand what its users have actually enjoyed looking at in the past, rather than presenting them with content that’s elevated through feeds by friends or network effects. (Snap doesn’t say what data its AI will gobble up, telling Axios only that “dozens” of signals will be fed to the beast.) The content that appears in that AI-controlled feed, which will be called the Discover section, will itself be curated by an editorial team of … wait for it … actual humans.
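Snap hasn't disclosed what those "dozens" of signals are or how its model combines them, but the general shape of the approach — scoring content by a user's own past engagement rather than by friends' shares — can be sketched roughly. Everything below is an illustrative assumption: the signal names (`avg_watch_time`, `completion_rate`, `recency`), the weights, and the linear scoring are hypothetical stand-ins, not Snap's actual system.

```python
# Hypothetical sketch of signal-based feed ranking. Snap has not published
# its signals or model; these names and weights are illustrative only.

def rank_feed(items, user_history, weights):
    """Order content items by a weighted sum of per-user engagement signals.

    items: list of dicts, each with a "publisher" and a "recency" score.
    user_history: per-publisher engagement stats for this user.
    weights: importance assigned to each signal.
    """
    def score(item):
        stats = user_history.get(item["publisher"], {})
        signals = {
            "watch_time": stats.get("avg_watch_time", 0.0),
            "completion_rate": stats.get("completion_rate", 0.0),
            "recency": item["recency"],
        }
        # Score reflects what this user actually watched, not what
        # friends shared -- the "social" signal is deliberately absent.
        return sum(weights[name] * value for name, value in signals.items())

    return sorted(items, key=score, reverse=True)
```

The point of the sketch is the omission: there is no "shared by friends" term in the score, which is exactly the disentangling Spiegel describes.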

It actually sounds quite sensible. And to be sure, it’s a far cry from the systems that Facebook has employed to land its users in a quagmire of misinformation. But it will be interesting to see how well it works in practice. There’s an obvious concern here: that a machine-learning algorithm will spoon-feed a deliciously predictable mush of content to its users. To that, Spiegel says it’s “important to remember that human beings write algorithms,” adding that they “can be designed to provide multiple sources of content and different points of view.”

Perhaps. But we’ll reserve judgment until those algorithms are ticking over.
