
Snapchat Has a Plan to Fight Fake News: Ripping the ‘Social’ from the ‘Media’

November 29, 2017

The messaging platform has a pragmatic take on how to solve our misinformation problem—but will it work?

Time was, Snapchat was effectively a messaging app. But since it added the Stories feature, which allows publishers to push content to users, it’s increasingly been dealing with media content, too. Now, Axios reports that Snapchat has redesigned its app in an attempt to pull the two back apart. In a separate post on Axios, Evan Spiegel, the CEO of Snapchat parent company Snap, explains that the move comes loaded with lofty ambitions:

The personalized newsfeed revolutionized the way people share and consume content. But let's be honest: this came at a huge cost to facts, our minds, and the entire media industry ... We believe that the best path forward is disentangling the [combination of social and media] by providing a personalized content feed based on what you want to watch, not what your friends post.

To make that a reality, Spiegel says, Snapchat will start using machine-learning tricks, similar to those employed by Netflix, to generate suggested content for users. The idea is to understand what its users have actually enjoyed looking at in the past, rather than presenting them with content that’s elevated through feeds by friends or network effects. (Snap doesn’t say what data its AI will gobble up, telling Axios only that “dozens” of signals will be fed to the beast.) The content that appears in that AI-controlled feed, which will be called the Discover section, will itself be curated by an editorial team of ... wait for it ... actual humans.
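The ranking idea Spiegel sketches — score stories by a user's own viewing signals rather than by friend activity — can be illustrated with a toy example. Everything below is a hypothetical sketch: the signal names, weights, and `Story` fields are illustrative assumptions, not Snap's actual system or its "dozens" of signals.

```python
# Toy illustration of personal-signal ranking: a story's score depends
# only on this user's own engagement history, never on friend shares.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    completion_rate: float   # fraction of similar stories the user finished (0-1); assumed signal
    watch_time_sec: float    # average time the user spent on similar content; assumed signal
    friend_shares: int       # tracked here only to show it is ignored

def score(story: Story) -> float:
    # Weighted sum of the user's *own* signals; friend_shares gets zero
    # weight -- the "social" is pulled out of the "media". Weights are
    # arbitrary placeholders.
    return 0.6 * story.completion_rate + 0.4 * min(story.watch_time_sec / 60.0, 1.0)

def rank(stories: list[Story]) -> list[Story]:
    return sorted(stories, key=score, reverse=True)

feed = rank([
    Story("Local news recap", completion_rate=0.9, watch_time_sec=45, friend_shares=2),
    Story("Viral friend clip", completion_rate=0.2, watch_time_sec=10, friend_shares=500),
])
print([s.title for s in feed])  # the widely shared clip does not outrank personal interest
```

In this sketch the heavily shared clip loses to the story the user actually watches, which is the behavior Spiegel is describing; a real system would learn the weights from data rather than hard-code them.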

It actually sounds quite sensible. And to be sure, it’s a far cry from the systems that Facebook has employed to land its users in a quagmire of misinformation. But it will be interesting to see how well it works in practice. There’s an obvious concern here: that a machine-learning algorithm will spoon-feed a deliciously predictable mush of content to its users. To that, Spiegel says it’s “important to remember that human beings write algorithms,” adding that they “can be designed to provide multiple sources of content and different points of view.”

Perhaps. But we’ll reserve judgment until those algorithms are ticking over.

