Facebook Says You Filter News More Than Its Algorithm Does

A Facebook study of 10 million users shows that your selection of friends holds more sway than filtering algorithms when it comes to seeing news from opposing political viewpoints.

Ever wonder how much of the news you disagree with politically gets sorted out of your News Feed by Facebook’s algorithm? Not much, the social network says.

Facebook studied millions of its most political users and determined that while its algorithm tweaks what you see most prominently in your feed, you’re the one really limiting how much news and opinion you take in from people of different political viewpoints.

In an effort to explore how people consume news shared by friends of different ideological leanings, Facebook’s researchers pored over millions of URLs shared by its U.S.-based users who identify themselves in their profiles as politically liberal or conservative. The work, which sheds more light on how we glean information from our ever-growing, technologically enhanced tangles of social connections, was published in a paper in Science on Thursday.

Eytan Bakshy, a research scientist on Facebook’s data science team and coauthor of the paper, says the group found that Facebook’s News Feed algorithm only slightly decreases users’ exposure to news shared by those with opposing viewpoints.

“In the end, we find individual choices, both in terms of who they choose to be friends with and what they select, matters more than the effect of algorithmic sorting,” he says.

The work comes more than three years after Bakshy and other researchers concluded that while you’re more likely to look at and share information with your closest connections, most of the information you get on Facebook stems from the web of people you’re weakly connected to—refuting the idea that online social networks create “filter bubbles” limiting what we see to what we want to see (see “What Facebook Knows”).

However, Bakshy says, the previous research, published in 2012, didn’t directly measure the extent to which you’re exposed to information from people whose ideological viewpoints are opposite from yours.

In an effort to sort that out, researchers looked at anonymized data for 10.1 million Facebook users who identify themselves as liberal or conservative, and seven million URLs for news stories shared on Facebook from July 7, 2014, to January 7, 2015. After using software to identify URLs pointing to “hard” news stories (pieces focused on topics like national news and politics) that were shared by at least 20 users with a listed political affiliation, researchers labeled each story as liberal, neutral, or conservative in alignment, depending on the average political leaning of those who shared it.
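The paper’s classification pipeline isn’t reproduced here, but the labeling step described above can be sketched in a few lines. In the sketch below, the -1-to-+1 leaning scale, the 0.25 cutoff, and the function name are illustrative assumptions rather than details from the study; only the 20-sharer minimum and the average-leaning rule come from the description above.

```python
# Hedged sketch of the story-labeling step described above.
# The scoring scale, cutoff, and function name are assumptions for
# illustration, not the paper's actual implementation.
from statistics import mean

def label_story(sharer_leanings, min_sharers=20, cutoff=0.25):
    """Label a story's alignment from its sharers' self-reported leanings.

    sharer_leanings: list of scores from -1.0 (liberal) to +1.0 (conservative),
    one per sharer who lists a political affiliation in their profile.
    """
    if len(sharer_leanings) < min_sharers:
        return None  # too few affiliated sharers to label reliably
    score = mean(sharer_leanings)
    if score <= -cutoff:
        return "liberal"
    if score >= cutoff:
        return "conservative"
    return "neutral"

# Example: a story shared mostly by self-identified conservatives
print(label_story([0.8] * 15 + [-0.5] * 5 + [0.6] * 5))  # -> "conservative"
```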

Researchers found that 24 percent of the “hard” stories shared by liberal Facebook users’ friends were conservatively aligned, while 35 percent of those shared by conservative users’ friends were liberally aligned, working out to an overall average of 29.5 percent exposure to content from the other side of the political spectrum.

The researchers also looked at the impact of Facebook’s News Feed ranking algorithm on the kind of news you see. Bakshy says that overall, the algorithm reduces users’ exposure to content from friends who have opposing viewpoints by less than 1 percentage point—from 29.5 percent to 28.9 percent.
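A quick back-of-the-envelope check, using only the figures quoted above, shows how small the algorithmic effect is relative to what friends share in the first place (treating the 29.5 percent figure as a simple average of the two groups is an assumption for illustration):

```python
# Arithmetic behind the figures quoted above (assumed to be a simple
# two-group average; values are the percentages reported in the article).
liberal_cross_cutting = 0.24        # conservative-aligned stories shared by liberals' friends
conservative_cross_cutting = 0.35   # liberal-aligned stories shared by conservatives' friends

shared_by_friends = (liberal_cross_cutting + conservative_cross_cutting) / 2
print(f"Cross-cutting content shared by friends: {shared_by_friends:.1%}")  # 29.5%

shown_by_news_feed = 0.289          # cross-cutting content surfaced after ranking
reduction = (shared_by_friends - shown_by_news_feed) * 100
print(f"Reduction attributable to ranking: {reduction:.1f} percentage points")  # 0.6
```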

And when it came to what users actually ended up reading, researchers report that conservatives were 17 percent less likely to click on liberally aligned articles than on other “hard” stories in their news feeds, while liberals were 6 percent less likely to click on conservatively aligned articles presented to them.

Sharad Goel, an assistant professor at Stanford who has studied filter bubbles, says people in the field have discussed this question for several years, but only Facebook was in a position to explore it. One thing worth keeping in mind, he says, is that people may get their news from many sources, which can dwarf the impact of what they see on Facebook.

“I do agree with one of their main messages—that the algorithm itself is not driving a lot of polarization,” he says.
