Facebook’s Filter Study Raises Questions About Transparency

Social scientists would like Facebook to be more open about its goals and guidelines for research on user behavior.

Facebook’s latest scientific research, about the way it shapes the political perspectives users are exposed to, has led some academics to call for the company to be more open about what it chooses to study and publish.

This week the company’s data science team published a paper in the prominent journal Science confirming what many had long suspected: that the network’s algorithms filter out some content that might challenge a person’s political leanings. However, the paper also suggested that the effect was fairly small, and less significant than a user’s own filtering behavior (see “Facebook Says You Filter News More Than Its Algorithm Does”).

Several academics have pointed to limitations of the study, such as the fact that the only users included were those who had indicated their political affiliation on their Facebook page. Critics point out that those users might behave differently from everyone else. Beyond that, a few academics have noted a potential tension between Facebook’s desire to explore the scientific value of its data and its own corporate interests.

Zeynep Tufekci, an assistant professor at the University of North Carolina, says the study is fascinating but adds that she would like to see more transparency about the way research is conducted at Facebook. “The study is interesting; I’m thrilled they’re publishing this stuff,” says Tufekci. “But who knows what else they found?”

Tufekci suggests that the new paper, which shows the “filter bubble” phenomenon to be less pronounced than some had thought, is not alone: several other Facebook papers have also painted the network in a positive light.

Facebook has published several important social-science studies in recent years. The enormous amount of data it collects is extremely valuable as an academic resource (see “What Facebook Knows”).

Indeed, many social scientists outside the company are keen to tap into Facebook’s data. Christian Sandvig, a professor at the University of Michigan, has used Facebook’s application programming interface (normally used to develop games or apps that run on Facebook) to study its users. “There’s a huge amount of data, and everybody’s hoping we can benefit from it,” he says.

But Sandvig also thinks greater transparency might help. “If a study is published in Science, and all three authors work for a pharmaceutical company and it says something positive about that company, we have a way to think about that,” he says.

Facebook’s approach to scientific research is evidently evolving. Last July, its data team published a study showing that both positive and negative emotions can spread between users (see “Facebook’s Emotion Study Is Just the Latest Effort to Prod Users”). That study proved controversial because the company had manipulated the information some users saw in a way that made them feel more depressed (in fact, the changes made to users’ news feeds were minuscule, but the principle still upset many).

In response to the controversy over that study, Facebook’s chief technology officer, Mike Schroepfer, wrote a Facebook post that acknowledged people’s concerns and described new guidelines for its scientific research. “We’ve created a panel including our most senior subject-area researchers, along with people from our engineering, research, legal, privacy and policy teams, that will review projects falling within these guidelines,” he wrote. When asked how Facebook decides what to publish, a spokeswoman referred MIT Technology Review to those guidelines.

Some academics outside Facebook believe those guidelines could be refined further, for example, to guarantee that researchers can publish anything of scientific interest. “I want to empower the data team at Facebook,” Tufekci adds. “My problem is I don’t think they’re given the independence they should have.”
