
Technologists are trying to fix the “filter bubble” problem that tech helped create

But research shows online polarization isn’t as clear-cut as people think.
August 22, 2018

Last fall, Deb Roy, one of the US’s foremost experts on social media, attended a series of roundtables in small towns in middle America—places like Platteville, Wisconsin, and Anamosa, Iowa. It wasn’t what Roy, who runs the Laboratory for Social Machines at the MIT Media Lab, was used to: there were no computer screens in the rooms, no tweets or posts to examine. Instead, he just listened to community leaders and local residents talk, face to face, about their neighbors.

What he heard alarmed him greatly.

“‘I found out what they said on Facebook,’” Roy recalls one elderly woman saying. “‘Their views are so extreme and so unacceptable to me that I no longer see the point in engaging with them.’” It was a sentiment he heard again and again.

“These are people they see on a regular basis in their small towns,” he says. “They used to agree to disagree. When divisiveness and balkanization are reflected at the hyperlocal level—where even when we have access to one another, the digital realm is actually silencing our speech and cutting us off from one another in the physical realm—something is profoundly wrong.”

In 2014, Roy set up his MIT lab specifically to study, among other things, how social media could be used to help break through the partisan arguing that typically divides people. He may be uniquely positioned to make such an attempt. From 2013 to 2017, the Canadian-born engineer served as “chief media scientist” for Twitter, collecting and analyzing social-media chatter. When he opened his lab, Twitter not only granted him full access to “the firehose”—the full stream of every public tweet, in real time—but ponied up $10 million to help him make sense of all this information about people’s interests, preferences, and activities, and find ways to use it for public benefit.

For Roy and a number of other researchers who study the internet’s impact on society, the most concerning problem highlighted by the 2016 election isn’t that the Russians used Twitter and Facebook to spread propaganda, or that the political consulting firm Cambridge Analytica illicitly gained access to the private information of more than 50 million Facebook users. It’s that we have all, quite voluntarily, retreated into hyperpartisan virtual corners, owing in no small part to social media and internet companies that determine what we see by monitoring what we have clicked on in the past and giving us more of the same. In the process, opposing perspectives are sifted out, and we’re left with content that reinforces what we already believe.

This is the famous “filter bubble,” a concept popularized in the 2011 book of the same name by Eli Pariser, an internet activist and founder of the viral video site Upworthy. “Ultimately, democracy works only if we citizens are capable of thinking beyond our narrow self-interest,” wrote Pariser. “But to do so, we need a shared view of the world we coinhabit. The filter bubble pushes us in the opposite direction—it creates the impression that our narrow self-interest is all that exists.”

Or does it? The research suggests that things are not quite that simple.

Some kind of war

The legal scholar Cass Sunstein warned way back in 2007 that the internet was giving rise to an “era of enclaves and niches.” He cited a 2005 experiment in Colorado in which 60 Americans from conservative Colorado Springs and liberal Boulder, two cities about 100 miles apart, were assembled into small groups and asked to deliberate on three controversial issues (affirmative action, gay marriage, and an international treaty on global warming). In almost every case, people held more extreme positions after they spoke with like-minded others.

“The Internet makes it exceedingly easy for people to replicate the Colorado experiment online, whether or not that is what they are trying to do,” Sunstein wrote in the Chronicle of Higher Education. “There is a general risk that those who flock together, on the Internet or elsewhere, will end up both confident and wrong, simply because they have not been sufficiently exposed to counterarguments. They may even think of their fellow citizens as opponents or adversaries in some kind of ‘war.’”

But is social media really at fault here? In a 2017 study published in Proceedings of the National Academy of Sciences, researchers at Stanford University examined political polarization in the US and found that it was increasing fastest among the demographic groups least likely to use social media and the internet. “The 65-year-olds are polarizing more quickly than the younger age group, which is the opposite of what you’d expect if social media and the internet were the driver,” says Levi Boxell, the lead author of the study.

What’s more, most people aren’t as stuck in echo chambers as some would have us think, according to Grant Blank, a research fellow at the Oxford Internet Institute, and collaborators who surveyed adults in the UK and Canada.

“We have five different ways in which the echo chamber could be defined, and it really doesn’t matter which one you use because the results are very consistent across all of them—there is no echo chamber,” Blank says. “People actually read lots of media. They consume, on average, five different media sources, about three offline and two online, and they encounter diverse opinions. People encounter things they disagree with, and they change their mind based on things that they encounter in media.”

Even Pariser, who gave the filter bubble its name, agrees the internet isn’t entirely to blame. Filter bubbles might explain why liberal elites didn’t see Trump coming, since a large portion of middle America was absent from their social-media feeds; indeed, Blank’s work concluded that most researchers who found such an effect were studying only these cultural elites. But for most Trump supporters, talk radio, local news, and Fox—a pre-internet filter bubble—were far more important sources than tweets or fake news on Facebook.

Data from the Pew Research Center backs up the idea that polarization doesn’t come just from the internet. After the 2016 election, Pew found that 62 percent of Americans got news from social-media sites, but—in a parenthetical ignored in most articles about the study—only 18 percent said they did so “often.” A more recent Pew study found that only about 5 percent said they had “a lot” of trust in the information.

“The internet is absolutely not the causal factor here,” says Ethan Zuckerman, who directs MIT’s Center for Civic Media. “But I think we’re experiencing a phenomenon that began with Fox News and now is sort of extending into the social-media space.”

Right. So what, if anything, can we do about it?

Three attempts at a fix

After the 2016 election, Zuckerman and some collaborators designed a tool called Gobo that lets people adjust their own bubbles via sliders that control content filters. For instance, the “politics” slider ranges from “my perspective” to “lots of perspectives.” Choosing the latter end exposes people to media outlets they probably wouldn’t normally see.
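
To make the mechanics concrete, here is a minimal sketch, in Python, of how such a slider might work. This is not Gobo’s actual code; the post fields and the scoring formula are invented for illustration. Each post carries an estimated political lean for its source, and the slider blends between rewarding posts close to the reader’s own lean and rewarding posts far from it.

```python
# A minimal, hypothetical sketch of a "politics" slider. This is not
# Gobo's actual code: the Post fields and the scoring formula are
# invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    source_lean: float  # estimated lean of the outlet, -1 (left) to +1 (right)

def rerank(posts: list[Post], my_lean: float, slider: float) -> list[Post]:
    """Order a feed by blending similarity and diversity.

    slider = 0.0 favors posts near my_lean ("my perspective");
    slider = 1.0 favors posts far from it ("lots of perspectives").
    """
    def score(post: Post) -> float:
        # Distance runs from 0 (same lean) to 2 (opposite poles);
        # divide by 2 to normalize into [0, 1].
        distance = abs(post.source_lean - my_lean) / 2
        return (1 - slider) * (1 - distance) + slider * distance

    return sorted(posts, key=score, reverse=True)

feed = [Post("tax cuts are working", 0.8),
        Post("expand public health care", -0.7),
        Post("local bridge reopens early", 0.0)]

# Slid all the way to "lots of perspectives," a left-leaning reader
# sees the right-leaning post first.
print([p.text for p in rerank(feed, my_lean=-0.7, slider=1.0)])
```

At 0.0 the slider behaves like an ordinary personalized feed; at 1.0 it actively promotes the most distant outlets.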

Facebook, however, showed little interest in adopting Gobo. “What Facebook is worried about is that they believe—and they’re probably right—that very few people would actually want to diversify their feed,” Zuckerman says.

Another tool, Social Mirror, was developed by members of Deb Roy’s lab. Earlier this year they reported on the results of an experiment conducted with the tool, which uses data visualization to give Twitter users a bird’s-eye view of how their network of followers and friends fits into the overall universe of Twitter. Most of those recruited to use the tool were politically active Twitter users, and many were surprised to learn just how cocooned inside far-right or far-left bubbles they were.
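
One way to picture what such a mirror reveals is a crude “cocoon score”: of the clearly partisan accounts you follow, what share sit on your own side of the spectrum? The sketch below, in Python, is a hypothetical illustration rather than the lab’s published method; the lean estimates and the neutral band are invented.

```python
# A hypothetical "cocoon score": of the clearly partisan accounts a
# user follows, what share sit on the user's own side? Invented for
# illustration; this is not the lab's published method.
from statistics import mean

def cocoon_score(followed_leans: list[float], neutral_band: float = 0.2) -> float:
    """Share of partisan followed accounts matching the user's own side.

    followed_leans: estimated leans of followed accounts, -1 to +1.
    neutral_band: accounts with |lean| <= neutral_band count as neutral.
    """
    partisan = [x for x in followed_leans if abs(x) > neutral_band]
    if not partisan:
        return 0.0
    my_side_is_right = mean(followed_leans) > 0
    same_side = [x for x in partisan if (x > 0) == my_side_is_right]
    return len(same_side) / len(partisan)

# Four of this user's five follows are clearly partisan, and three of
# those lean the same way the user does overall.
print(cocoon_score([-0.9, -0.6, -0.8, 0.1, 0.7]))  # 0.75
```

A score near 1.0 is the bird’s-eye picture that surprised many participants: a network almost entirely on one side.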

The impact of the experiment was short-lived, however. Though a week after it ended some participants were following a more diverse set of Twitter accounts than before, two to three weeks later most had gone back to homogeneity. And in another twist, people who ended up following more contrarian accounts—suggested by the researchers to help them diversify their Twitter feeds—subsequently reported that they’d be even less inclined to talk to people with opposing political views.

Lousy results such as this have led Zuckerman toward a more radical idea for countering filter bubbles: the creation of a taxpayer-funded social-media platform with a civic mission to provide a “diverse and global view of the world.”

The early United States, he noted in an essay for the Atlantic, featured a highly partisan press tailored to very specific audiences. But publishers and editors for the most part abided by a strong cultural norm, republishing a wide range of stories from different parts of the nation and reflecting different political leanings. Public broadcasters in many democracies have also focused on providing a wide range of perspectives. It’s not realistic, Zuckerman argues, to expect the same from outlets like Facebook: their business model drives them to pander to our natural human desire to congregate with others like ourselves.

A public social-media platform with a civic mission, says Zuckerman, could push unfamiliar perspectives into our feeds and push us out of our comfort zones. Scholars could review algorithms to make sure we’re seeing an unbiased representation of views. And yes, he admits, people would complain about publicly funding such a platform and question its even-handedness. But given the lack of other viable solutions, he says, it’s worth a shot.

The problem is us

Jay Van Bavel, a social psychologist at New York University, has studied social posts and analyzed which ones are most likely to gain traction. He’s found that “group identification” posts activate the most primitive, non-intellectual parts of the brain. So, for example, if a Republican politician tells people that immigrants are moving in and changing the culture or taking locals’ jobs, or if a Democrat tells female students that Christian activists want to ban women’s rights, their words have power. Van Bavel’s research suggests that if you want to overcome partisan divisions, avoid the intellect and focus on the emotions.

After the Social Mirror experiment, members of Roy’s lab debuted a project called FlipFeed, which exposed people on Twitter to others with different political views. Martin Saveski, the study’s lead author, says the point was to change how people felt about the other side. One of the experiments prompted participants to imagine, whenever they came across an opposing view, that they were disagreeing with a friend. Those given this prompt were more likely to say they would like to speak with the person in the future, and that they understood why the other person held an opposing view.

The results were congruent with another observation made by Pariser. He’s noticed that some of the best political discussions online happen in sports forums, where people are already united by a shared love of a team. The assumption there is that everyone is a fan of the team first, and conservative or liberal second. There’s an emotional connection before politics even enters the discussion.

If you look at all the various projects from Zuckerman and Roy and others, what they’re really trying to do is employ technology to get us to engage with content outside our political bubbles. But is that workable? As Roy himself says, “I don’t think there are any pure, simply technological fixes.”

Maybe in the end it’s up to us to decide to expose ourselves to content from a wider array of sources, and then to engage with it. Sound unappealing? Well, consider the alternative: your latest outraged political post didn’t accomplish much, because the research shows that anyone who read it almost certainly agreed with you already.

Adam Piore is the author of The Body Builders: Inside the Science of the Engineered Human, published in 2017.
