Artificial intelligence

DeepMind is asking how AI helped turn the internet into an echo chamber

Researchers found that the more accurately a recommendation engine pegs your interests, the faster it traps you in an information bubble.
March 7, 2019
Ms. Tech | Photos: YouTube

One of the most common applications of machine learning today is in recommendation algorithms. Netflix and YouTube use them to push you new shows and videos; Google and Facebook use them to rank the content in your search results and news feed. While these algorithms offer a great deal of convenience, they have some undesirable side effects. You’ve probably heard of them before: filter bubbles and echo chambers.

Concern about these effects is not new. In 2011, Eli Pariser, now the CEO of Upworthy, warned about filter bubbles on the TED stage. Even before that, in his book Republic.com, Harvard law professor Cass Sunstein accurately predicted a “group polarization” effect, driven by the rise of the Internet, that would ultimately challenge a healthy democracy. Facebook wouldn’t exist for another three years.

Both ideas were quickly popularized in the aftermath of the 2016 US election, which spurred a surge of related research. Now Google’s own AI subsidiary, DeepMind, is adding to the body of scholarship. (Better late than never, right?)

In a new paper, researchers analyzed how different recommendation algorithms can speed up or slow down both phenomena, which they define separately. Echo chambers, they say, reinforce users’ interests through repeated exposure to similar content. Filter bubbles, by comparison, narrow the scope of content users are exposed to. Both are examples of what academics call “degenerate feedback loops,” with a higher level of degeneracy corresponding to a stronger filter bubble or echo chamber effect.

They ran simulations of five different recommendation algorithms, which varied in how heavily they prioritized accurately predicting a user’s existing interests over randomly promoting new content. The algorithms that prioritized accuracy more highly, they found, led to much faster system degeneracy. In other words, the best way to combat filter bubbles or echo chambers is to design the algorithms to be more exploratory, showing you things that are less certain to capture your interest. Expanding the overall pool of content from which recommendations are drawn can also help.
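To make the tradeoff concrete, here is a minimal toy sketch in Python. It is not DeepMind’s actual simulation; the topic count, the feedback rule, and the epsilon values are all illustrative assumptions. A purely greedy recommender that always serves the best-predicted topic collapses onto a single topic almost immediately, while an epsilon-greedy variant that occasionally promotes random content keeps the mix of recommended topics broader, measured here by the entropy of what gets shown.

```python
# Toy sketch of a degenerate feedback loop (illustrative only, not the
# paper's model): a recommender picks one of several topics, and every
# exposure nudges the user's estimated interest in that topic upward,
# making it even more likely to be recommended next time.
import random
from collections import Counter
from math import log

def simulate(epsilon, topics=10, steps=2000, seed=0):
    rng = random.Random(seed)
    interest = [1.0] * topics          # estimated interest per topic
    shown = Counter()
    for _ in range(steps):
        if rng.random() < epsilon:     # explore: promote a random topic
            pick = rng.randrange(topics)
        else:                          # exploit: recommend the top estimate
            pick = max(range(topics), key=lambda t: interest[t])
        shown[pick] += 1
        interest[pick] *= 1.05         # exposure reinforces the estimate
    # Shannon entropy of the topics shown: lower entropy = narrower bubble.
    total = sum(shown.values())
    return -sum((c / total) * log(c / total, 2) for c in shown.values())

for eps in (0.0, 0.05, 0.2):
    print(f"epsilon={eps:.2f}  topic entropy={simulate(eps):.2f} bits")
```

Running the script prints one entropy figure per epsilon: the fully greedy case sinks to zero bits, the signature of a degenerate loop, while even a small amount of exploration keeps the distribution of recommended topics noticeably wider.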

Joseph A. Konstan, a computer science professor at the University of Minnesota, who has previously conducted research on filter bubbles, says the results from DeepMind’s analysis are not surprising. Researchers have long understood the tension between accurate prediction and effective exploration in recommendation systems, he says.

Despite past studies showing that users will tolerate lower levels of accuracy to gain the benefit of diverse recommendations, developers still have a disincentive to design their algorithms that way. “It is always easier to ‘be right’ by recommending safe choices,” Konstan says.

Konstan also critiques the DeepMind study for approaching filter bubbles and echo chambers as machine-learning simulations rather than interactive systems involving humans—a limitation the researchers noted as well. “I am always concerned about work that is limited to simulation studies (or offline data analyses),” he says. “People are complex. On the one hand we know they value diversity, but on the other hand we also know that if we stretch the recommendations too far—to the point where users feel we are not trustworthy—we may lose the users entirely.”

Correction: The headline was updated to better reflect the scope of the research.
