
DeepMind is asking how AI helped turn the internet into an echo chamber

Researchers found that the more accurately a recommendation engine pegs your interests, the faster it traps you in an information bubble.
March 7, 2019
Ms. Tech | Photos: YouTube

One of the most common applications of machine learning today is in recommendation algorithms. Netflix and YouTube use them to push you new shows and videos; Google and Facebook use them to rank the content in your search results and news feed. While these algorithms offer a great deal of convenience, they have some undesirable side effects. You’ve probably heard of them before: filter bubbles and echo chambers.

Concern about these effects is not new. In 2011, Eli Pariser, now the CEO of Upworthy, warned about filter bubbles on the TED stage. Even before that, in his 2001 book Republic.com, Harvard law professor Cass Sunstein accurately predicted a “group polarization” effect, driven by the rise of the internet, that would ultimately challenge a healthy democracy. Facebook wouldn’t exist for another three years.

Both ideas were quickly popularized in the aftermath of the 2016 US election, which prompted a surge of related research. Now Google’s own AI subsidiary, DeepMind, is adding to the body of scholarship. (Better late than never, right?)

In a new paper, the researchers analyzed how different recommendation algorithms can speed up or slow down both phenomena, which they define separately. Echo chambers, they say, reinforce users’ interests through repeated exposure to similar content. Filter bubbles, by comparison, narrow the scope of content users are exposed to in the first place. Both are examples of what academics call “degenerate feedback loops,” and a higher level of degeneracy means a stronger filter bubble or echo chamber effect.
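Roughly speaking, the dynamic looks like the following toy Python simulation (an illustrative sketch, not the paper’s actual model). A recommender always serves its top-scoring topic, and each click reinforces the very score that produced the recommendation:

import random

random.seed(0)

topics = ["politics", "sports", "science", "music", "cooking"]
interest = {t: 1.0 for t in topics}  # the system's estimate of the user's interests

for _ in range(100):
    # Pure exploitation: always recommend the highest-scoring topic.
    pick = max(interest, key=interest.get)
    # Suppose the user clicks roughly in proportion to that estimate;
    # each click then reinforces the estimate behind the recommendation.
    if random.random() < interest[pick] / sum(interest.values()):
        interest[pick] += 1.0

print(interest)  # one topic's score snowballs; the rest are rarely served again

Repeated exposure narrowing future exposure is exactly the degeneracy the paper measures.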

They ran simulations of five recommendation algorithms that differed in how heavily they prioritized accurately predicting what the user was already interested in over randomly promoting new content. The more heavily an algorithm prioritized accuracy, they found, the faster the system degenerated. In other words, the best way to combat filter bubbles and echo chambers is to design algorithms to be more exploratory, showing you things that are less certain to capture your interest. Expanding the overall pool of content from which recommendations are drawn can also help.
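One standard way to build in that exploration is an epsilon-greedy policy, sketched below in Python (a generic technique, not necessarily one of the five algorithms the paper tested): with some small probability, the system ignores its own predictions entirely.

import random

def recommend(interest: dict, epsilon: float = 0.2) -> str:
    """Epsilon-greedy pick: usually the predicted favorite,
    occasionally a uniformly random topic from the full catalog."""
    if random.random() < epsilon:
        return random.choice(list(interest))  # explore: any topic at all
    return max(interest, key=interest.get)    # exploit: best prediction

Substituted for the pure exploitation step in the toy loop above, even a modest epsilon keeps other topics surfacing and slows the collapse; raising epsilon trades prediction accuracy for diversity.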

Joseph A. Konstan, a computer science professor at the University of Minnesota, who has previously conducted research on filter bubbles, says the results from DeepMind’s analysis are not surprising. Researchers have long understood the tension between accurate prediction and effective exploration in recommendation systems, he says.

Despite past studies showing that users will tolerate lower levels of accuracy to gain the benefit of diverse recommendations, developers still have a disincentive to design their algorithms that way. “It is always easier to ‘be right’ by recommending safe choices,” Konstan says.

Konstan also critiques the DeepMind study for approaching filter bubbles and echo chambers as machine-learning simulations rather than interactive systems involving humans—a limitation the researchers noted as well. “I am always concerned about work that is limited to simulation studies (or offline data analyses),” he says. “People are complex. On the one hand we know they value diversity, but on the other hand we also know that if we stretch the recommendations too far—to the point where users feel we are not trustworthy—we may lose the users entirely.”

Correction: The headline was updated to better reflect the scope of the research.
