YouTube is experimenting with ways to make its algorithm even more addictive

Publicly, the platform says it’s trying to do what it can to minimize the amplification of extreme content. But it’s still looking for ways to keep users on the site.
September 27, 2019
Illustration showing YouTube interface with ASMR videos. Ms. Tech; Images: screengrabs from YouTube | Gibi ASMR/ASMR Darling/W magazine/ASMR Glow

Recommendation algorithms are some of the most powerful machine-learning systems today because of their ability to shape the information we consume. YouTube’s algorithm, especially, has an outsize influence. The platform is estimated to be second only to Google in web traffic, and 70% of what users watch is fed to them through recommendations.

In recent years, this influence has come under heavy scrutiny. Because the algorithm is optimized for getting people to engage with videos, it tends to offer choices that reinforce what someone already likes or believes, which can create an addictive experience that shuts out other views. This also often rewards the most extreme and controversial videos, which studies have shown can quickly push people into deep rabbit holes of content and lead to political radicalization.

While YouTube has publicly said that it’s working on addressing these problems, a new paper from Google, which owns YouTube, seems to tell a different story. It proposes an update to the platform’s algorithm that is meant to recommend even more targeted content to users in the interest of increasing engagement.

Here’s how YouTube’s recommendation system currently works. To populate the recommended-videos sidebar, it first compiles a shortlist of several hundred videos by finding ones that match the topic and other features of the one you are watching. Then it ranks the list according to the user’s preferences, which it learns by feeding all your clicks, likes, and other interactions into a machine-learning algorithm.
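The two-stage pipeline described above can be sketched in a few lines. This is an illustrative toy, not YouTube's code: the class names, the topic-overlap retrieval heuristic, and the click-count preference score are all assumptions standing in for the platform's far larger neural models.

```python
# Toy sketch of a two-stage recommender:
#  Stage 1: shortlist candidates related to the video being watched.
#  Stage 2: rank the shortlist by the user's learned preferences.
from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    topics: set


def retrieve_candidates(current: Video, catalog: list, shortlist_size: int = 3) -> list:
    """Stage 1: shortlist videos that share topics with the current one."""
    scored = [(len(current.topics & v.topics), v)
              for v in catalog if v.video_id != current.video_id]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # most topic overlap first
    return [v for overlap, v in scored[:shortlist_size] if overlap > 0]


def rank_for_user(candidates: list, user_clicks: dict) -> list:
    """Stage 2: order the shortlist by past engagement with each topic."""
    def preference(v: Video) -> float:
        return sum(user_clicks.get(t, 0) for t in v.topics)
    return sorted(candidates, key=preference, reverse=True)


catalog = [
    Video("a", {"asmr", "sleep"}),
    Video("b", {"asmr", "whisper"}),
    Video("c", {"news"}),
]
watching = Video("x", {"asmr"})
shortlist = retrieve_candidates(watching, catalog)           # videos "a" and "b"
ranked = rank_for_user(shortlist, user_clicks={"whisper": 5})  # "b" outranks "a"
```

In a production system, both stages would be learned models trained on billions of interactions; the structure, though, is the same: a cheap broad filter followed by an expensive personalized ranking.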

Among the proposed updates, the researchers specifically target a problem they identify as “implicit bias.” It refers to the way recommendations themselves can affect user behavior, making it hard to decipher whether you clicked on a video because you liked it or because it was highly recommended. The effect is that over time, the system can push users further and further away from the videos they actually want to watch.

To reduce this bias, the researchers suggest a tweak to the algorithm: each time a user clicks on a video, it also factors in the video’s rank in the recommendation sidebar. Videos that are near the top of the sidebar are given less weight when fed into the machine-learning algorithm; videos deep down in the ranking, which require a user to scroll, are given more. When the researchers tested the changes live on YouTube, they found significantly more user engagement.
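The weighting idea resembles inverse propensity scoring, a standard technique for position debiasing. The sketch below is an assumption-laden illustration, not the paper's method: the geometric examination-probability model and its decay constant are invented here to show the mechanic of up-weighting clicks that required scrolling.

```python
# Minimal sketch of position debiasing via inverse propensity weighting:
# clicks on prominent slots are discounted, clicks deep in the ranking
# (which the user had to scroll to find) count for more in training.

def examination_probability(position: int, decay: float = 0.7) -> float:
    """Assumed chance that a user even sees the slot at `position` (1-indexed).
    The geometric decay is an illustrative choice, not a measured value."""
    return decay ** (position - 1)


def click_weight(position: int) -> float:
    """Inverse-propensity weight: rarely examined slots count for more."""
    return 1.0 / examination_probability(position)


# A click at position 1 carries baseline weight; a click at position 5
# is weighted several times higher before being fed to the learner.
weights = {p: click_weight(p) for p in (1, 3, 5)}
```

In practice the examination probabilities would themselves be estimated from logged data rather than fixed by a formula, but the effect is the same: the model learns more from clicks the ranking did not hand the user on a plate.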

Though the paper doesn’t say whether the new system will be deployed permanently, Guillaume Chaslot, an ex-YouTube engineer who now runs the watchdog project AlgoTransparency, said he was “pretty confident” that it would happen relatively quickly: “They said that it increases the watch time by 0.24%. If you compute the amount, I think that’s maybe tens of millions of dollars.”

Several experts who reviewed the paper said the changes could have perverse effects. “In our research, we have found that YouTube’s algorithms created an isolated far-right community, pushed users toward videos of children, and promoted misinformation,” Jonas Kaiser, an affiliate at the Berkman Klein Center for Internet & Society, said. “On the fringes, this change might […] foster the formation of more isolated communities than we have already seen.” Jonathan Albright, the director of the digital forensics initiative at the Tow Center for Digital Journalism, said that while “reducing position bias is a good start to slow the low-quality content feedback loop,” in theory the change could also further favor extreme content.

Becca Lewis, a former researcher at Data & Society who studies online extremism, said that it was difficult to know how the changes would play out. “That’s true for YouTube internally as well,” she said. “There are so many different communities on YouTube, different ways that people use YouTube, different types of content, that the implications are going to be different in so many cases. We become test subjects for YouTube.”

When reached for comment, a YouTube spokesperson said its engineers and product teams had determined that the changes would not lead to filter bubbles. On the contrary, the company expects the changes to reduce them and diversify recommendations overall.

All three outside researchers MIT Technology Review contacted recommended that YouTube spend more time exploring the impact of algorithmic changes through methods such as interviews, surveys, and user input. YouTube has done this to some extent, the spokesperson said, working to remove extreme content such as hate speech from its platform.

“YouTube should spend more energy in understanding which actors their algorithms favor and amplify than how to keep users on the platform,” Kaiser said.

“The frustrating thing is it’s not in YouTube’s business interest to do that,” Lewis added. “But there is an ethical imperative.”

Corrections: The impact of YouTube’s change would likely be on the order of tens of millions, not billions, of dollars. The story was also updated on Sept. 27, 2019 at 3:30pm ET to reflect YouTube's response.

