Recommendation algorithms are some of the most powerful machine-learning systems today because of their ability to shape the information we consume. YouTube’s algorithm, especially, has an outsize influence. The platform is estimated to be second only to Google in web traffic, and 70% of what users watch is fed to them through recommendations.
In recent years, this influence has come under heavy scrutiny. Because the algorithm is optimized for getting people to engage with videos, it tends to offer choices that reinforce what someone already likes or believes, which can create an addictive experience that shuts out other views. This also often rewards the most extreme and controversial videos, which studies have shown can quickly push people into deep rabbit holes of content and lead to political radicalization.
While YouTube has publicly said that it’s working on addressing these problems, a new paper from Google, which owns YouTube, seems to tell a different story. It proposes an update to the platform’s algorithm that is meant to recommend even more targeted content to users in the interest of increasing engagement.
Here’s how YouTube’s recommendation system currently works. To populate the recommended-videos sidebar, it first compiles a shortlist of several hundred videos by finding ones that match the topic and other features of the one you are watching. Then it ranks the list according to the user’s preferences, which it learns by feeding all your clicks, likes, and other interactions into a machine-learning algorithm.
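The two-stage pipeline described above can be sketched in a few lines of Python. This is a toy illustration, not YouTube's actual system: the function names, the topic-matching heuristic, and the preference model are all invented for clarity, and the production ranker is a large machine-learning model trained on clicks, likes, and other interactions rather than a simple scoring function.

```python
# Toy sketch of a two-stage recommender: (1) candidate generation narrows
# the corpus to a shortlist of related videos, (2) a ranker orders the
# shortlist by predicted user preference. All names are illustrative.

def generate_candidates(current_video, corpus, limit=200):
    """Stage 1: shortlist videos sharing features (here, just the topic)
    with the one being watched, excluding the video itself."""
    related = [v for v in corpus
               if v["topic"] == current_video["topic"]
               and v["id"] != current_video["id"]]
    return related[:limit]

def rank_for_user(candidates, preference_score):
    """Stage 2: order the shortlist by a learned preference score.
    In production this is a model fed the user's interaction history."""
    return sorted(candidates, key=preference_score, reverse=True)

corpus = [
    {"id": "a", "topic": "cooking", "length": 10},
    {"id": "b", "topic": "cooking", "length": 3},
    {"id": "c", "topic": "music",   "length": 4},
    {"id": "d", "topic": "cooking", "length": 7},
]
watching = {"id": "a", "topic": "cooking"}

shortlist = generate_candidates(watching, corpus)
# Stand-in preference model: this hypothetical user favors shorter videos.
ranked = rank_for_user(shortlist, preference_score=lambda v: -v["length"])
```

The key design point is the separation of concerns: candidate generation keeps the problem tractable by cutting millions of videos down to a few hundred, so the expensive personalized ranking model only ever scores the shortlist.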
Among the proposed updates, the researchers specifically target a problem they identify as “implicit bias.” It refers to the way recommendations themselves can affect user behavior, making it hard to decipher whether you clicked on a video because you liked it or because it was highly recommended. The effect is that over time, the system can push users further and further away from the videos they actually want to watch.
To reduce this bias, the researchers suggest a tweak to the algorithm: each time a user clicks on a video, it also factors in the video’s rank in the recommendation sidebar. Videos that are near the top of the sidebar are given less weight when fed into the machine-learning algorithm; videos deep down in the ranking, which require a user to scroll, are given more. When the researchers tested the changes live on YouTube, they found significantly more user engagement.
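The weighting idea can be sketched as follows. Note this is a simplified stand-in: the fixed logarithmic weighting function below is invented for illustration, whereas the actual system learns how much each sidebar position inflates click probability rather than applying a hand-written formula.

```python
# Sketch: down-weighting clicks on highly ranked videos before they are
# fed into the training pipeline. The weighting function is illustrative;
# the real position-bias correction is learned, not a fixed formula.

import math

def position_weight(rank: int) -> float:
    """A click at the top of the sidebar (rank 1) gets the smallest
    weight; a click deep in the list, which required the user to
    scroll, counts for more."""
    return math.log2(rank + 1)  # rank 1 -> 1.0, rank 15 -> 4.0

def weighted_examples(click_log):
    """click_log: iterable of (user_id, video_id, rank_when_clicked).
    Yields training examples with a per-click importance weight."""
    for user_id, video_id, rank in click_log:
        yield (user_id, video_id, position_weight(rank))

log = [("u1", "top_video", 1), ("u1", "buried_video", 15)]
examples = list(weighted_examples(log))
# The click on the deeply buried video carries 4x the weight of the
# click on the top-ranked one, since it is a stronger signal of
# genuine interest rather than of the recommendation's placement.
```

The intuition is that a click on a top-ranked video is partly a product of the recommendation itself, so it should count less as evidence of what the user actually wants.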
Though the paper doesn’t say whether the new system will be deployed permanently, Guillaume Chaslot, an ex-YouTube engineer who now runs AlgoTransparency.org, said he was “pretty confident” that it would happen relatively quickly: “They said that it increases the watch time by 0.24%. If you compute the amount, I think that’s maybe tens of millions of dollars.”
Several experts who reviewed the paper said the changes could have perverse effects. “In our research, we have found that YouTube’s algorithms created an isolated far-right community, pushed users toward videos of children, and promoted misinformation,” Jonas Kaiser, an affiliate at the Berkman Klein Center for Internet & Society, said. “On the fringes, this change might […] foster the formation of more isolated communities than we have already seen.” Jonathan Albright, the director of the digital forensics initiative at the Tow Center for Digital Journalism, said that while “reducing position bias is a good start to slow the low-quality content feedback loop,” in theory the change could also further favor extreme content.
Becca Lewis, a former researcher at Data & Society who studies online extremism, said that it was difficult to know how the changes would play out. “That’s true for YouTube internally as well,” she said. “There are so many different communities on YouTube, different ways that people use YouTube, different types of content, that the implications are going to be different in so many cases. We become test subjects for YouTube.”
When reached for comment, a YouTube spokesperson said its engineers and product teams had determined that the changes would not lead to filter bubbles. On the contrary, the company expects the changes to reduce filter bubbles and diversify recommendations overall.
All three outside researchers MIT Technology Review contacted recommended that YouTube spend more time exploring the impact of algorithmic changes through methods such as interviews, surveys, and user input. YouTube has done this to some extent, the spokesperson said, working to remove extreme content in the form of hate speech on its platform.
“YouTube should spend more energy in understanding which actors their algorithms favor and amplify than how to keep users on the platform,” Kaiser said.
“The frustrating thing is it’s not in YouTube’s business interest to do that,” Lewis added. “But there is an ethical imperative.”
Corrections: The impact of YouTube’s change would likely be on the order of tens of millions, not billions, of dollars. The story was also updated on Sept. 27, 2019 at 3:30pm ET to reflect YouTube's response.