Tech policy

Mark Zuckerberg still won’t address the root cause of Facebook’s misinformation problem

The CEO was asked repeatedly at a congressional hearing whether Facebook’s recommendation algorithms fuel political polarization.

Zuckerberg testifies remotely. (Michael Reynolds-Pool/Getty Images)

In a congressional hearing about disinformation on Thursday, Representative Debbie Dingell, a Michigan Democrat, asked Facebook CEO Mark Zuckerberg to respond to a claim he once made about his own company: that the more likely content posted to Facebook is to violate the company’s community standards, the more engagement it will receive. 

Is this, she asked, still accurate?  

Dingell cited a recent investigative report by MIT Technology Review’s Karen Hao based on interviews with former and current members of the company’s AI team. The story dove into how AI models that drive Facebook’s recommendation algorithms allow misinformation and abuse to continue to thrive on the site. As Hao wrote, and Dingell paraphrased, “A former Facebook AI researcher who joined in 2018 says he and his team conducted ‘study after study’ confirming the same basic idea: models that maximize engagement increase polarization.” 

Hao also cited a New York University study of partisan publishers’ Facebook pages, which found that “those that regularly posted political misinformation received the most engagement in the lead-up to the 2020 US presidential election and the Capitol riots.” 

Zuckerberg, after saying that “a bunch of inaccurate things” about Facebook’s incentives for allowing and amplifying misinformation and polarizing content had been shared at the hearing by members of Congress, added: 

“People don’t want to see misinformation or divisive content on our services. People don’t want to see clickbait and things like that. While it may be true that people may be more likely to click on it in the short term, it’s not good for our business or our product or our community for it to be there.” 

His answer is a common Facebook talking point, and it skirts the fact that the company has not undertaken a centralized, coordinated effort to examine and reduce the ways its recommendation systems amplify misinformation. To learn more, read Hao’s reporting.

Zuckerberg’s comments came during the House Committee on Energy and Commerce hearing on disinformation, where members of Congress asked Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey about the spread of misinformation about the US election in November, the January 6 attack on the Capitol building, and covid vaccines, among other things. 

As has become common in these hearings, conservative legislators also questioned the CEOs about perceived anti-conservative bias on their platforms, a longtime right-wing claim that data doesn’t support.

Both US political parties have called for reforming Section 230 of the Communications Decency Act to hold Silicon Valley companies responsible for the content on their platforms and their decisions about moderating it. 

As part of his prepared remarks submitted ahead of the hearing, Zuckerberg proposed a change to the rule that, in its current state, grants platforms immunity from liability for user-posted content. Congress, Zuckerberg said, should “consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content.” 

In other words, companies would be held liable for content posted to their platforms if the firms failed to adhere to best practices for content moderation (which the government would, in theory, define). But if companies adopt these practices and harmful content still shows up on their sites, Zuckerberg doesn’t believe that platforms should then be liable. 

