Mark Zuckerberg still won’t address the root cause of Facebook’s misinformation problem

The CEO was asked repeatedly at a congressional hearing whether Facebook’s recommendation algorithms fuel political polarization.

Zuckerberg testifies remotely. (Michael Reynolds-Pool/Getty Images)

In a congressional hearing about disinformation on Thursday, Representative Debbie Dingell, a Michigan Democrat, asked Facebook CEO Mark Zuckerberg to respond to a claim he once made about his own company: that the more likely content posted to Facebook is to violate the company’s community standards, the more engagement it will receive. 

Is this, she asked, still accurate?  

Dingell cited a recent investigative report by MIT Technology Review’s Karen Hao based on interviews with former and current members of the company’s AI team. The story dove into how AI models that drive Facebook’s recommendation algorithms allow misinformation and abuse to continue to thrive on the site. As Hao wrote, and Dingell paraphrased, “A former Facebook AI researcher who joined in 2018 says he and his team conducted ‘study after study’ confirming the same basic idea: models that maximize engagement increase polarization.” 

Hao also cited a New York University study of partisan publishers’ Facebook pages, which found that “those that regularly posted political misinformation received the most engagement in the lead-up to the 2020 US presidential election and the Capitol riots.” 

After claiming that members of Congress had shared “a bunch of inaccurate things” at the hearing about Facebook’s incentives for allowing and amplifying misinformation and polarizing content, Zuckerberg added: 

“People don’t want to see misinformation or divisive content on our services. People don’t want to see clickbait and things like that. While it may be true that people may be more likely to click on it in the short term, it’s not good for our business or our product or our community for it to be there.” 

His answer is a common Facebook talking point, and it skirts the fact that the company has not undertaken a centralized, coordinated effort to examine and reduce the ways its recommendation systems amplify misinformation. To learn more, read Hao’s reporting.

Zuckerberg’s comments came during the House Committee on Energy and Commerce hearing on disinformation, where members of Congress asked Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey about the spread of misinformation about the US election in November, the January 6 attack on the Capitol building, and covid vaccines, among other things. 

As has become common in these hearings, conservative legislators also questioned the CEOs about perceived anti-conservative bias on their platforms, a longtime right-wing claim that data doesn’t support.

Both US political parties have called for reforming Section 230 of the Communications Decency Act to hold Silicon Valley companies responsible for the content on their platforms and their decisions about moderating it. 

As part of his prepared remarks submitted ahead of the hearing, Zuckerberg proposed a change to the rule that, in its current state, grants platforms immunity from liability for user-posted content. Congress, Zuckerberg said, should “consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content.” 

In other words, companies would be held liable for content posted to their platforms if the firms failed to adhere to best practices for content moderation (which the government would, in theory, define). But if companies adopt these practices and harmful content still shows up on their sites, Zuckerberg doesn’t believe that platforms should then be liable. 
