Mark Zuckerberg still won’t address the root cause of Facebook’s misinformation problem
The CEO was asked repeatedly at a congressional hearing whether Facebook’s recommendation algorithms fuel political polarization.
In a congressional hearing about disinformation on Thursday, Representative Debbie Dingell, a Michigan Democrat, asked Facebook CEO Mark Zuckerberg to respond to a claim he once made about his own company: that content posted to Facebook receives more engagement the more likely it is to violate the company’s community standards.
Is this, she asked, still accurate?
Dingell cited a recent investigative report by MIT Technology Review’s Karen Hao based on interviews with former and current members of the company’s AI team. The story dove into how AI models that drive Facebook’s recommendation algorithms allow misinformation and abuse to continue to thrive on the site. As Hao wrote, and Dingell paraphrased, “A former Facebook AI researcher who joined in 2018 says he and his team conducted ‘study after study’ confirming the same basic idea: models that maximize engagement increase polarization.”
As Hao wrote, a New York University study of partisan publishers’ Facebook pages found that “those that regularly posted political misinformation received the most engagement in the lead-up to the 2020 US presidential election and the Capitol riots.”
Zuckerberg, after saying that “a bunch of inaccurate things” about Facebook’s incentives for allowing and amplifying misinformation and polarizing content had been shared at the hearing by members of Congress, added:
“People don’t want to see misinformation or divisive content on our services. People don’t want to see clickbait and things like that. While it may be true that people may be more likely to click on it in the short term, it’s not good for our business or our product or our community for it to be there.”
His answer is a common Facebook talking point and skirts the fact that the company has not undertaken a centralized, coordinated effort to examine and reduce the way its recommendation systems amplify misinformation. To learn more, read Hao’s reporting.
Zuckerberg’s comments came during the House Committee on Energy and Commerce hearing on disinformation, where members of Congress asked Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey about the spread of misinformation about the US election in November, the January 6 attack on the Capitol building, and covid vaccines, among other things.
As has become common in these hearings, conservative legislators also questioned the CEOs about perceived anti-conservative bias on their platforms, a longtime right-wing claim that data doesn’t support.
Both US political parties have called for reforming Section 230 of the Communications Decency Act to hold Silicon Valley companies responsible for the content on their platforms and their decisions about moderating it.
As part of his prepared remarks submitted ahead of the hearing, Zuckerberg proposed a change to the rule that, in its current state, grants platforms immunity from liability for user-posted content. Congress, Zuckerberg said, should “consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content.”
In other words, companies would be held liable for content posted to their platforms if the firms failed to adhere to best practices for content moderation (which the government would, in theory, define). But if companies adopt these practices and harmful content still shows up on their sites, Zuckerberg doesn’t believe that platforms should then be liable.