Offensive Content Still Plagues Facebook

New reports of failure to remove sexualized images of children raise questions about whether enough is being done to keep troubling content off the platform.

Facebook is coming under renewed pressure to redouble its efforts to remove offensive content.

A new investigation by the BBC reveals that the social network failed to take down sexualized content relating to children when its presence was reported. The news organization alerted Facebook to 100 pieces of content, such as sexualized images of children and pages said to be “explicitly for men with a sexual interest in children,” using the report button that sits alongside content. Only 18 were deemed offensive and taken down upon initial reporting.

Facebook says that it has since “removed all items that were illegal or against our standards” and reported some to the police. But the news has raised concerns among politicians about whether the social network is doing enough to respond to inappropriate material.

They might have a point. The Wall Street Journal today explains that this time last year Facebook was rushing to prepare its new Live video streaming feature. But, the newspaper reports, the pace left employees with little time to plan how to deal with inappropriate content—a problem the company still wrestles with today. Both pieces of news suggest that Facebook may not be doing all it can to protect users from offensive material.

It’s not a new problem for Facebook. In the past it has come under heavy criticism for playing host to the kinds of content that can be used to radicalize young people and influence them to join terrorist organizations.

Mark Zuckerberg has said in the past that he hopes AI will help ease the problem in the future. But, as with its fake-news problem, there are plenty of issues standing in the way of implementing such technology, including the challenge of training a machine to accurately spot problematic content, as well as the difficulties surrounding freedom of speech and censorship when judgments become subjective. For now, humans remain part of the vetting process, though it's unclear how many people are tasked with reviewing what must be a large volume of reported material.

Zuckerberg has recently envisioned a world where his powerful social network could be used to make the world a better place—to break down barriers, connect communities, and build one big, happy global Facebook family. Part of that vision was a vow to make the social network as safe and welcoming as possible. Those efforts, it seems, can't kick in soon enough.

(Read more: BBC, Guardian, Wall Street Journal, “Mark Zuckerberg Has Laid Out His Vision of a World United by Facebook,” “Facebook Will Try to Outsource a Fix for Its Fake-News Problem,” “Fighting ISIS Online”)
