MIT Technology Review

Offensive Content Still Plagues Facebook

New reports of failure to remove sexualized images of children raise questions about whether enough is being done to keep troubling content from servers.

Facebook is coming under renewed pressure to redouble its efforts to remove offensive content.

A new investigation by the BBC reveals that the social network failed to take down sexualized content relating to children when its presence was reported. The news organization alerted Facebook to 100 pieces of content, such as sexualized images of children and pages said to be “explicitly for men with a sexual interest in children,” using the report button that sits alongside content. Only 18 were deemed offensive and taken down upon initial reporting.


Facebook says that it has since “removed all items that were illegal or against our standards” and reported some to the police. But the news has raised concerns among politicians about whether the social network is doing enough to respond to inappropriate material.


They might have a point. The Wall Street Journal today explains that this time last year Facebook was rushing to prepare its new Live video streaming feature. But, the newspaper reports, the pace left employees with little time to plan how to deal with inappropriate content—a problem the company still wrestles with today. Both pieces of news suggest that Facebook may not be doing all it can to protect users from offensive material.

It’s not a new problem for Facebook. In the past it’s come under heavy criticism for playing host to the kinds of content that can be used to radicalize young people and influence them to join terrorist organizations. 

Mark Zuckerberg has said in the past that he hopes AI will help ease the problem in the future. But, as with the site’s fake-news problem, plenty of issues stand in the way of implementing such technology: training a machine to accurately spot problematic content is hard, and subjective cases raise difficult questions about freedom of speech and censorship. For now, humans remain part of the vetting process, though it’s unclear how many people are tasked with reviewing what must be a large volume of content.

Zuckerberg has recently envisioned a world where his powerful social network could be used to make the world a better place—to break down barriers, connect communities, and build one big, happy global Facebook family. Part of that vision was a vow to make the social network as safe and welcoming as possible. Those efforts, it seems, can’t kick in soon enough.

(Read more: BBC, Guardian, Wall Street Journal, “Mark Zuckerberg Has Laid Out His Vision of a World United by Facebook,” “Facebook Will Try to Outsource a Fix for Its Fake-News Problem,” “Fighting ISIS Online”)
