Facebook’s Telltale Heart

A vivid image of a beating heart tested Facebook’s system for handling complaints from the public.
September 3, 2015

An MIT Technology Review story with an unforgettable GIF of a beating heart gave us a firsthand look at how Facebook polices images—and how that system will need to improve if the social network is going to be a reliable partner for news organizations.

Not long after we published the story Tuesday, Facebook blocked people younger than 18 from seeing a post about it on our page on the social network. The post remained visible for adults, but it was emblazoned with this notice: “WARNING: Graphic Photo. Photos that contain graphic content can shock, offend and upset people. Are you sure you want to see this?”

Is it graphic? Well, even a static version of the image does meet Merriam-Webster’s definition of graphic as “vividly or plainly shown.” There’s no getting around the fact that this is a heart pulsing away outside a body. It’s in a box developed by a startup company whose technology might significantly expand the availability of organs that can be used in life-saving transplants.

Could it shock, offend, or upset people? Surely the answer is yes. Just about anything that is interesting could upset someone. Indeed, amid the thousands of “likes” and dozens of comments about the substance of the story and the potential importance of the technology, one person took issue with the image: “OMG! What a terrible selection!!!  … Not all readers are OK with blood and parts of the body full exposed on their FB timeline.”

I am not trying to scold anyone who finds the picture gross or upsetting. I also know I’m far from the first person to point out that Facebook, in an effort to maintain a chipper atmosphere, appears to err on the side of censoring images related to the human body. It took years for Facebook to get comfortable with images of mothers breastfeeding. And finally, I recognize that policing images, especially exploitative ones, is vital work.

The issue, though, is whether Facebook should really host more news stories, which, if they are any good, will often be shocking and upsetting. If news organizations are going to have a fruitful relationship with Facebook, the social network will need an image-review system that is not too quick to deem something beyond the pale.

A single complaint from anyone about the content of a post triggers a review. In the case of the disembodied heart, Facebook put a canned message on our page that said “someone reported your photo for containing graphic violence.”

Facebook says all such reviews are made by people, not image-detecting computers—people who, in the aggregate, check out millions of posts every week. In this instance, someone determined that the heart was unacceptable, even in the context of the biomedical news story it accompanied.

After I queried the company for details about its image-policing process, Facebook spokesman Will Nevius said the reviewer made the wrong call about the bloody heart. The warning label came down.

Nonetheless, the fact that a human heart could even prompt a judgment call is a reminder of how awkward a partner Facebook can be for news organizations. If publishers, desperate for the audience Facebook offers, post more of their stories directly and perhaps exclusively to the site, will they deliver only a sanitized subset, or risk having a single reader complaint shield an article from readers under 18? In explaining Facebook’s review process, Nevius said in a statement: “We aim to find the right balance between giving people a place to express themselves and promoting a welcoming and safe environment for our diverse, global community.” That’s an admirable spirit for a social network, but if it requires being on hair-trigger alert for potentially upsetting images, maybe Facebook’s heart can’t ever truly be in the news business.
