
Facebook’s Emotional Manipulation Study Is Just the Latest Effort to Prod Users

With its emotion-manipulation experiment, Facebook pushed beyond data-driven studies of voting, sharing, and organ-donation prompts to trying to make people feel good or bad.

Facebook’s controversial study exploring whether it could manipulate people’s moods by tweaking their news feeds to favor negative or positive content produced a particularly negative emotional response, but it is far from the social network’s first effort to control user behavior.

With huge amounts of data flooding in from more than a billion users, the company is in a unique position to study their every move and to run experiments that measure how behavior changes under different conditions (see “What Facebook Knows”). This helps Facebook persuade users to spend more time on the site. But over the past three years it has also probed everything from voting behavior to the effect of prompts encouraging people to register as organ donors.

The company has a data science team dedicated to running experiments, both to advance its business aims and to do social-science research on the side, often with collaborators in academia. Other academics perform research on Facebook without collaborating with the company, either by simply observing users or by creating apps that ask them to take part in a project.

The recent study, done in January 2012 but published only recently, hit a nerve partly because it had a negative effect on some users, but also because the affected users were not asked for permission to participate (agreeing to Facebook’s terms and conditions was taken as consent).

“What’s different about this study is that participants did not explicitly consent to being part of an experimental manipulation for the study, and that the results were published,” says Lorrie Cranor, a computer scientist at Carnegie Mellon University, where she directs the CyLab Usable Privacy and Security Laboratory.

Facebook ran an experiment on 689,003 users to see if it could manipulate their emotions by varying the selection of posts in their news feeds. One group had stories containing positive words filtered out; another had stories containing negative words filtered out. On average, people subjected to these changes tended to write posts that echoed the emotional tone of the feeds they saw, though the effect was small.

But small effects can add up. Past Facebook studies have shown that relatively minor restructuring of its pages and prompts can have significant social effects. Perhaps most dramatically, a 2012 study showed that voting reminders Facebook posted on Election Day in 2010 prompted 340,000 more people to vote than otherwise would have (see “How Facebook Drove Voters to the Polls”).

And in 2012, Facebook showed it might have the power to get people to register as organ donors. The company put a clickable box on Timeline pages letting people indicate that they were registered donors; the campaign was associated with a huge boost in donor enrollments. (In that case, though, extensive media coverage complicated the analysis of whether Facebook’s effort directly caused the increase.)

In some ways, Facebook’s published research is just part of a vast ongoing effort at Web-based manipulation. “What’s far more concerning is the lack of transparency about Facebook’s practices overall,” says Zeynep Tufekci, an assistant professor at the University of North Carolina, Chapel Hill, and a former fellow at the Center for Information Technology Policy at Princeton University. “I’m concerned about these practices—testing and manipulating the user experience every day. What else does Facebook do every day? We have no idea.”

Mining personal data is a billion-dollar business (see “The Data Made Me Do It”) designed to elicit purchases, garner eyeballs, and shape behavior. “Advertising and the media work to manipulate our emotions all the time, so I don’t find this study to be particularly problematic,” Cranor says. “We are all laboratory rats without being aware of it.”

The real issue, Cranor and others say, may be that when academic institutions are involved (researchers at Cornell University and the University of California, San Francisco, participated in the emotion study), their institutional review boards should take a closer look. Currently such review is mainly required when federal funds are involved.

Federal policy for protecting human subjects in federally funded research, called the Common Rule, requires that subjects give their “informed consent,” and that a statement of the procedures includes a “description of any reasonably foreseeable risks or discomforts to the subject.” Facebook’s data use policy is far vaguer, saying that the company might use your data for “internal operations, including troubleshooting, data analysis, testing, research, and service improvement.”

This lack of consent is concerning to Antonio Damasio, a neuroscientist at the University of Southern California who has made key findings in the understanding of the brain processes underlying emotion. “I agree that emotion manipulation is quite common, not only on the Web but in daily life,” he says. “That is what advertising in general and marketing in particular are about, but that does not authorize researchers to conduct experiments without proper consent. I can find no excuse for this behavior and no way of condoning it.”

On Monday Facebook said it had nothing to add beyond the apology its researcher, Adam Kramer, posted on the matter.
