
Refriending Facebook

Outrage over Facebook’s “emotional contagion” experiment shows a general misunderstanding of what Facebook is and how it works.

The Facebook feed is a bit like a sausage. Everyone eats it, even though nobody knows how it is made.

The gap between our use of Facebook and our understanding of how it works, however, is a problem. By now most people are aware of the outrage triggered by a paper in the Proceedings of the National Academy of Sciences that presented evidence of “emotional contagion” derived from an experiment conducted on Facebook.

The cries of outrage, though, express a general misunderstanding of what Facebook is and how it works. And, as others have pointed out, the outrage is really part of a broader negative sentiment toward this social media platform.

So before bringing down Goliath, let’s pause and understand what Facebook is, how the study was conducted, and how it fits in the context of standard business practices.

First, Facebook is a “micro-broadcasting” platform, meaning that it is not a private diary or a messaging service. This is not an official definition, but one that emerges from Facebook’s design: everything you post on Facebook has the potential to go viral.

This distinction is important since the study has raised many complaints about privacy, and many people appear to expect the privacy of Facebook to be equivalent to that of e-mails and phone calls. A Facebook post, however, is not as public as a tweet or as private as a phone call. It is something in between. On Facebook we share content with a group that can include tens of people, or thousands. Regardless of how many Facebook friends we have, these friends are empowered to push our posts to a wider audience than we originally intended.

Second, the idea that the experiment violated privacy is also at odds with the experimental design. After all, the experiment was based on what is known technically as a sorting operation. Yet a sorting operation cannot violate privacy. To violate privacy, content needs to be revealed to an unintended audience. Sorting and prioritizing the content presented to a user’s intended audience (her existing Facebook friends) cannot reveal content to that user’s unintended audience. Imagine a mailman who puts letters in your mailbox sorted by size, or by the last name of the sender. This ordering might affect the order in which you open the letters, and even your emotional response. For instance, opening a large bill before opening your letter from grandma might ruin your mood, but the sorting operation conducted by the mailman does not reveal the content of the letters to anyone, and hence does not violate your privacy. So if there are privacy violations, and there might be, they are not coming from the experiment’s sorting operation.
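To make the point concrete, here is a minimal sketch in Python, with made-up posts and field names, of why reordering is not disclosure: the same reader receives the same set of items either way; only their order changes.

```python
# Minimal sketch (hypothetical posts and fields): reordering a feed
# changes the presentation, not the content or the audience.

posts = [
    {"author": "grandma", "text": "Miss you!", "size": 1},
    {"author": "utility_co", "text": "Your bill is overdue.", "size": 5},
]

chronological = list(posts)
sorted_by_size = sorted(posts, key=lambda p: p["size"])

# The viewer sees exactly the same items in both orderings;
# nothing is revealed to anyone who could not already see it.
assert {p["text"] for p in chronological} == {p["text"] for p in sorted_by_size}
```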

Finally, it is important to remember that Facebook did not generate the content that affected the mood of users. You and I generated this content. So if we are willing to point the gun at Facebook for sorting the content created by us, we should also point the gun at ourselves, for creating that content.

This brings us to the way that Facebook filters content, or how the Facebook sausage is made. Many users seem to believe that Facebook simply shows them all of the content that their contacts generate. For a long time, this has not been the case. The algorithm performing this sorting is called Edgerank, and it is Facebook’s sausage recipe. Edgerank decides which content appears in the news feed of each user. It is an automated editor, if you will.

Edgerank learns which posts you like by associating “features” in posts with the probability that you would like them, click on them, or comment on them. So for instance, if you often like posts that include videos, Edgerank can prioritize posts containing videos.
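In spirit, that scoring step looks something like the sketch below. The feature names, weights, and logistic link are invented for illustration; the real Edgerank formula is proprietary and far more elaborate.

```python
import math

# Hypothetical feature weights learned from past likes and clicks.
FEATURE_WEIGHTS = {"has_video": 1.2, "has_photo": 0.4, "from_close_friend": 0.9}

def engagement_probability(features):
    """Map a post's features to an estimated probability of a like or click."""
    score = sum(FEATURE_WEIGHTS.get(f, 0.0) for f in features)
    return 1 / (1 + math.exp(-score))  # logistic link: score -> probability

def rank_feed(posts):
    """Order posts by predicted engagement, highest first."""
    return sorted(posts, key=lambda p: engagement_probability(p["features"]), reverse=True)

feed = rank_feed([
    {"id": 1, "features": ["has_video"]},
    {"id": 2, "features": ["has_photo", "from_close_friend"]},
])
```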

Edgerank exists, among other things, for user interface considerations. There is a reason that most websites, like Netflix or Amazon, have interfaces centered on algorithms that choose default content for us. We are lazy, and websites that want traffic have learned that defaults customized on behavioral data (such as likes and clicks) work better than questions. Surveys, registration forms, and manual controls are an effective way of bouncing people off your website.

So tinkering with Edgerank is important for Facebook, just like predicting which movies you will watch is important for Netflix. Yet, since there are many possible features that can be extracted from a Facebook post, Facebook engineers need to teach Edgerank which features to look for, and they need to discover which features matter. So, like any business, Facebook faces the engineering challenge of tinkering with its product recipe to maximize user engagement, which in Facebook’s case includes the time you spend on the site, how much you interact with others, and how frequently you visit, among other things.
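Measuring whether a tweak works is an engineering routine of its own. Below is a simplified sketch of how any site might test a ranking change on a small slice of users and compare an engagement metric; the experiment name, the two-percent share, and the click metric are assumptions for illustration, not details of Facebook’s systems.

```python
import hashlib

def assign_variant(user_id, experiment="ranking_tweak_v2", treatment_share=0.02):
    """Deterministically bucket a small share of users into the treatment group."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 10_000 < treatment_share * 10_000 else "control"

def mean(values):
    return sum(values) / len(values) if values else 0.0

def compare_engagement(logs):
    """logs: records like {'user_id': 'u1', 'clicks': 3}; returns mean clicks per group."""
    groups = {"treatment": [], "control": []}
    for row in logs:
        groups[assign_variant(row["user_id"])].append(row["clicks"])
    return {group: mean(clicks) for group, clicks in groups.items()}
```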

So what does Edgerank have to do with the ethics of the study? First, I assume we can agree that there is nothing unethical about the fact that the 2014 Edgerank is not the same as the 2011 Edgerank. Just like a used car salesman gets to decide whether to park minivans or Corvettes at the front of the lot, Facebook gets to decide how Edgerank works.

The experiment simply involved tinkering with Edgerank based on a “new” feature: the emotional content of words. By doing this, researchers found statistically significant evidence of emotional contagion, meaning that seeing more “happy” or “sad” posts was accompanied by a tiny increase in similar posts from the affected users.
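A hypothetical reconstruction of that kind of tweak might look like the following. The word sets and the omission rule here are placeholders (the actual study classified posts using the LIWC word lists), so treat this as an illustration of the mechanism rather than the researchers’ code.

```python
import random

# Illustrative word sets only; the study relied on the LIWC lexicon.
POSITIVE_WORDS = {"happy", "great", "love"}
NEGATIVE_WORDS = {"sad", "awful", "angry"}

def contains_words(text, vocabulary):
    return any(word in vocabulary for word in text.lower().split())

def filter_feed(posts, condition, omit_probability=0.5, rng=None):
    """Withhold some emotionally loaded posts from one user's ranked feed."""
    rng = rng or random.Random(0)
    target = POSITIVE_WORDS if condition == "reduce_positive" else NEGATIVE_WORDS
    return [
        post for post in posts
        if not (contains_words(post["text"], target) and rng.random() < omit_probability)
    ]
```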

Is that an ethical dilemma?

Well, if changes in Edgerank across time are not unethical, are changes in Edgerank applied to a subset of the population unethical? The scope of a change (global versus local) cannot by itself alter its ethics if the change is acceptable at the global level. So any ethical problem needs to lie elsewhere.

The next issue is the content of the change. Is using sentiment analysis as a feature unethical? Probably not. Most of us filter the content we present to others based on emotional considerations. In fact, we do not just filter content. We often modify it based on emotional reasons. For instance, is it unethical to soften an unhappy or aggressive comment from a colleague when sharing it with others? Is that rewording operation unethical? Or does the failure of ethics emerge when an algorithm—instead of, say, a professional editor—performs the filtering?

If there is an unethical use of emotions surrounding this experiment, it is the fear-based framing the media used to sell the story. Certainly, the media framed the story around ethics and fear for a reason: they know from their own click data that fear sells. Indeed, it is ironic that the media’s gut reaction to a study about emotional contagion was to flood news sources with negative emotions.

Cesar A. Hidalgo is the ABC Professor of Career Development at the MIT Media Lab.
