Facebook’s Rules for Experimenting on You
Questionable research ethics have gotten Facebook in trouble in recent years, but the company says it now has a system in place to prevent that from happening again.
Before they can use information about you in science projects, Facebook’s data scientists must run their research proposals through a formal review process designed to ensure the research is ethical and sound with respect to privacy and security. Some details of the process, which the company says has been in place for two years, were revealed in a new paper published in the Washington and Lee Law Review.
The research practices of companies that collect and analyze enormous amounts of data about their users—Facebook being the most prominent—represent a growing area of concern to policymakers. Last month the White House called on users of “big data” to create better systems for tackling challenges related to data ethics, security, and privacy.
Two years ago Facebook drew heavy criticism after publishing a research paper describing an experiment in which company scientists manipulated the news feeds of users to test whether they could influence their emotions. It has faced a backlash from academics critical of the methods used in other studies as well.
The review process described in the new paper is in large part a “response to feedback we’ve received,” Molly Jackman, Facebook’s public policy research manager, said Tuesday at a meeting about big data research ethics at the Future of Privacy Forum in Washington, D.C.
The crux of it is a standing committee of five employees, including experts in law, ethics, communications, and policy. This is inspired in part by the federal model for institutional review boards, which vet academic research proposals and identify ethical concerns.
Much of the research that Facebook does is the type that is generally exempt from review by an academic institutional review board, said Jackman. Other projects, involving more sensitive subjects or populations, warrant further scrutiny. The suicide prevention program the company launched Tuesday is an example of a project that falls in that category, she said. Senior managers decide whether proposals should be expedited or sent to the committee for full review.
Jackman said the process is vital because the company’s vast data about human behavior puts it in a position to “push on the boundaries of science.”
But while Facebook’s data may contain insights that could help society, company-led research inevitably raises tricky questions. For instance, how will the members of the review board separate the company’s interests from the interests of the research subjects or the greater public? “You need someone on that review board who can’t get fired,” Rey Junco, a professor of education and human computer interaction at Iowa State University, said at the Future of Privacy Forum meeting.
Another controversial issue is consent. Whereas subjects taking part in federally funded research must provide “informed consent” before they can be included, Facebook users consent to research when they sign up for the service. Junco said researchers should place a greater emphasis on consent, so that users have the opportunity to “self-determine what they are going to be engaging in.”