How Facebook Learns About Your Offline Life

If you have a Facebook account, then you know the deal: you get to connect with your friends, family, loved ones, and those people from high school you never talk to, all for free. In return, Facebook collects information about you—your profile information, articles or pages that you “like,” videos you watch, and so on—and uses that to sell ads.
But it’s not that simple. As an ongoing investigation by ProPublica has shown, Facebook is going beyond the tacit agreement that it provides a free service in exchange for online personal information. It has contracts with several data brokers that provide Facebook with information about your offline life—things like how much money you make, where you like to eat out, and how many credit cards you keep.
It is using that data to flesh out its advertising profile of you, and it isn’t telling you about it.
Facebook’s advertising operation is a remarkable machine. Sure, the social network has an extraordinarily large user base, but what really turns advertisers on is that it lets marketers narrowly define the subset of users who will see an ad based on all sorts of parameters, including users’ shared interests, political leanings, age, and mobile devices.
That kind of microtargeting is incredibly valuable—and what better way to augment it than to buy up offline data sets that can then be matched to Facebook’s users? Far better than simply knowing that someone clicked “like” on the Food Network’s page, for example, is knowing how much money they make each year, or whether they shop at low- or high-end retail stores.
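To make the matching idea concrete, here is a minimal, purely hypothetical sketch of how an offline data set could be joined to on-platform profiles and combined into an ad audience. None of the field names, thresholds, or the hashed-email join below come from the article or describe Facebook's actual systems; they are illustrative assumptions only.

```python
import hashlib

# Hypothetical illustration: field names, values, and the hashed-email join
# are assumptions for this sketch, not a description of any real platform.

def hash_email(email: str) -> str:
    """Normalize and hash an email so records can be joined without raw text."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Offline records supplied by a (hypothetical) data broker.
broker_records = [
    {"email": "alice@example.com", "est_income": 85_000, "credit_cards": 4},
    {"email": "bob@example.com", "est_income": 32_000, "credit_cards": 1},
]

# On-platform profiles, keyed by the same hashed identifier.
platform_users = {
    hash_email("alice@example.com"): {"likes": ["Food Network"], "age": 34},
    hash_email("bob@example.com"): {"likes": ["Gardening"], "age": 51},
}

# Join the two data sets and select an audience using both online and
# offline attributes -- e.g., Food Network fans with estimated income > $50,000.
audience = []
for record in broker_records:
    profile = platform_users.get(hash_email(record["email"]))
    if profile and "Food Network" in profile["likes"] and record["est_income"] > 50_000:
        audience.append(profile)

print(f"Matched audience size: {len(audience)}")  # -> 1
```

The point of the sketch is only that a "like" on its own says little, while a joined offline attribute like estimated income turns it into a far more specific targeting filter.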
The thing is, Facebook has made a show of being transparent about how it collects users’ information and what categories of interest it assigns to them. Anyone who wants to can look this information up on Facebook’s site.
As part of its investigation, ProPublica built a tool to help users do just that—and encouraged them to share what they found. Since September, the publication has gathered more than 52,000 categories of interest this way, ranging from, as it puts it, “Pretending to Text in Awkward Situations” to “Breastfeeding in Public.”
But when ProPublica went into Facebook’s advertising platform to see what parameters ad buyers could use to target an ad, it found close to 600 categories that were described as “provided by a third party.” Most of those had to do with users’ financial attributes, and none of them showed up in the crowdsourced list that users sent in. Turns out, Facebook’s transparency has its limits.
(Read more: ProPublica, “Facebook at a Crossroads,” “What Facebook Knows”)