In confidential documents seen by The Intercept, Facebook touts its ability to “improve” marketing outcomes with what it calls “loyalty prediction.”
Newspeak: The AI software that powers this capability, called “FBLearner Flow,” was first announced in 2016, though it was presented as a technology to improve the user experience, not as a marketing tool.
How it works: The data it uses is anonymized, but includes users’ “location, device information, Wi-Fi network details, video usage, affinities, and details of friendships, including how similar a user is to their friends.”
Facebook’s defense: A Facebook spokesperson had this to say about the story: “Facebook, just like many other ad platforms, uses machine learning to show the right ad to the right person. We don’t claim to know what people think or feel, nor do we share an individual’s personal information with advertisers.”
Happy Friday the 13th: This is just the latest in a seemingly unending parade of ethical dilemmas in Facebook’s 14 years of existence. Of course, this one follows on the heels of CEO Mark Zuckerberg’s two days of testimony on Capitol Hill in connection with a separate scandal. Another data privacy drama will certainly fuel calls to regulate the social-media giant.
Why Meta’s latest large language model survived only three days online
Galactica was supposed to help scientists. Instead, it mindlessly spat out biased and incorrect nonsense.
A bot that watched 70,000 hours of Minecraft could unlock AI’s next big thing
Online videos are a vast and untapped source of training data—and OpenAI says it has a new way to use it.
Google’s new AI can hear a snippet of song—and then keep on playing
The technique, called AudioLM, generates naturalistic sounds without the need for human annotation.
Responsible AI has a burnout problem
Companies say they want ethical AI. But those working in the field say that ambition comes at their expense.