In confidential documents seen by The Intercept, Facebook touts its ability to “improve” marketing outcomes with what it calls “loyalty prediction.”
Newspeak: The AI software that powers this capability, called “FBLearner Flow,” was first announced in 2016, though it was presented as a way to improve the user experience, not as a marketing tool.
How it works: The data it uses is anonymized, but includes users’ “location, device information, Wi-Fi network details, video usage, affinities, and details of friendships, including how similar a user is to their friends.”
Facebook’s defense: A Facebook spokesperson said of the story: “Facebook, just like many other ad platforms, uses machine learning to show the right ad to the right person. We don’t claim to know what people think or feel, nor do we share an individual’s personal information with advertisers.”
Happy Friday the 13th: This is just the latest in a seemingly unending parade of ethical dilemmas in Facebook’s 14 years of existence. Of course, this one follows on the heels of CEO Mark Zuckerberg’s two days of testimony on Capitol Hill in connection with a separate scandal. Another data privacy drama will certainly fuel calls to regulate the social-media giant.
DeepMind’s cofounder: Generative AI is just a phase. What’s next is interactive AI.
“This is a profound moment in the history of technology,” says Mustafa Suleyman.
AI hype is built on high test scores. Those tests are flawed.
With hopes and fears about the technology running wild, it's time to agree on what it can and can't do.
You need to talk to your kid about AI. Here are 6 things you should say.
As children start back at school this week, it’s not just ChatGPT you need to be thinking about.
AI language models are rife with different political biases
New research explains why you’ll get more right- or left-wing answers, depending on which AI model you ask.