Teens’ Coded Language is Latest Challenge for Facebook’s Ad Algorithms
Facebook makes money by showing its members ads targeted based on what they reveal about themselves while using the site. But research from Pew Internet published this week shows that many teenage users of the site deliberately hide what they’re really talking about behind coded language and images. It’s a practice teens use to take control of their online privacy, but also one that could make pitching relevant ads at the group more difficult.
Pew found that some 58 percent of teens intentionally use inside jokes or obscure references to conceal what they’re talking about, with older teens doing it more than younger teens. Microsoft researcher Danah Boyd has studied this activity for years – she calls it social steganography and says it’s becoming more common – and wrote a response to Pew’s new research in which she explains the practice:
“Over the last few years, I’ve watched as teens have given up on controlling access to content. It’s too hard, too frustrating, and technology simply can’t fix the power issues. Instead, what they’ve been doing is focusing on controlling access to meaning. A comment might look like it means one thing, when in fact it means something quite different. By cloaking their accessible content, teens reclaim power over those who they know who are surveilling them. This practice is still only really emerging en masse, so I was delighted that Pew could put numbers to it. I should note that, as Instagram grows, I’m seeing more and more of this. A picture of a donut may not be about a donut.”
One consequence is that a picture of a donut may not mean a teen is likely to respond to an ad for donuts. Social steganography is apparently mostly aimed at duping parents and other adults, not Facebook. But the pervasiveness of the practice may create an incentive for the company to set its ad targeters to work writing algorithms that can decode teens’ linguistic and photographic encryption.