
Teens’ Coded Language is Latest Challenge for Facebook’s Ad Algorithms

Most teenagers deliberately hide what they are really talking about on Facebook, a practice that could make it harder to pitch ads at them.

Facebook makes money by showing its members ads targeted to what they reveal about themselves while using the site. But research from Pew Internet published this week shows that many teenage users of the site deliberately obscure what they're really talking about with coded language and images. It's a practice teens use to take control of their online privacy, but also one that could make pitching relevant ads at the group more difficult.

Is this donut a coded message? (Credit: Ken Hawkins)

Pew found that some 58 percent of teens intentionally use inside jokes or obscure references to conceal what they're talking about, with older teens doing it more than younger teens. Microsoft researcher Danah Boyd has studied this activity for years – she calls it social steganography and says it's becoming more common – and wrote a response to Pew's new research in which she explains the practice:

“Over the last few years, I’ve watched as teens have given up on controlling access to content. It’s too hard, too frustrating, and technology simply can’t fix the power issues. Instead, what they’ve been doing is focusing on controlling access to meaning. A comment might look like it means one thing, when in fact it means something quite different. By cloaking their accessible content, teens reclaim power over those who they know who are surveilling them. This practice is still only really emerging en masse, so I was delighted that Pew could put numbers to it. I should note that, as Instagram grows, I’m seeing more and more of this. A picture of a donut may not be about a donut.”

One consequence is that a picture of a donut may not mean a teen is likely to respond to an ad for donuts. Social steganography is apparently mostly aimed at duping parents and other adults, not Facebook. But the pervasiveness of the practice may create an incentive for the company to set its ad targeters to work writing algorithms that can decode teens’ linguistic and photographic encryption.
