
This Dopey, Decade-Old Tech Privacy Video Actually Got A Lot Of Things Right

“Ordering Pizza In 2015” didn’t anticipate geotracking or mobile apps, but a privacy expert says its predictions weren’t far off base.
February 8, 2013

Sometime in the mid-2000s, the ACLU produced a video PSA about technology’s threat to personal privacy called “Ordering a pizza in 2015.” It bubbled back up on social media this week, and I was curious to see how its predictions held up (2015 isn’t that far off, after all). 

At first glance, the video seems laughable. A man calls his favorite pizza place to place an order, and he’s soon roped into a 1984-esque dystopia (with Windows 95-esque graphics) in which everything from his voting and employment history to his health records and library activity are “wired in” to a sinister uber-database, which the pizza place uses to bully him into ordering food he doesn’t want (because it’s better for his health) for an inflated price (delivery costs $20 extra because the customer lives in a high-crime area, according to the pizza place’s records). Uh, I used GrubHub last week and nothing remotely like this happened to me. Ha ha, ACLU #fail! …Right?

Not necessarily, says Lorrie Faith Cranor, a tech privacy expert at Carnegie Mellon University’s CyLab Usable Privacy and Security Laboratory. “It is actually not all that farfetched,” she told me via email. “I’ve shown this video many times in my classes and it is always a good way to start a discussion.”

What the video gets right

“There are companies that do gather most of the information that the pizza shop in the video has. I think it is less likely that information about library books would be available in such a profile, as libraries usually try pretty hard to protect information about what people read. But information about what magazines you subscribe to, travel plans, and clothing sizes is the sort of information that companies are collecting.”

What it gets less right

“Companies don’t necessarily want you to know they have all this information about you, because people tend to find it creepy. So I’m not sure a pizza shop would really let on that they know all this. 

The video doesn’t anticipate location tracking or information that can be collected through mobile apps. The pizza shop does not know whether the customer is calling from home or work because he calls from his cell phone. It is not too much of a stretch to believe that companies may be able to pinpoint precisely where you are calling from on your cell phone in the future.”

Privacy as perception: a user-experience design problem

What I find most interesting about Cranor’s comments is the interface-design aspect of privacy. If the pizza place hadn’t mentioned everything it was doing behind the scenes to aggregate and interpret the customer’s personal data, and had simply offered opt-in recommendations, it might not have seemed so dystopian to the customer. In fact, this approach is what Google, Facebook, and other “all in the cloud” personal-data integrators are all about: don’t show how the data-mining sausage is made; just offer useful functionality.

A decade after the ACLU made this ham-handed video, a lot of what it depicts has come to pass, and we don’t much mind, because privacy is perception. This works both ways. Remember when Path got lambasted last year for uploading users’ iOS contacts into its database without telling them? Sounds pretty creepy, and the internet freaked out about it… even though apps had been doing this for years already, often for sound technical reasons. Instagram’s Terms-of-Service flap also stemmed from a perception problem. The new TOS sounded more creepy and privacy-violating than it actually was, so Instagram reverted to older legalese that was less emotionally “triggering” but more potentially privacy-violating.

Tech privacy in this decade is a lot weirder than the ACLU could’ve predicted in the last one. Do our privacy tolerances depend more on the subtleties of design and communication than on the brute-force capabilities of the technology itself?

