
Not OK, Glass

New software lets you mark places as off-limits for wearable camera gadgets like Google Glass.
January 28, 2014

With last year’s launch of the Narrative Clip and Autographer, and Google Glass poised for release this year, technologies that can continuously capture our daily lives with photos and videos are inching closer to the mainstream. These gadgets can generate detailed visual diaries, drive self-improvement, and help those with memory problems. But do you really want to record in the bathroom or a sensitive work meeting?

Assuming that many people don’t, computer scientists at Indiana University have developed software that uses computer vision techniques to automatically identify potentially confidential or embarrassing pictures taken with these devices and prevent them from being shared. A prototype of the software, called PlaceAvoider, will be presented at the Network and Distributed System Security Symposium in San Diego in February.

“There simply isn’t the time to manually curate the thousands of images these devices can generate per day, and in a socially networked world that might lead to the inadvertent sharing of photos you don’t want to share,” says Apu Kapadia, who co-leads the team that developed the system. “Or those who are worried about that might just not share their life-log streams, so we’re trying to help people exploit these applications to the full by providing them with a way to share safely.”

Kapadia’s group began by acknowledging that devising algorithms to identify sensitive pictures solely from their visual content is probably impossible, since the things people do and don’t want to share vary widely and may be difficult to recognize. Instead, they designed software that users train by taking pictures of the rooms they want to blacklist. PlaceAvoider then flags new pictures taken in those rooms so the user can review them before they are shared.

The system uses an existing computer-vision algorithm called scale-invariant feature transform (SIFT) to pinpoint regions of high contrast around corners and edges within the training images that are likely to stay visually constant even in varying light conditions and from different perspectives. For each of these regions, it produces a “numerical fingerprint” consisting of 128 separate numbers describing local properties such as texture and gradient orientation, along with the region’s position relative to other regions of the image. Since images are sometimes blurry, PlaceAvoider also looks at more general properties, such as the colors and textures of walls and carpets, and takes into account the sequence in which shots are taken.
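The blacklist idea is concrete enough to sketch in code. The snippet below is a minimal illustration of the approach described above, not the researchers’ implementation: it uses OpenCV’s off-the-shelf SIFT (the same 128-number descriptors mentioned here) to fingerprint a few photos of a room a user wants to blacklist, then flags a new image whose features match them. The file names, ratio test, and match threshold are assumptions chosen for illustration.

import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def descriptors(path):
    # One 128-number SIFT descriptor per high-contrast region of the image.
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(gray, None)
    return desc

# "Training": fingerprint a few photos of a room the user wants to blacklist.
# File names here are placeholders.
blacklist = [descriptors(p) for p in ["bathroom_1.jpg", "bathroom_2.jpg"]]

def looks_blacklisted(path, ratio=0.7, min_matches=25):
    # Flag an image if enough of its features match a blacklisted room.
    desc = descriptors(path)
    if desc is None:
        return False  # blurry or featureless frame; the real system falls back to coarser cues
    for room_desc in blacklist:
        pairs = matcher.knnMatch(desc, room_desc, k=2)
        # Lowe's ratio test keeps only distinctive matches.
        good = [m for m, n in (p for p in pairs if len(p) == 2)
                if m.distance < ratio * n.distance]
        if len(good) >= min_matches:
            return True
    return False

print(looks_blacklisted("new_lifelog_frame.jpg"))

In practice, matching raw descriptors frame by frame like this is expensive and brittle for blurry shots, which is why PlaceAvoider also leans on coarser scene properties and the sequence of images rather than individual frames alone.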

In tests, the system accurately determined whether images from streams captured in the homes and workplaces of the researchers were from blacklisted rooms an average of 89.8 percent of the time.

PlaceAvoider is currently a research prototype: its components have been written but not yet combined into a finished product, and the researchers used a smartphone worn around the neck to take photos rather than a purpose-built life-logging device. If developed for such a device, the interface could flag potentially sensitive images as they are taken, or quarantine them for the user to review later.

The system’s image analysis techniques could have applications beyond privacy protection, too, such as smartly building photo collections with the best images from important events like birthdays or trips. “Identifying photos we don’t want to share is one dimension,” says David Crandall, the other research team co-leader. “But more broadly, algorithms could be used to automatically organize these huge collections of images to make them safer, more browseable, searchable, and useful.”

Jonathan Zittrain, a law professor at Harvard Law School and cofounder of the school’s Berkman Center for Internet and Society, says PlaceAvoider is a “promising approach” that could help avert some of the harmful by-products of life-streaming. Still, he adds, “It’s not just the person operating a recording device who will need help. There need to be ways for people in common environments—students in a class or workers at a meeting—to set default expectations about what levels of privacy they can expect.”

