With last year’s launch of the Narrative Clip and Autographer, and Google Glass poised for release this year, technologies that can continuously capture our daily lives with photos and videos are inching closer to the mainstream. These gadgets can generate detailed visual diaries, drive self-improvement, and help those with memory problems. But do you really want to record in the bathroom or a sensitive work meeting?
Assuming that many people don’t, computer scientists at Indiana University have developed software that uses computer vision techniques to automatically identify potentially confidential or embarrassing pictures taken with these devices and prevent them from being shared. A prototype of the software, called PlaceAvoider, will be presented at the Network and Distributed System Security Symposium in San Diego in February.
“There simply isn’t the time to manually curate the thousands of images these devices can generate per day, and in a socially networked world that might lead to the inadvertent sharing of photos you don’t want to share,” says Apu Kapadia, who co-leads the team that developed the system. “Or those who are worried about that might just not share their life-log streams, so we’re trying to help people exploit these applications to the full by providing them with a way to share safely.”
Kapadia’s group began by acknowledging that devising algorithms that can identify sensitive pictures solely on the basis of visual content is probably impossible, since the things that people do and don’t want to share vary widely and may be difficult to recognize. Instead, they designed software that users train by taking pictures of the rooms they want to blacklist. PlaceAvoider then flags new pictures taken in those rooms so that the user can review them.
The system uses an existing computer-vision algorithm called scale-invariant feature transform (SIFT) to pinpoint regions of high contrast around corners and edges within the training images that are likely to stay visually constant under varying light conditions and from different perspectives. For each of these regions, it produces a “numerical fingerprint” consisting of 128 numbers describing the local image structure, along with the region’s position relative to other regions in the image. Since images are sometimes blurry, PlaceAvoider also looks at more general properties, such as the colors and textures of walls and carpets, and takes into account the sequence in which shots are taken.
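The matching idea can be illustrated with a minimal sketch. This is not the researchers’ actual code: the fingerprints, the distance threshold, and the match count below are all hypothetical stand-ins for whatever PlaceAvoider really uses, but they show how an image could be flagged when enough of its 128-number fingerprints lie close to those recorded for a blacklisted room.

```python
import math

def distance(a, b):
    """Euclidean distance between two fingerprint vectors
    (e.g., 128-number SIFT-style descriptors)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_blacklisted(new_fingerprints, room_fingerprints,
                   match_radius=0.5, min_matches=3):
    """Flag an image when at least `min_matches` of its fingerprints
    have a near neighbour (within `match_radius`) among the
    fingerprints captured from a blacklisted room.
    `match_radius` and `min_matches` are illustrative values only."""
    matches = sum(
        1 for f in new_fingerprints
        if any(distance(f, r) <= match_radius for r in room_fingerprints)
    )
    return matches >= min_matches
```

In practice, a system like this would also fold in the coarser cues the article mentions (overall color and texture, and the order in which shots were taken) before deciding to quarantine an image.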
In tests, the system accurately determined whether images from streams captured in the homes and workplaces of the researchers were from blacklisted rooms an average of 89.8 percent of the time.
PlaceAvoider is currently a research prototype: its various components have been written but haven’t been combined into a complete product, and the researchers used a smartphone worn around the neck to take photos rather than an existing life-logging device. If PlaceAvoider were developed to run on such a device, its interface could flag potentially sensitive images as they are taken, or place them in quarantine to be reviewed later.
The system’s image analysis techniques could have applications beyond privacy protection, too, such as smartly building photo collections with the best images from important events like birthdays or trips. “Identifying photos we don’t want to share is one dimension,” says David Crandall, the other research team co-leader. “But more broadly, algorithms could be used to automatically organize these huge collections of images to make them safer, more browseable, searchable, and useful.”
Jonathan Zittrain, a law professor at Harvard Law School and cofounder of the school’s Berkman Center for Internet and Society, says PlaceAvoider is a “promising approach” that could help avert some of the harmful by-products of life-streaming. Still, he adds, “It’s not just the person operating a recording device who will need help. There need to be ways for people in common environments—students in a class or workers at a meeting—to set default expectations about what levels of privacy they can expect.”