A camera system developed by computer scientists at the University of California, Berkeley, obscures with an oval the faces of people who appear in surveillance video. These so-called respectful cameras, still in the research phase, could be used for day-to-day surveillance applications, and the privacy oval could be removed from a given set of footage in the event of an investigation.
“Cameras are here to stay, and there’s no avoiding it,” says UC Berkeley computer scientist Ken Goldberg. “Let’s figure out new technology to make them less invasive.” According to a 2006 report prepared by the New York Civil Liberties Union, the number of publicly and privately owned video cameras in Lower Manhattan increased by a factor of five between 1998 and 2005, and several thousand cameras are in place in Greenwich Village and Soho alone. The United Kingdom, however, holds the record for video surveillance. In a report filed on Tuesday, the information commissioner there estimates that there are four million video-surveillance cameras in the United Kingdom, or one for every 14 people. Goldberg thinks of the respectful cameras as a compromise between advocates for privacy and those concerned about security.
In its current state of development, the camera can only obscure the faces of people who are wearing a marker, in the form of a yellow hat or a green vest. The system was developed by the National Science Foundation-funded Team for Research in Ubiquitous Secure Technologies, and it currently runs in real time on Panasonic’s robotic security cameras, processing 640-by-480-pixel video at 10 frames per second. The researchers use a statistical classification approach called adaptive boosting to train the system to identify the marker in environments with a high degree of visual noise, and they combine this classifier with a tracker that takes into account the subject’s velocity, along with other interframe information. At a construction site where the researchers tested their camera with the vest, the system correctly identified the marker 93 percent of the time. Under more-uniform lighting conditions in their lab environment, they report 96 percent success at identifying the hat, even when two marked individuals cross paths.
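To give a sense of how adaptive boosting works, the toy sketch below trains an AdaBoost ensemble of one-channel decision stumps to label pixels as marker-colored or not. Everything here — the color distributions, the pixel-level features, the training parameters — is an illustrative assumption, not the Berkeley team’s implementation, and the interframe tracker the article describes is omitted entirely.

```python
import random
import math

random.seed(0)

# Synthetic labeled pixels: +1 = marker-yellow (high R, high G, low B),
# -1 = background. These color ranges are hypothetical.
def make_pixel(marker):
    if marker:
        return [random.gauss(220, 15), random.gauss(210, 15), random.gauss(40, 15)]
    return [random.uniform(0, 255) for _ in range(3)]

data = [(make_pixel(True), 1) for _ in range(200)] + \
       [(make_pixel(False), -1) for _ in range(200)]

def stump(channel, thresh, polarity):
    # Weak learner: a simple threshold test on one color channel.
    return lambda px: polarity * (1 if px[channel] > thresh else -1)

def train_adaboost(data, rounds=20):
    n = len(data)
    w = [1.0 / n] * n            # start with uniform sample weights
    ensemble = []                # list of (alpha, weak_learner) pairs
    for _ in range(rounds):
        # Exhaustively pick the stump with the lowest weighted error.
        best = None
        for ch in range(3):
            for t in range(0, 256, 8):
                for pol in (1, -1):
                    h = stump(ch, t, pol)
                    err = sum(wi for wi, (x, y) in zip(w, data) if h(x) != y)
                    if best is None or err < best[0]:
                        best = (err, h)
        err, h = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)   # stump's vote weight
        ensemble.append((alpha, h))
        # Reweight: misclassified samples gain weight for the next round.
        w = [wi * math.exp(-alpha * y * h(x)) for wi, (x, y) in zip(w, data)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def classify(ensemble, px):
    # Final prediction is the sign of the weighted vote of all stumps.
    return 1 if sum(a * h(px) for a, h in ensemble) > 0 else -1

model = train_adaboost(data)
print(classify(model, [225, 205, 35]), classify(model, [120, 120, 120]))
```

The appeal of boosting in a noisy scene is that each weak learner only has to do slightly better than chance; the reweighting step forces later stumps to concentrate on the pixels earlier ones got wrong.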
The marker requirement is a trade-off, Goldberg admits, but he says that face-detection algorithms are simply not up to the task of real-time operation in complex environments. “The idea is called structuring the environment,” he says. “If you’re willing to meet the system halfway and say, ‘I’ll help the computer,’ then that’s useful.” In areas with heavy surveillance, markers could be made available, just outside the camera’s view, to those who wish to maintain their privacy. In the future, Goldberg says, it may be possible to use a less conspicuous marker, like a button, particularly with systems of multiple cameras, which would be less susceptible to visual obstructions.
The cameras have impressed civil-liberties-minded lawyers. Kevin Bankston, a staff attorney with the Electronic Frontier Foundation, in San Francisco, says, “Any technological measures that can be taken to mitigate the privacy invasion and avoid the chilling of legitimate conduct in public or private spaces that are being recorded is a good thing.” The markers are a limitation, he says, but “that’s not an argument against this type of research. In fact, it’s an argument for this type of research.”
Bankston says that laws governing video surveillance in public spaces around the world offer little protection to those concerned about privacy. In a few cases, embarrassing or lewd footage recorded by security cameras has been posted on the Internet. Bankston contends that the overwhelming issue is the unease generated by knowing that someone out there may be watching you.
But even if privacy-shielding camera systems were put into use, there would be great debate about how hard it should be for governments to see fully unobscured video footage. Christopher Slobogin, a law professor at the University of Florida who has written on public camera surveillance, says, “I don’t think the government should have to demonstrate probable cause in order to find out the identity of some person.” Suspicious behavior, he argues, should be sufficient. He cites Terry v. Ohio, a well-known U.S. Supreme Court case holding that law-enforcement officers may stop, detain, and frisk a person on reasonable suspicion, without a warrant or probable cause.
Goldberg says that there may someday be “legislation where you can put up security cameras, but you have to use the p-chip, some privacy chip that encrypts the face. My hunch is that people will say that’s a step in the right direction.”