The number of images recorded by security cameras each day vastly exceeds human analysts’ ability to examine them. “Computer vision” systems aren’t much help: they’re still far too primitive to tell a prowler from a postman. But researchers say the human brain can subconsciously register an anomaly in a scene – say, a shadow where there shouldn’t be one – much faster than a person can visually and verbally identify it. If computers could somehow monitor the brain and flag these “aha” moments, surveillance analysts might be able to scan many times more images per hour.
That’s what Paul Sajda, a bioengineer at Columbia University’s Laboratory for Intelligent Imaging and Neural Computing, hopes to enable with his “cortically coupled computer vision” system, or “C3Vision.” Sajda’s prototype, built with a grant from the U.S. Defense Advanced Research Projects Agency, fits a bonnet of electrodes over a subject’s head to monitor changes in the brain’s electrical activity. As the subject watches a video running at 10 times its normal speed, a computer scrutinizes those changes for the neural signatures of interesting events and images; the images it flags are then set aside for more intensive examination.
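The core idea can be illustrated in miniature. The sketch below is not Sajda’s algorithm; it is a hypothetical simplification assuming a single EEG channel, epochs time-locked to each image onset, and a P300-style deflection (a positive voltage bump roughly 250–500 ms after a notable stimulus) as the “aha” signature. All function names, window bounds, and thresholds are illustrative assumptions.

```python
import numpy as np

def flag_images(epochs, fs=250, p300_window=(0.25, 0.5), threshold=2.0):
    """Flag image epochs whose post-stimulus EEG shows a P300-like deflection.

    epochs : array of shape (n_images, n_samples), single-channel EEG,
             each row time-locked to one image's onset.
    fs     : sampling rate in Hz.
    Returns the indices of flagged images.
    """
    start = int(p300_window[0] * fs)
    stop = int(p300_window[1] * fs)
    # Baseline-correct using the pre-window samples, then take the
    # mean amplitude inside the presumed P300 window as a score.
    baseline = epochs[:, :start].mean(axis=1, keepdims=True)
    scores = (epochs[:, start:stop] - baseline).mean(axis=1)
    # Standardize scores across the image series and keep the outliers.
    z = (scores - scores.mean()) / scores.std()
    return np.flatnonzero(z > threshold)
```

A real system would use many electrodes and a trained classifier rather than a fixed window and threshold, but the pipeline shape is the same: epoch the signal around each image, score each epoch, and pass only the high-scoring images to a human for a closer look.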
“We are aiming to speed up [visual] search by 300 percent,” says Sajda. “The system is designed not only for finding very specific targets but also things image analysts think are ‘unusual,’ which is very difficult to do with a computer vision system.”
Such devices could help law enforcement or counterterrorism officials spot signs of suspicious activity that would otherwise slip by as they scanned surveillance images. “Any system that can help process those images and prioritize them as to likelihood of containing important data is a vast improvement over the current situation,” says Leif Finkel, a professor of bioengineering at the University of Pennsylvania, who was Sajda’s doctoral thesis advisor.
Outside the security realm, radiologists hooked up to the C3Vision system could quickly screen hundreds of mammograms to identify those requiring a closer look, and photo researchers could use it to single out pictures of a particular person among the millions of photographs on the Web. “People are amazingly accurate at identifying whether a particular image – say, of Marilyn Monroe, or the Washington Monument – was presented as one photo” in a series of hundreds, even at a speed of 10 to 20 images per second, says Finkel.
In Sajda’s recent tests, subjects spotted 90 percent of the “suspicious” images in a series running at 10 images per second.
“I think that it is too early to tell whether this particular approach is going to work in real applications,” says Misha Pavel, a professor of biomedical engineering at Oregon Health & Science University. “But I have no doubt that we will learn from this approach, and the consequences may be entirely unexpected, novel applications.”