Despite all the power of computers, they are still lousy at certain simple tasks, such as recognizing faces and knowing the difference between a table and a cow. Now researchers at Microsoft are trying to tap into some of the specialized, and often subconscious, computing power in the human brain, and use it to solve problems that have so far been intractable for machines.
Desney Tan, a researcher at Microsoft Research, and Pradeep Shenoy, a graduate student at the University of Washington, have devised a scheme that uses electroencephalography (EEG) caps to collect the brain activity of people looking at pictures of faces and nonfaces, such as horses, cars, and landscapes. The pair found that even when the subjects' objective wasn't to distinguish the faces from the nonfaces, their brain activity indicated that they subconsciously identified the difference. The researchers wrote software that churns through the EEG data and classifies images as faces or nonfaces based on the subjects' responses. When a single person viewed an image once, the system was able to identify faces with up to 72.5 percent accuracy. Results were even better using data from eight people who had each viewed a particular image twice: accuracy jumped to 98 percent.
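The jump from 72.5 percent to 98 percent reflects a basic property of noisy signals: averaging many independent responses to the same image suppresses the noise while preserving the underlying signal. The researchers' actual EEG classifier is not described in detail here, so the following is only a toy simulation (with made-up signal and noise values) illustrating why pooling repeated viewings raises accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trials(is_face, n_trials, noise=2.0):
    # Hypothetical model: face images evoke a slightly stronger brain response
    # (signal 1.0 vs 0.0), but each single trial buries it in heavy noise.
    signal = 1.0 if is_face else 0.0
    return signal + rng.normal(0.0, noise, size=n_trials)

def classify(response, threshold=0.5):
    # Trivial single-feature classifier: label the image a face if the
    # (possibly averaged) response exceeds the midpoint threshold.
    return response > threshold

def accuracy(n_repeats, n_images=2000):
    correct = 0
    for _ in range(n_images):
        is_face = rng.random() < 0.5
        trials = simulate_trials(is_face, n_repeats)
        # Averaging n repeats shrinks the noise by a factor of sqrt(n).
        correct += classify(trials.mean()) == is_face
    return correct / n_images

single = accuracy(1)    # one person, one viewing
pooled = accuracy(16)   # e.g., eight people, two viewings each
print(f"single-trial accuracy: {single:.2f}")
print(f"pooled accuracy:       {pooled:.2f}")
```

In this simulation the single-trial classifier hovers around 60 percent, while pooling sixteen responses pushes it well above 80 percent, the same qualitative effect the study reports, though the exact numbers here depend entirely on the assumed noise level.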
“Given that the brain is constantly processing external information,” says Tan, “we can start to use the brain as a processor.” In one scenario, he explains, pictures would be placed in people’s peripheral vision, which doesn’t require focused cognitive attention, so they could go about their daily tasks.
Today it takes relatively large supercomputers many hours to recognize faces, something a human can do almost instantly. One application for this technique could be quickly sorting surveillance-video frames into those with faces and those without, although Tan says this early work is mainly a proof of concept.
In addition to finding faces, Tan says, there is evidence that the strategy could be useful for identifying other types of objects, such as dogs or cats, and different types of words. Subconscious brain power could therefore improve automated image search by preclassifying objects to help a computer more accurately identify pictures.
Using human brain power to supplement the abilities of computers is not a new idea, but in most such systems the information is consciously provided by a person. For instance, Google's Image Labeler game lets people rack up points for identifying specific objects in pictures; the information is used to train machines to better classify pictures. But subconscious computing is a nascent field. "There are a bunch of ethical considerations before any of this can be taken to the mass market," Tan says. For example, how distracting would it be to have pictures flash in a person's peripheral vision?
“I think it’s a pretty cool idea that has a lot of potential,” says Luis von Ahn, a professor of computer science at Carnegie Mellon University, in Pittsburgh. However, he admits that quite a few people might have problems with the notion of their subconscious responses being recorded. “It’s kind of freaky,” he says.