Despite all the power of computers, they are still lousy at certain simple tasks, such as recognizing faces and knowing the difference between a table and a cow. Now researchers at Microsoft are trying to tap into some of the specialized, and often subconscious, computing power in the human brain, and use it to solve problems that have so far been intractable for machines.
Desney Tan, a researcher at Microsoft Research, and Pradeep Shenoy, a graduate student at the University of Washington, have devised a scheme that uses electroencephalography (EEG) caps to record the brain activity of people looking at pictures of faces and of nonfaces such as horses, cars, and landscapes. The pair found that even when the subjects weren't asked to distinguish the faces from the nonfaces, their brain activity showed that they subconsciously registered the difference. The researchers wrote software that churns through the EEG data and classifies each image as a face or a nonface based on the subject's response. When a single person viewed an image once, the system identified faces with up to 72.5 percent accuracy. Results were even better when the researchers combined data from eight people who had each viewed a particular image twice: accuracy jumped to 98 percent.
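The jump in accuracy from pooling repeated viewings can be illustrated with a toy sketch. Everything below is hypothetical: the synthetic "EEG" features, the plain logistic-regression classifier, and the feature dimensions are all invented for illustration, not taken from the researchers' actual signal processing. The sketch trains on simulated single-trial responses and shows how averaging the classifier's outputs across several independent viewings of the same image sharpens the face/nonface decision.

```python
# Hypothetical sketch: single-trial EEG classification vs. pooling
# predictions over repeated viewings. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def simulate_epochs(n, n_features=32, face=True):
    """Synthetic stand-in for preprocessed per-image EEG features.
    Face images add a weak shift on a few channels (a faint evoked
    response buried in noise)."""
    x = rng.normal(0.0, 1.0, (n, n_features))
    if face:
        x[:, :4] += 0.8  # weak, consistent face-evoked signal
    return x

def train_logreg(X, y, lr=0.1, epochs=200):
    """Plain gradient-descent logistic regression (no external libs)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                      # gradient of log loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict_proba(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# Train on one subject's single viewings (200 faces, 200 nonfaces).
X = np.vstack([simulate_epochs(200, face=True),
               simulate_epochs(200, face=False)])
y = np.concatenate([np.ones(200), np.zeros(200)])
w, b = train_logreg(X, y)

# Accuracy on single viewings of 50 new face images...
single = (predict_proba(simulate_epochs(50, face=True), w, b) > 0.5).mean()

# ...versus averaging the classifier's probabilities over 8
# independent viewings of the same 50 images.
views = np.stack([predict_proba(simulate_epochs(50, face=True), w, b)
                  for _ in range(8)])
pooled = (views.mean(axis=0) > 0.5).mean()
```

Averaging works because the noise in each viewing is independent while the weak face-evoked signal is consistent, so the pooled probability estimate is far more reliable than any single trial, mirroring the article's single-viewing versus eight-person, two-viewing numbers in spirit.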
“Given that the brain is constantly processing external information,” says Tan, “we can start to use the brain as a processor.” In one scenario, he explains, pictures would be placed in people’s peripheral vision, which doesn’t require focused cognitive attention, so they could go about their daily tasks.
Today it takes relatively large supercomputers many hours to recognize faces, something a human can do almost instantly. One application of this technique could be quickly sorting frames from surveillance video into those with faces and those without, although Tan says this early work is mainly a proof of concept.
In addition to finding faces, Tan says, there is evidence that the strategy could be useful for identifying other types of objects, such as dogs or cats, and different types of words. Subconscious brain power could therefore improve automated image search by preclassifying objects to help a computer more accurately identify pictures.
Using human brain power to supplement the abilities of computers is not a new idea, but in most such systems the person supplies the information consciously. For instance, Google’s Image Labeler game lets people rack up points for identifying specific objects in pictures; the labels are used to train machines to classify pictures more accurately. Subconscious computing, by contrast, is a nascent field. “There are a bunch of ethical considerations before any of this can be taken to the mass market,” Tan says. For example, how distracting would it be to have pictures flash in a person’s peripheral vision?
“I think it’s a pretty cool idea that has a lot of potential,” says Luis von Ahn, a professor of computer science at Carnegie Mellon University, in Pittsburgh. However, he admits that quite a few people might have problems with the notion of their subconscious responses being recorded. “It’s kind of freaky,” he says.