
Human-Aided Computing

Microsoft researchers are trying to harness untapped brain power.
June 22, 2007

For all their power, computers are still lousy at certain simple tasks, such as recognizing faces and knowing the difference between a table and a cow. Now researchers at Microsoft are trying to tap into some of the specialized, and often subconscious, computing power of the human brain and use it to solve problems that have so far been intractable for machines.

Brain drain: The top picture is an artist’s rendering of subconscious computing, which uses EEG to access the processing power of the human brain for tasks, such as face recognition, that are difficult for machines. The bottom picture shows the placement of EEG connections on the head.

Desney Tan, a researcher at Microsoft Research, and Pradeep Shenoy, a graduate student at the University of Washington, have devised a scheme that uses electroencephalograph (EEG) caps to record the brain activity of people looking at pictures of faces and nonfaces, such as horses, cars, and landscapes. The pair found that even when the subjects’ objective wasn’t to distinguish faces from nonfaces, their brain activity indicated that they subconsciously registered the difference. The researchers wrote software that churns through the EEG data and classifies each image as a face or a nonface based on the subjects’ responses. When a single person viewed an image once, the system identified faces with up to 72.5 percent accuracy. Results were even better using data from eight people who had each viewed a particular image twice: accuracy jumped to 98 percent.
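The article doesn’t describe the researchers’ actual features or classifier, but the general shape of the pipeline — record an EEG epoch per image, extract a feature, learn a decision rule from labeled epochs — can be illustrated with a toy sketch. Everything below is simulated and hypothetical: the channel indices, the response window, the added “face” deflection, and the simple threshold rule are stand-ins, not the method used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 64-channel EEG epochs, 100 time samples each.
# We assume face images evoke a stronger deflection on a subset of
# channels during a fixed time window (loosely inspired by known
# face-sensitive EEG responses; the numbers here are invented).
n_channels, n_samples = 64, 100
face_channels = slice(50, 60)   # assumed face-sensitive channels
resp_window = slice(40, 60)     # assumed response time window

def simulate_epoch(is_face):
    """Generate one noisy EEG epoch; face trials get an added deflection."""
    epoch = rng.normal(0.0, 1.0, (n_channels, n_samples))
    if is_face:
        epoch[face_channels, resp_window] += 1.5
    return epoch

def feature(epoch):
    """Mean amplitude in the assumed channels/window (one scalar per epoch)."""
    return epoch[face_channels, resp_window].mean()

# "Train": estimate a decision threshold from labeled epochs.
train = [(simulate_epoch(lbl), lbl) for lbl in [True, False] * 100]
face_mean = np.mean([feature(e) for e, l in train if l])
nonface_mean = np.mean([feature(e) for e, l in train if not l])
threshold = (face_mean + nonface_mean) / 2.0

# "Test": classify fresh epochs against the threshold.
test = [(simulate_epoch(lbl), lbl) for lbl in [True, False] * 100]
correct = sum((feature(e) > threshold) == l for e, l in test)
print(f"single-epoch accuracy: {correct / len(test):.2f}")
```

The study’s jump from roughly 72.5 percent to 98 percent accuracy mirrors a standard trick in this kind of analysis: averaging epochs from repeated viewings (or multiple viewers) of the same image suppresses noise and makes the subtle response easier to detect.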

“Given that the brain is constantly processing external information,” says Tan, “we can start to use the brain as a processor.” In one scenario, he explains, pictures would be placed in people’s peripheral vision, which doesn’t require focused cognitive attention, so they could go about their daily tasks.

Today it takes relatively large supercomputers many hours to recognize faces, something a human can do almost instantly. One application of this face-recognition technique could be quickly sorting snapshots from surveillance videos into frames with faces and frames without, although Tan says this early work is mainly a proof of concept.

In addition to finding faces, Tan says, there is evidence that the strategy could be useful for identifying other types of objects, such as dogs or cats, and different types of words. Subconscious brain power could therefore improve automated image search by preclassifying objects to help a computer more accurately identify pictures.

It’s not a new idea to use human brain power to supplement the abilities of computers, but in most such systems a person provides the information consciously. For instance, Google’s Image Labeler game lets people rack up points for identifying specific objects in pictures; the information is used to train machines to better classify pictures. But subconscious computing is a nascent field. “There are a bunch of ethical considerations before any of this can be taken to the mass market,” Tan says. For example, how distracting would it be to have pictures flash in a person’s peripheral vision?

“I think it’s a pretty cool idea that has a lot of potential,” says Luis von Ahn, a professor of computer science at Carnegie Mellon University, in Pittsburgh. However, he admits that quite a few people might have problems with the notion of their subconscious responses being recorded. “It’s kind of freaky,” he says.
