
Imaging Deception in the Brain

Can brain imaging truly detect lies?
February 7, 2007

Polygraph tests are notoriously unreliable, yet thousands of employers, attorneys, and law-enforcement officials use them routinely. Could an alternative system using functional magnetic resonance imaging (fMRI), a technology that indirectly measures brain activity, better detect deceit? The U.S. government is certainly interested–it’s funding research in the area–and two companies have already sprung up to commercialize this use of fMRI. But a recent scientific symposium concluded that little evidence exists to suggest that fMRI can accurately detect lies under real-life circumstances. Scientists who attended the symposium worried that this new generation of lie detectors will follow the path of the polygraph–a widely used technology with little scientific support and broad potential to do harm.

Scientists say that fMRI-based technology designed to detect deception is not yet ready for commercialization.

“As we move forward, we don’t want to make the same mistakes as with the polygraph,” said Marcus Raichle, a neuroscientist at Washington University Medical School, in St. Louis, and a speaker on the panel, which was sponsored by the American Academy of Arts and Sciences, an independent policy research center in Cambridge, MA. He emphasized that, like the physiological changes monitored during polygraphs, the brain-activity patterns measured during fMRI are not specific to deception, making it challenging to identify a brain pattern that definitively identifies a lie.

“The great danger is that something like fMRI is adopted as a means of lie detection and becomes the standard before it has been scientifically evaluated for this purpose,” says Raichle in an e-mail written after the symposium. “The federal government does [approximately] 40,000 polygraphs a year, and I have heard speculation that as much as 10 times that amount may be being used in the private sector. If these numbers are anything like the real circumstance, then to have fMRI take over such an agenda prematurely would be very bad indeed.”

The potential to detect lies by peering into the brain has been widely covered by the media in the past year or two, conjuring images of mind-reading chambers adjacent to metal detectors at airport security checkpoints. One company, California-based No Lie MRI, already has its product on the market. It advertises to employers, lawyers, the government, and individuals, claiming a 90 percent accuracy rate in identifying deception. But neuroscientists at the symposium criticized commercialization as premature. “I think there is very little basis for using those machines for [lie] detection, at least for now,” says Emilio Bizzi, an MIT neuroscientist and president of the American Academy of Arts and Sciences.

But the intense interest in developing an alternative to the polygraph means that the technology is likely here to stay. Unbiased studies are needed to determine if and when fMRI could reliably detect deceit, scientists on the panel said. “Put this in the backdrop of the tens or hundreds of thousands of polygraph sessions being conducted in government and in the corporate world,” says John Gabrieli, a cognitive neuroscientist at MIT.

Polygraph tests rely on measures of stress, such as heart rate and blood pressure, which can shoot up when one is telling a lie. But the stress of simply being accused of a crime can trigger the same physiological changes, making it difficult for examiners to interpret the results. FMRI-based lie-detection systems seek a more direct measure of deceit: the level of activity in brain areas linked with lying. Previous studies have shown that the brain appears more active when someone is telling a falsehood, especially in areas involved in conflict resolution and cognitive control. Scientists think that lying is more cognitively complex than telling the truth, and therefore it activates more of the brain.

A few scientists say they have devised algorithms to identify deceit-specific patterns in individuals. In one study published in 2005, for example, subjects were asked to commit a mock crime, stealing either a watch or a ring, and were then instructed to answer a series of questions, giving false answers to those about the crime but answering truthfully when asked about other things. Using such an algorithm, scientists correctly identified lies 90 percent of the time.
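To make concrete what such an algorithm might look like, here is a minimal, purely illustrative sketch in Python: it trains a simple classifier on simulated per-trial activation features and reports cross-validated accuracy. The region names, the simulated data, and the choice of logistic regression are assumptions for illustration only, not the method used in the 2005 study.

```python
# Illustrative sketch only: a toy "lie vs. truth" classifier.
# The simulated features and the logistic-regression model are assumptions,
# not the algorithm used in the study described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 200  # hypothetical: half deceptive, half truthful

# Per-trial mean activation in three hypothetical regions of interest
# (e.g., anterior cingulate, dorsolateral prefrontal cortex, parietal cortex).
truth = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(n_trials // 2, 3))
lies = rng.normal(loc=[0.8, 0.6, 0.3], scale=1.0, size=(n_trials // 2, 3))

X = np.vstack([truth, lies])
y = np.concatenate([np.zeros(n_trials // 2), np.ones(n_trials // 2)])

# Cross-validated accuracy of a simple linear classifier.
clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```

In real studies the features would come from preprocessed fMRI time series rather than simulated numbers, and accuracy figures like the 90 percent cited above apply only to the controlled, instructed-lying setting that the panelists went on to criticize.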

But that’s just not good enough, said Nancy Kanwisher, a neuroscientist at MIT who also spoke on the panel. She said that these studies don’t recreate the real-world situation well enough to truly uncover lies. “Making a false response when ordered to do so is not a lie,” said Kanwisher. “The stakes in the real world are much higher. Someone accused of a crime, guilty or not, will feel very anxious, and that will affect the data.”

Emotion also affects the results of lie-detection tests, according to Elizabeth Phelps, a neuroscientist at New York University who spoke at the symposium. Previous research has shown that brain-activity patterns change when a person is asked to, say, read emotionally charged words rather than neutral ones. “The neural circuitry used for lie detection is significantly modified by emotion,” Phelps said.

Those developing fMRI for lie detection say that the criticisms are too harsh. According to Steven Laken, CEO of Cephos Corporation, one of the companies that hopes to commercialize fMRI, “Too often, people present this as a done deal. We are continuing to do research and develop the technology as much as we can.” He adds that Cephos’s scientific collaborators, based at the Medical University of South Carolina and at the University of Texas Southwestern Medical Center, in Dallas, are already exploring some of the issues brought up by the panel. They are planning studies in which subjects must carry out tasks designed to elicit an emotional response, such as stabbing a dummy, and are tested with fMRI much later, as would happen in the real world.

One of the most important tests for the technology will likely be to identify the specific situations in which fMRI can reliably detect someone's honesty or deceit. Joy Hirsch, a neuroscientist at Columbia University, in New York, agrees that real-world deceit is different from giving a false answer on request, as is done in the lab. "But the situation that I think fMRI, with its current technology, can speak to is innocence," says Hirsch. "If someone is telling the truth about something, we should be able to detect that."

Cephos does not yet offer the technology commercially, but when it does, Laken says the company will be “very selective on who it is and how it is we will scan people.”
