MRI Lie Detectors

Can magnetic-resonance imaging show whether people are telling the truth?

A new form of lie detector uses magnetic resonance imaging (MRI).
Image: ISM/Phototake

Nervously, my heart pounding, I remove my clothing, watch, and wedding ring. No, it’s not an extramarital tryst. The only affair I’m involved in is reporting on a new form of lie detector, one that uses magnetic resonance imaging (MRI). That explains the need to shed my clothes, which might have magnetizable metal parts in them, along with the watch and ring, which could be sucked with dangerous force into the powerful magnet of the apparatus. (Accidents from flying metal have injured and even killed MRI subjects in the past.) I then don hospital garb and climb onto a platform that glides me into the heart of an impressively large if somewhat cramped scanner.

I’m here to investigate No Lie MRI, a San Diego company that is offering US $5000 “truth-verification” sessions. Around my head, a superconducting electromagnet cooled to within a few degrees of absolute zero generates a magnetic field that’s about 50 000 times as strong as Earth’s.

To the accompaniment of various clicks and clacks, a screen above my head flashes a series of questions in front of my eyes. Did I ever claim more than I should have for business expenses? Have I cheated on my wife? Have I pretended to be ill in the last year? I am tested on nine questions in all. The topics are serious enough to provoke strong emotional responses but innocent enough to save No Lie MRI from having to report me to the authorities should I appear to be covering something up. I had settled on the questions with the company beforehand and promised to provide truthful answers to all but one of them, giving No Lie MRI a one-in-nine chance (about 11 percent) of spotting my fib by luck alone.

While I’m in the machine, the questions arrive at unpredictable intervals and are repeated several times throughout the session. Control questions are shuffled in at random. After 10 minutes, my grilling is over, and the complex business of analyzing the data begins, a process that will take several days. I’ve done my best to deceive my interrogators, and they will do their best to read my mind.

No one is suggesting that such elaborate tests are suitable for petty criminal cases or regular employee screenings—especially at $5000 a pop. And the courts are far from accepting the results of such scans as evidence. But that hasn’t stopped lots of people from putting their money where their brain waves are. No Lie MRI’s clients include a store owner who wanted to prove that he did not commit arson for the insurance payout, a woman trying to convince her husband that she hadn’t been unfaithful, and a father denying allegations of child abuse.

Could this really work? A machine that could reliably separate truth from lies would be a police detective’s dream—and a civil libertarian’s nightmare. Your opinion may depend on which side of the device you’re on, but many people would like nothing better than a truly foolproof lie detector. Until now, all that’s been available is the polygraph—a cobbled-together battery of sensors that monitor the subject’s pulse, sweating, and breathing rate. Polygraph testing is error prone, and experts struggle even to quantify its level of reliability.

One reason for that struggle is that the interpretation of a polygraph’s measurements is unavoidably subjective. Set the detection threshold low enough and you’ll net almost any liar. But you’ll also falsely identify many truth tellers. Laboratory studies of polygraph testing show that when you set the threshold so that the false positive rate is a troublingly high 30 percent, you’ll still detect lies only between 64 and 100 percent of the time. That’s a wide range, and the low end reflects rather poor performance for a lie detector. Also, experts generally agree that polygraph testing probably works worse in the real world than it does in the lab, though how much worse isn’t clear.
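
To see the trade-off in miniature, here is a minimal Python sketch, assuming two made-up Gaussian score distributions for truth tellers and liars (the numbers are illustrative assumptions, not figures from the polygraph literature):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "deception scores": liars tend to score higher, but the
# two populations overlap, which is what makes thresholding so fraught.
truth_tellers = rng.normal(loc=0.0, scale=1.0, size=100_000)
liars = rng.normal(loc=1.2, scale=1.0, size=100_000)

for threshold in (0.0, 0.5, 1.0, 1.5):
    false_positives = np.mean(truth_tellers > threshold)
    detections = np.mean(liars > threshold)
    print(f"threshold {threshold:.1f}: catches {detections:.0%} of liars, "
          f"falsely accuses {false_positives:.0%} of truth tellers")
```

Slide the threshold down and both rates climb together; no setting catches every liar without also implicating honest subjects.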

For these reasons, the U.S. National Academy of Sciences was somewhat vague in its overall assessment of polygraphs, saying that these machines could (at best) discriminate lies from the truth at rates “well above chance, though well below perfection” and remain “an unacceptable choice for security screening.” No wonder the U.S. legal system has never fully embraced this technology. In most other parts of the world, the courts, law enforcement, and even the business community just scoff at it.

So now we have a new breed of lie detector, based on MRI, that promises to do away with using unreliable physiological responses to reveal a person’s innermost thoughts. With the help of multimillion-dollar scanners, sophisticated pattern-matching algorithms, and cutting-edge neuroscience, you can now detect the hardwired patterns in the brain that indicate deception—or at least that’s what supporters claim. I was determined to find out for myself whether this was true, even if I had to ’fess up to some personal foibles to do it.

Images: No Lie MRI
The truth be told? [left] When the author was asked whether he had ever feigned illness to escape an obligation, his prefrontal cortex showed no unusual activity.
Liar, liar? [right] When the author was asked whether he had ever padded an expense report, his prefrontal cortex became highly active [areas highlighted with hot colors].

“The mechanism in your brain is the same regardless of whether you tell a big lie or a little lie,” says Joel Huizenga, chief executive officer of No Lie MRI. “It doesn’t matter whether you feel guilty or not, it doesn’t matter if you’ve memorized your story, and it doesn’t matter whether you believe your lie would save the world. We can still spot it.” Huizenga foresees a day when philanthropic foundations won’t hand over funds to charities and venture capitalists won’t invest in start-ups unless the prospective recipients pass an MRI brain scan for honesty.

The only other company now offering commercial MRI lie detection, Cephos Corp., based in Tyngsboro, Mass., grew out of academic research done at the Medical University of South Carolina. That research was funded in part by the Department of Defense’s Defense Academy for Credibility Assessment, the agency that oversees federal polygraph training. “We’ve done really good work that has been published and peer reviewed,” says Cephos president Steven Laken, a Ph.D. neuroscientist. “We have something that’s 97 percent accurate.”

It’s no great surprise that modern technology should be able to supplant traditional polygraph testing, which was first developed a century ago. What’s remarkable, though, is that nobody actually set out to design a new lie detector from scratch. Instead, some researchers found the makings of one in the now-ubiquitous MRI scanner.

Since the 1980s, physicians have been using MRI scanners to diagnose disorders of soft tissues. These machines work by placing the portion of the body to be scanned within a powerful magnetic field. Weaker fields are then applied rapidly at angles to the main field, causing hydrogen nuclei within body tissues (mostly in water and fat molecules) to resonate and emit faint electromagnetic signals. The detailed characteristics of those signals depend on the position as well as the physical and chemical environment around the emitting nuclei. The data collected with an MRI scanner can thus be assembled into 3-D images that permit doctors to spot many sorts of abnormalities without requiring invasive procedures.
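
The resonance itself follows the Larmor relation: hydrogen nuclei precess, and hence emit, at a frequency proportional to the field strength. A back-of-the-envelope check, using the textbook value for hydrogen and a typical 1.5-tesla clinical field (illustrative figures, not the specifications of any scanner mentioned in this story):

```latex
% Larmor relation: resonance frequency f in a static field B_0
f = \frac{\gamma}{2\pi} B_0,
\qquad
\frac{\gamma}{2\pi} \approx 42.58~\mathrm{MHz/T}
\quad \text{for } {}^{1}\mathrm{H}

% At a typical 1.5-T clinical field:
f \approx 42.58~\mathrm{MHz/T} \times 1.5~\mathrm{T} \approx 64~\mathrm{MHz}
```

Those faint tens-of-megahertz radio signals are what the scanner’s receiver coils pick up and assemble into images.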

Scientists soon realized that MRI technology also provides a way to chart the functioning of certain organs. The trick is to make faster, less precise scans, which give rapid-fire snapshots of the body’s dynamic functioning, a methodology that became known as functional MRI, or fMRI for short.

One of the things that can be tracked with fMRI is how oxygenated the blood is in a particular area. That’s because the magnetic properties of hemoglobin, the oxygen-carrying molecule in blood, depend on how much oxygen it has on board. And because the nerve cells of the brain require a greater quantity of oxygenated blood when they’re busy processing information, an appropriately configured fMRI brain scan can trace the locus of mental activity.

This technique was pioneered in the early 1990s, and once it was developed, psychologists became very interested in what it might show. Was it possible to correlate areas of increased brain activity with particular mental and emotional states? Could MRI scanners gather information not just about the brain but also about the mind itself? Neuroscientists were still debating these issues when some researchers, including Daniel Langleben at the University of Pennsylvania and Sean Spence at the University of Sheffield, in England, began fMRI experiments in the early 2000s focused on revealing brain states associated with deception.

Their efforts to use fMRI to detect lies relied on a technique called cognitive subtraction. The idea is that when a person tells the truth about something, many parts of his or her brain may become active. For example, if somebody tells you that he likes colorful clothing, certain parts of his brain must shape that thought and its expression. Now, let’s say that same person tells a lie—perhaps that he loves your new yellow-and-purple-plaid golf pants. In this case, the same parts of his brain would presumably go to work, but there would be additional activity in other regions, too, perhaps those involved in suppressing the chuckle your garish attire might otherwise provoke.

If an fMRI brain scan were performed in both instances—truth-telling and lying—the difference between the two scans would highlight certain areas of the brain. If you carried out similar fMRI measurements on a large number of people, you might be able to identify the brain’s deception centers. Detecting brain activity in those regions with an fMRI scanner would then, in theory, provide a way to tell when someone is being dishonest.
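
In skeletal form, the arithmetic is a voxel-by-voxel subtraction of averaged scans. Here is a minimal Python/NumPy sketch of the idea; the array shapes, z-score threshold, and function name are assumptions made for illustration, not any lab’s actual pipeline (which would, among much else, correct for multiple comparisons):

```python
import numpy as np

def deception_map(lie_scans, truth_scans, z_threshold=2.3):
    """Toy cognitive subtraction. Average the activation maps recorded
    while subjects lied and while they told the truth, subtract, and keep
    voxels whose difference is large relative to its variability.

    Both inputs are hypothetical preprocessed arrays of shape
    (n_trials, x, y, z) holding per-trial activation estimates.
    """
    diff = lie_scans.mean(axis=0) - truth_scans.mean(axis=0)
    # Standard error of the difference at each voxel.
    se = np.sqrt(lie_scans.var(axis=0) / len(lie_scans)
                 + truth_scans.var(axis=0) / len(truth_scans))
    z = np.divide(diff, se, out=np.zeros_like(diff), where=se > 0)
    # Voxels surviving the threshold are candidate "deception" regions.
    return z > z_threshold

# Illustrative run on random stand-in data (so any hits are chance).
rng = np.random.default_rng(1)
lies = rng.normal(size=(40, 8, 8, 8))
truths = rng.normal(size=(40, 8, 8, 8))
print(deception_map(lies, truths).sum(), "voxels exceed the threshold")
```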

One shortcoming of this approach is that it hinges on the assumption that everyone’s brain works the same way. But another strategy that’s sometimes applied doesn’t depend so much on all of us being wired alike: You ask the subject in your fMRI scanner to provide both truthful and deceptive answers to a series of test questions, having established in advance which answers are truths and which are falsehoods. A computer can then train itself to recognize what may be a complex and completely unique pattern of brain activity that occurs when this particular person lies.
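
A toy version of that training step, sketched with scikit-learn on synthetic numbers; the feature vectors, classifier choice, and effect size here are my assumptions for illustration, not what No Lie MRI or Cephos actually runs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Hypothetical per-trial features: activation levels in 20 brain regions
# during answers known to be truthful (label 0) or deceptive (label 1).
# A real system would derive these from the subject's fMRI scans.
n_trials, n_regions = 60, 20
X = rng.normal(size=(n_trials, n_regions))
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :5] += 0.8  # pretend lying raises activity in a few regions

# Train on this one subject's labeled truths and lies, then estimate how
# well the learned pattern generalizes to held-out answers.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.0%}")
```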

Laboratory tests of both these approaches appeared promising, and by 2008, 16 peer-reviewed papers on the subject had been published. Most of them indicated that when people are lying, there is more activity in certain parts of the prefrontal cortex, the area of the brain thought to be involved in orchestrating a person’s thoughts and actions. And most of those studies reported no areas of the brain where activity was greater when someone told the truth.

This was just the boost Huizenga and Laken needed to launch their businesses. No Lie MRI acquired the patent rights to Langleben’s specific methodology and set up shop. Cephos did the same with its own variation of the cognitive-subtraction technique. Because both operations grew out of academic research projects, experts can scrutinize the methodology being applied and argue about its merits. And argue they do.

“I doubt that there is any large group of neuroscientists that would say single-subject fMRI analysis is useful for lie detection,” says Gary Glover, a professor of radiology, neurosciences, and biophysics at Stanford University’s School of Medicine. “The way that cognitive neuroscience works is that you scan 30 or 40 people, look for average results, and then publish those. The reason for doing this is that people vary quite a bit: One person’s anecdotal result may not hold for the population in general or for any other person.”

Moreover, argues Glover, the basic fMRI technique is unlikely to become more accurate with time. “It’s about as good as it’s going to get,” he says. He doesn’t deny that MRI technology might improve, but he thinks that variations in human physiology will fundamentally limit an interrogator’s ability to detect changing cognitive states. That’s why Glover believes much more research would be needed to demonstrate that the vague and fundamentally ambiguous signals fMRI generates could provide an adequate basis for a commercial lie-detection service.

Although he helped to pioneer fMRI lie detection, Spence shares Glover’s skepticism. “Certain central problems remain, not least the absence of replication by investigators of their own key findings. Further data are required to justify its application to the field of lie detection,” he concludes.

Researchers’ doubts notwithstanding, Cephos is working hard to introduce fMRI lie detection to the American legal system. The company’s most recent effort involves a Tennessee psychologist who was accused of submitting false insurance claims. His attorneys tried to offer as evidence tests that Cephos performed in an attempt to show that he genuinely had no intent to commit fraud. Awkwardly for the defense, it came out during the trial that one scan Cephos had made of the psychologist indicated that he was lying. The company later repeated that same test, and the new results showed him to be telling the truth. Prosecutors, reasonably enough, objected to the do-over, and this past June the Tennessee court declared the fMRI results to be inadmissible, mostly because the method has never been scientifically tested under real-world conditions.

No Lie MRI has been no more successful in getting its results accepted as evidence in a court of law. “We’re making progress, but there’s a catch-22,” Huizenga complains. “Prosecutors aren’t supposed to prosecute when they don’t think a person is guilty, so if we come to the table and really convince them, then they don’t prosecute.”

In November 2009, MRI brain scans were used in court for the first time, in an application that had nothing to do with lie detection. During a sentencing hearing in Illinois for the convicted multiple murderer Brian Dugan, defense attorneys used MRI scans that showed Dugan had abnormal brain functioning. Laken thinks that marks the beginning of a trend. “We’re on track,” he says. “In one of our cases, the judge made a number of favorable comments and said that she would seriously consider the technology. She admitted the technology as evidence but didn’t use it to make a ruling. These are cracks in the glass.”

Hank Greely, director of the Center for Law and the Biosciences at Stanford Law School, is doing his best to stop those cracks from spreading. “Judges, jurors, all of us, we’ve got this longing for a magic tool that will tell us whether someone is lying or telling the truth,” he says. “But we have to be very cautious about thinking it’s here because we want it to be here. I haven’t seen anything today that leads me to believe fMRI is better than polygraphs. If we start using a bad technology, people’s lives are going to be hurt—not just innocent people who are falsely convicted but guilty people who are falsely exonerated and go on to ruin the lives of other people.”

Greely, who is a lawyer by training, believes that much more research is necessary. He is involved in some of the research himself and recently coauthored a report on the use of fMRI to determine whether a subject has recognized a particular face. Although the study indicated that fMRI could reliably show whether the subject thought he or she recognized a face, it couldn’t tell you whether the subject had truly seen that face before. This suggests that fMRI wouldn’t help in distinguishing false memories from true ones.

“Experiments are hard to design, but until we get more realistic studies, there’s no proof that what happens in the lab is relevant to what happens in the real world,” Greely says. Unfortunately, the studies needed to evaluate the reliability of fMRI lie detection in real-world situations would be extremely expensive. A five-year study covering a range of ages, languages, and cultures would run about $125 million, Greely estimates.

The one group that could afford to fund research on such a scale is not renowned for sharing its findings with scientists, lawyers, or businesses. “I know that DARPA [the Defense Advanced Research Projects Agency] has funded quite a lot of research in this area—and maybe other Department of Defense agencies for obvious interests that they might have,” says Glover. “This research is pretty much under cover of night. I happen to know about a few studies, but I would have to shoot you if I told you.”

Occasionally, evidence of the military’s interest in fMRI does see the light of day. For example, in 2006, DARPA solicited proposals for research to “understand and optimize brain functions during learning” using fMRI technology, followed a year later by requests for a transportable battlefield MRI scanner.

“For the intelligence community, what we’re interested in are going to be devices that you can use remotely,” says Sujeeta Bhatt, a research scientist with the Defense Intelligence Agency. “We can create a fantastic map of deception in fMRI, but what we use for national security has to be something that we can train anyone to use fairly easily, that’s fairly portable, and not outrageously expensive.”

Such a device won’t use fMRI, Bhatt believes. “Functional MRI has serious limitations. Countermeasures haven’t been seriously studied, but of the ones that have, simply moving your tongue can compromise the data,” she says. “And in the intelligence community, the people that you’re screening have really studied their cover stories. Will that look like truth or a lie? We’re not there yet, and in terms of using [fMRI] as a practical, everyday tool to detect human deception, I don’t think we’re ever going to be there.”

Huizenga contends that others in the military are right now seeking the know-how his company offers. “We are dealing with the military. The guys in the field are asking for this technology. They want to know whether people are telling them the truth or telling them lies.” He refuses to provide any specifics, other than saying that No Lie MRI hopes shortly to secure government funding for a multimillion-dollar, 1200-person study. If such a large study is actually carried out, it could well determine the future of fMRI lie detection.

“God knows what the intelligence community, the CIA, and MI6 are spending on this work,” says Greely. “All the studies are secret, and science doesn’t work well in secrecy.” It appears not to work all that well in San Diego, either, judging by the results of my own interrogation in the scanner.

According to No Lie MRI, when I denied that I misstated business expenses, the region of my prefrontal cortex associated with deception lit up like a Christmas tree. For the record, I never pad expense reports (note to editor: honest!).

On the other hand, when I claimed that I had not feigned illness to weasel out of an obligation, nothing out of the ordinary was going on in my prefrontal cortex, and only two spots elsewhere in my brain became active, providing no evidence of deception. In fact, I have many times claimed, falsely, that I didn’t feel well enough to take on a household chore or attend what I expected to be a dreary party.

Huizenga cautions me not to imagine this means I would make a great con man. “In a real test, we make all the questions virtually identical, allowing us to compare your answers against known truths,” he says. Perhaps so. But if fMRI lie detection is ever to break out of its academic ghetto and storm the courtroom, boardroom, or battlefield, it will have to succeed in precisely those situations where the absolute truth is not known. And you don’t need to be a mind reader to see that that day is still a long way off.

This article originally appeared in print as “Liar!”
