From SEED Magazine (first published June 8, 2010), by Moheb Costandi.

Brooklyn defense attorney David Zevin almost made legal history last month by attempting to submit functional magnetic resonance imaging (fMRI) data as evidence that a key witness was telling the truth. Zevin was defending Cynette Wilson, who claimed that the temping agency she had worked for stopped giving her work following her complaint of sexual harassment.
A co-worker at the agency claimed he had overheard Wilson’s supervisor saying that she should not be placed on jobs because of her complaint. Zevin therefore had the co-worker questioned about the incident while his brain was scanned by Cephos, one of several American companies that claim they can reliably use fMRI to establish whether somebody is telling the truth or lying. fMRI measures the changes in cerebral blood flow that accompany brain activity. Some researchers claim that they can distinguish the activation patterns associated with lying from those associated with telling the truth, and therefore that the technique can be used for reliable lie detection.
fMRI data have been used in the courtroom before, one notable example being a controversial 2008 trial in India, in which a conviction was based on brain-scan data suggesting that the suspect had been present at the crime scene. But Zevin’s was the first attempt to submit such data for the purposes of lie detection, and as such the case was considered a landmark. In the end, presiding judge Robert H. Miller excluded the fMRI data, on the grounds that admitting it would intrude on the jury’s role of assessing the credibility of the witness.

So can fMRI be used reliably for lie detection? A very small number of previous studies suggest that the technique can distinguish truth from falsehood under controlled and highly contrived experimental conditions; it is by no means clear whether this can be replicated in real-life situations. fMRI is generally thought to be no more or less reliable than the traditional polygraph test, which measures changes in various physiological parameters, such as heart rate, blood pressure, and skin conductance, that can change when someone is lying. The success rate of the polygraph is only a little higher than would be expected by chance and, according to a report by the National Academies’ National Research Council, the technique cannot be relied upon to give accurate results.
An important related issue is memory, and with it the reliability of eyewitness testimony. We have known since Frederic Bartlett’s work in the 1930s, and subsequently from pioneering researchers such as Elizabeth Loftus, that memory is reconstructive rather than reproductive. In other words, we do not always remember events as they actually happened; instead, our prejudices and biases shape our recollections. Ironically, a new fMRI study confirms this, casting further doubt on the reliability of the technique for lie detection. Jesse Rissman of Stanford University and his colleagues showed participants hundreds of photographs of faces, then scanned their brains an hour later while they viewed the same faces mixed with novel ones. The participants were asked to indicate whether or not they had seen each face before, and how certain they were; meanwhile, the researchers identified the patterns of brain activity associated with each response. Rissman’s team found that they could accurately predict, from the measured brain activity alone, whether a participant experienced each face as novel or familiar, and that the predictions were most accurate (95 percent) when the participants were most confident in their answers. They could also predict whether each novelty judgment was right or wrong.
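The kind of analysis being described is multivariate pattern classification: a classifier is trained on trial-by-trial activity patterns and evaluated on held-out trials, with accuracy compared against chance. The sketch below is purely illustrative and is not Rissman’s actual analysis pipeline; it uses simulated data and an off-the-shelf scikit-learn classifier, and the trial counts, voxel counts, and signal strength are invented for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Simulated "activity patterns": one row per trial, one column per voxel.
# These numbers are invented purely for illustration.
n_trials, n_voxels = 400, 200
labels = rng.integers(0, 2, size=n_trials)  # 0 = judged novel, 1 = judged familiar

# Give the "familiar" trials a weak, noisy signal in a subset of voxels,
# so the classifier has something to pick up on.
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.8

# Cross-validated decoding: train on some trials, test on held-out trials.
classifier = LogisticRegression(max_iter=1000)
folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accuracy = cross_val_score(classifier, patterns, labels, cv=folds)

print(f"Mean decoding accuracy: {accuracy.mean():.2f} (chance is about 0.50)")
```

Figures like the 95 percent reported above, and the 59 percent reported below, are of this kind: out-of-sample classification accuracy, not a direct readout of what is stored in memory.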
The researchers then tried to distinguish the brain activity from trials in which old faces were rightly remembered from that of trials in which novel faces were falsely remembered. This time, the accuracy of their predictions fell to around 59 percent, and it was even lower on trials in which the participants were not very confident about their judgment. The researchers were thus able to establish what their participants thought they remembered, but not what they actually remembered. “We need to do more work and plan to look at more in-depth memories and witness accounts,” Rissman told the BBC. “So the practical application [in the courtroom] is far off yet.”
Rissman’s comment reflects the general consensus among neuroscientists: that fMRI data cannot yet reliably distinguish truth from lies. And his research suggests that it may never be possible to use fMRI to do so, because of the very nature of memory itself. It follows, then, that such data should not be used in the courtroom for lie detection purposes. Inevitably, though, we will see more attempts to submit them as evidence in court cases and, it seems, they will be evaluated on a case-by-case basis. A good precedent was set earlier this week, in a federal court case in Tennessee, in which the attorney defending Lorne Semrau against charges of fraud attempted to submit brain scans as evidence that his client was telling the truth. After hearing evidence from experts including neuroethicist Martha Farah of the University of Pennsylvania and Cephos CEO Steve Laken, Judge Tu Pham, presiding over the case, deemed the data inadmissible because they did not meet the guidelines for admitting expert scientific evidence. Pham added, however, that “fMRI-based lie detection… methodology may be found to be admissible… [in the future, after] undergo[ing] further testing, development, and peer review [to] improve upon standards.”
Pham’s ruling was sound: It is not currently possible to distinguish truth from lies on the basis of fMRI data. That may change one day, with advances in neuroimaging technology and, perhaps more crucially, big leaps in our understanding of how the brain works. Until that day, fMRI data should not be admissible as evidence in courts of law.
Moheb Costandi is the author of the Neurophilosophy blog.