There's something deeply compelling about "seeing" the mind at work with the help of relatively new neuroscientific tools, such as functional Magnetic Resonance Imaging (fMRI), which furnish the images of brain activation that often accompany popular science coverage. Indeed, a well-known 2008 paper by McCabe and Castel reported that people thought articles containing fMRI images of the brain reflected better scientific reasoning than matched articles that did not.
The finding confirmed the widespread belief that people are all-too-easily duped by neuroscience, what with its aura of hard science and colorful offerings of "brain porn." Even before the publication of the influential 2008 study, Paul Bloom, a psychology professor at Yale, speculated that fMRI captures the popular imagination for psychological reasons of which we should be wary:
The media, critical funding decisions, precious column inches, tenure posts, science credibility and the popular imagination have all been influenced by fMRI's seductive but deceptive grasp on our attentions. It's a pervasive influence, and it's not because the science is better.
Why does it affect us so? Probably because fMRI seems more like real science than many of the other things that psychologists are up to. It has all the trappings of work with great lab-cred: big, expensive, and potentially dangerous machines, hospitals and medical centers, and a lot of people in white coats.
Yet a series of recent papers suggests the story may be more complex. If we are seduced by neuroscience, it might not be the pretty pictures that people find so alluring.
In one set of studies, authored by Hook and Farah and published in the September issue of the Journal of Cognitive Neuroscience, people judged research summaries that included fMRI images to be no more surprising, innovative, worthy of funding, or illustrative of good scientific reasoning than summaries accompanied by other images, such as photographs associated with the summarized research. (Hook and Farah's initial experiment did find that fMRI images increased people's ratings of how interesting the research summary was, but this small effect wasn't replicated in their subsequent experiments.)
Another set of studies, authored by Schweitzer, Baker, and Risko and forthcoming in the December issue of the journal Cognition, found that neuroimages boosted assessments of scientific reasoning only under very particular conditions. When participants read two fake research summaries that both involved flawed reasoning, the second was perceived to be better if it was the only one of the two to contain a three-dimensional fMRI image, suggesting that something about the comparison between (poor) studies that did and did not involve brain images led to an advantage for the former.
So brain images may have some effect on people's responses to scientific news reports, but it's unlikely to be a particularly powerful or pervasive one. These new results challenge both the 2008 findings and the compelling intuition that there's something special about a picture of the brain.
First, why the failure to replicate the original 2008 results?
One intriguing possibility, raised by Schweitzer and colleagues, is that responses to brain images have actually changed since 2008. Brain images have not only lost some of their novelty, but have also suffered from a growing "neuro-backlash" that questions what we can and can't learn from contemporary neuroscience.
Another suggestion, raised by Jay Van Bavel and Dominic Packer in a nice post at Scientific American Mind, is that we've been asking the wrong question. Instead of asking "Do brain images seduce?", we should ask: "When and why might they seduce?" Even if effects of brain images aren't as powerful or pervasive as once thought, the findings by Schweitzer and colleagues suggest that they do appear under the right conditions – perhaps when they're made salient and people lack more reliable bases for evaluating the quality of research.
In fact, it may simply be that people are (sometimes) seduced by science, with no special role for the neuro or the image. In another well-known 2008 paper, Weisberg, Keil, Goodstein, Rawson, and Gray presented participants with explanations for psychological phenomena that either did or did not include an irrelevant bit of neuroscience, but with no brain images at all. While even non-expert participants were quite good at differentiating non-circular from circular explanations when no neuroscience was involved, their ability to do so significantly deteriorated with the addition of irrelevant neuroscience. Experts weren't similarly misled.
Along similar lines, a study published last year found that adding irrelevant math to a scientific abstract made non-experts, but not experts, judge the work to be of higher quality.
So anything that smacks of hard, objective science might be enough to sway people's judgments when more effective ways to assess scientific quality aren't readily available, either because information is limited or because the individual lacks relevant expertise. There might not be anything special about neuroimages, or even about neuroscience.
But we're still left with the second puzzle – the puzzle of why it was (and is) so easy to believe that brain images have a powerful and pervasive allure when the evidence is actually so weak.
Above all, brain images seem to provide a direct window onto the mind, capturing nebulous aspects of thought in something with a spatiotemporal profile. In more technical terms, they seem to have the evidential or epistemic force of a photograph, a relatively unmediated representation of what's going on under the hood. But as the philosopher Adina Roskies points out, the analogy between a photograph and a neuroimage is potentially misleading. Generating a neuroimage requires a great deal of analysis and a great many assumptions; it is arguably no (evidentially) closer to the brain than a bar graph or another summary representation of a scientific result. As 13.7's own Alva Noë put it, brain images are not pictures of our brain in action.
Perhaps even non-experts appreciate these points, and aren't lulled into thinking brain images tell us more than they do. I think this is unlikely. Some members of the public may appreciate the inferential leaps behind a neuroimage, but the details get pretty complicated pretty quickly, and popular science coverage rarely even hints at this complexity.
Instead, it could be that neuroimages – and neuroscience in general – are a battleground for warring intuitions. On the one hand, there's a tendency towards reductionism that leads people to find concrete, spatially meaningful renderings of brain activity irresistibly real and objective in a way that beliefs and emotions and the other contents of mental life are not. On the other hand, we resist the idea that mental life is nothing more than a series of spatiotemporally localized physical events.
So, something about neuroscientific explanations for human experience and behavior could strike us as deeply right ... but deeply incomplete. And these warring intuitions could result in only small or inconsistent effects on people's responses to neuroimages.
If this is the right story, it shouldn't be too surprising. After all, these conflicting responses characterize the original enthusiasm over brain imaging technologies and the ensuing neuro-skepticism. Most people may simply be of two minds when it comes to the status of neuroscience. (And if you're not convinced, just wait for an fMRI study to prove me right.)
You can keep up with more of what Tania Lombrozo is thinking on Twitter: @TaniaLombrozo