Neuroscience and popular culture: Who do voodoo? They do! Social neuroscientists, that is
Hey, you've heard it all from the fashion mag at the local clip shop, ... so, like, what can I add, really?
Neuroscience shows why women love shopping, why gay guys read maps like women, why jealous guys ... come to think of it, why does social neuroscience only tell us what we already heard from that high school drop-out cousin, shooting pool down in the rec room between his split shifts at the loading dock?
Is this really science? Probably not, says a team of researchers who took a statistical look at some of these studies. Many of the claimed correlations were simply too high to be possible, because the "social neuroscience" people were cherry-picking the data.
Here's the paper, "Voodoo Correlations in Social Neuroscience," in press at Perspectives on Psychological Science. The lead author, Edward Vul, is one brave MIT grad student; his co-authors are Christine Harris, Piotr Winkielman, and Harold Pashler.
Taking aim at social neuroscience, they said,
The newly emerging field of Social Neuroscience has drawn much attention in recent years, with high-profile studies frequently reporting extremely high (e.g., >.8) correlations between behavioral and self-report measures of personality or emotion and measures of brain activation obtained using fMRI. We show that these correlations often exceed what is statistically possible assuming the (evidently rather limited) reliability of both fMRI and personality/emotion measures. The implausibly high correlations are all the more puzzling because social-neuroscience method sections rarely contain sufficient detail to ascertain how these correlations were obtained.
We surveyed authors of 54 articles that reported findings of this kind to determine the details of their analyses. More than half acknowledged using a strategy that computes separate correlations for individual voxels, and reports means of just the subset of voxels exceeding chosen thresholds. We show how this non-independent analysis grossly inflates correlations, while yielding reassuring-looking scattergrams. This analysis technique was used to obtain the vast majority of the implausibly high correlations in our survey sample. In addition, we argue that other analysis problems likely created entirely spurious correlations in some cases.
We outline how the data from these studies could be reanalyzed with unbiased methods to provide the field with accurate estimates of the correlations in question. We urge authors to perform such reanalyses and to correct the scientific record.

Vul's team also suggests specific statistical means of rescuing the findings of the questionable "red list" studies, should the authors wish to perform the reanalyses.
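That "non-independent analysis" is worth seeing in action. Here is a minimal simulation in Python (my own sketch, not the authors' code or data): the "brain" is pure noise, the behavioral score is random, and the selection threshold is an arbitrary number I picked for illustration. Threshold first, average after, and a striking correlation appears out of nothing; select on one half of the subjects and estimate on the other (one unbiased approach of the kind the authors recommend), and it vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 10_000
threshold = 0.60  # arbitrary selection threshold, for illustration only

behavior = rng.standard_normal(n_subjects)            # e.g. a personality score
voxels = rng.standard_normal((n_subjects, n_voxels))  # "activation": pure noise

# Correlate every voxel with the behavioral measure.
bz = (behavior - behavior.mean()) / behavior.std()
vz = (voxels - voxels.mean(axis=0)) / voxels.std(axis=0)
r_per_voxel = bz @ vz / n_subjects

# The non-independent step: keep only voxels whose observed correlation
# clears the threshold, then report the correlation of their mean signal.
selected = np.abs(r_per_voxel) >= threshold
signal = (vz[:, selected] * np.sign(r_per_voxel[selected])).mean(axis=1)
r_biased = np.corrcoef(behavior, signal)[0, 1]
print(f"{selected.sum()} of {n_voxels} noise voxels survive the threshold")
print(f"biased, non-independent estimate: r = {r_biased:.2f}")

# One unbiased alternative: select voxels on half the subjects,
# estimate the correlation on the other half.
half = n_subjects // 2
bz1 = (behavior[:half] - behavior[:half].mean()) / behavior[:half].std()
vz1 = (voxels[:half] - voxels[:half].mean(axis=0)) / voxels[:half].std(axis=0)
r1 = bz1 @ vz1 / half
sel1 = np.abs(r1) >= threshold
signal2 = (voxels[half:, sel1] * np.sign(r1[sel1])).mean(axis=1)
r_unbiased = np.corrcoef(behavior[half:], signal2)[0, 1]
print(f"independent-half estimate: r = {r_unbiased:.2f}")
```

With pure noise going in, the biased step typically reports a correlation well above .8, complete with a reassuring-looking scattergram if you care to plot it, while the independent-half estimate hovers near zero.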
The authors of the "red list" (= highly questionable) studies have responded, denouncing the voodoo claim.
Very well, but here are some good reasons for taking "social neuroscience" with a really huge bag of sidewalk salt (not hard to find here in Toronto these days, due to a recent cold snap, and pictured above):
1. Brain studies should usually be somewhat imprecise, because everyone's brain is different. Perhaps only a few situations will genuinely produce a high, predictable finding (extreme pain?). That isn't a criticism of the field; quite the contrary, recognition of inevitable limitations is a hallmark of good science. As Vul's team puts it,
Although it is possible for voxels registered to the ‘average brain’ to be functionally matched across subjects, the variability in anatomical location of well-studied regions even in early visual cortex (V1, MT) and visual cognition (FFA) suggests to us that higher-level functions determining individual differences in personality and emotionality is not likely to be anatomically uniform across individuals (Saxe, Brett, & Kanwisher, 2006).

2. The correlations really were just too good to be true. Cashing one winning lottery ticket is one thing; cashing a number of them invites suspicion (and in one case I know of, criminal accusations). The arithmetic behind this ceiling appears in the sketch after this list.
3. Suspiciously, social neuroscience tells us what we already believe to be true, and puts a "science" spin on it. As Sharon Begley points out, quoting the blog mutuallyoccluded, the skewered studies "vindicate the crudest of stereotypes." Real science, by contrast, often challenges popular ideas and forces us to think harder than we normally would.
4. Pop culture theories or prejudices must be distinguished from common sense inferences. Pop culture theories are typically based on pop psychology fads; common sense, by contrast, is based on millennia of observation. Suppose, for example, someone claimed to "prove" through social neuroscience that most crack addicts are healthy and happy. The parade of misery through emergency rooms, courtrooms, jails, and police morgues would certainly suggest otherwise! It is not prejudice that makes us doubt such a finding, but the weight of contrary evidence from other sources. Common sense tells us to believe the weight of the evidence, not some novel finding.
5. The papers Vul's team has trashed were published in prestigious journals. That suggests that science, in this area, is in a rut. The most likely reason is that the scientists are seeking a certainty that just isn't there, and never will be. We live in a universe where indeterminacy is built in, and that won't change.
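As promised under point 2, the "too good to be true" claim is not rhetoric; it is arithmetic from classical test theory. The correlation you can observe between two noisy measures is capped at the geometric mean of their reliabilities. The reliabilities below are illustrative round numbers of the sort discussed in this debate, my assumptions for the example, not the paper's exact figures:

```python
# Classical test theory: the observable correlation between two noisy
# measures cannot exceed sqrt(reliability_x * reliability_y).
# Both reliabilities below are illustrative assumptions.
fmri_reliability = 0.7         # plausible test-retest reliability of an fMRI measure
personality_reliability = 0.8  # plausible reliability of a personality/emotion scale

ceiling = (fmri_reliability * personality_reliability) ** 0.5
print(f"maximum observable correlation: r = {ceiling:.2f}")  # about 0.75
```

So a reported r of .8 or .9 is not merely lucky; it sits above the ceiling that measurement noise allows. Hence the lottery-ticket suspicion.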
Social neuroscience, in my view, is just the latest instance of what Bruce Thornton calls "the things we know that ain't so."
Some other resources:
In "The 'Voodoo' Science of Brain Imaging," (Newsweek blog, January 09, 2009) Sharon Begley, co-author of The Mind and the Brain offers well-justified skepticism of the transparent pop science agenda. She thinks it's physics envy. A desire for precision in a field that studies the restless sea of the brain. Could be. How much easier to deal with predictable particles than constantly shifting brains!
The British Psychological Society's Research Digest blog asks, "Do you do voodoo?"
By analogy with a purely behavioural experiment, imagine the author of a new psychometric measure claiming that his new test correlated with a target psychological construct, when actually he had arrived at his significant correlation only after he had first identified and analysed just those items that showed the correlation with the target construct. Indeed, Pashler and his collaborators speculated that the editors and reviewers of mainstream psychology journals would routinely pick up on the kind of flaws seen in imaging-based social neuroscience, but that the novelty and complexity of this new field meant such mistakes have slipped through the net.

And here's Neurocritic's view ("Deconstructing the most sensationalistic recent findings in Human Brain Imaging, Cognitive Neuroscience, and Psychopharmacology").
Labels: neurobullshipping, neuroscience