[...J]ournal editors and journalists do not necessarily exercise appropriate caution. That's not because journal editors are dumb and don't get statistics, but because scientific journals are looking for novel and interesting results, not "We did a study and look, we found exactly what you'd have expected before you'd plowed through our four pages of analysis." This "publication bias" means that journals are basically selecting for outliers. In other words, they are in the business of publishing papers that, for no failure of method but simply from sheer dumb luck, happened to get an unusual sample. They are going to select for those papers more than they should -- especially in fields that study humans, who are expensive and reluctant to sit still for your experiment, rather than something like bacteria, which can be studied in numbers ending in lots of zeroes.
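The small-sample point is easy to see in a toy simulation. The sketch below is mine, not McArdle's, and every number in it is made up for illustration (the true effect size, the group sizes, the "splashiest 5% get published" rule): when each study has only twenty people per group, the results lucky enough to clear the bar report an effect several times the true one; with hundreds per group, the exaggeration mostly disappears.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # real group difference, in standard deviations (made up)
SMALL_N = 20        # people per group in a typical human study (made up)
BIG_N = 500         # a "lots of zeroes" sample (made up)
STUDIES = 2000      # how many labs run the same study

def observed_effect(n):
    """Run one simulated study and return the observed mean difference."""
    treatment = [random.gauss(TRUE_EFFECT, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(treatment) - statistics.mean(control)

for n in (SMALL_N, BIG_N):
    effects = [observed_effect(n) for _ in range(STUDIES)]
    # Crude stand-in for publication bias: only the splashiest 5% of
    # results are "novel and interesting" enough to print.
    published = sorted(effects)[int(0.95 * STUDIES):]
    print(f"n={n}: true effect {TRUE_EFFECT}, "
          f"average published effect {statistics.mean(published):.2f}")
```

In this made-up setup the small studies that get "published" report an effect several times the truth, while the big-sample ones barely exaggerate at all. That is the whole mechanism behind selecting for outliers.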
Journalists, who unfortunately often don't understand even basic statistics, are even more in this business. They easily fall into the habit (and I'm sure an enterprising reader can come up with at least one example on my part) of treating studies not as a potentially interesting result from a single and usually small group of subjects, but as a True Fact About the World. Many bad articles get written using the words "studies show," in which some speculative finding is blown up into an incontrovertible certainty. This is especially true in the case of psychology, because the results often suggest deliciously dark things about human nature, and, not occasionally, about the political enemies of the writers.
...
Well, as I said in a column about an earlier fiasco concerning research findings on American attitudes about gay marriage, "We reward people not for digging into something interesting and emerging with great questions and fresh uncertainty, but for coming away from their investigation with an outlier -- something really extraordinary and unusual. When we do that, we're selecting for stories that are too frequently, well, incredible." This is true of academics, who get rewarded with plum jobs not for building a well-designed study that offers a messy and hard-to-interpret result, but for generating interesting findings.
Likewise, journalists are not rewarded for writing stories that say "Gee, everything's complicated, it's hard to tell what's true, and I don't really have a clear narrative with heroes and villains." Readers like a neat package with a clear villain and a hero, or at least clear science that can tell them what to do. How do you get that story? That's right, by picking out the outliers. Effectively, academia selects for outliers, and then we select for the outliers among the outliers, and then everyone's surprised that so many "facts" about diet and human psychology turn out to be overstated, or just plain wrong.
She is still pissed that social science studies show conservatives are low-information, biased, callous, error-prone authoritarians, but doesn't have the nerve to directly attack someone like Bob Altemeyer.
3 comments:
I just see laziness in the Megan mess. Lots of other people, especially Ed Yong at the Atlantic*, have done good jobs talking about the Nosek project, so Megan has to drop some drivel.
*Is the Atlantic better off without McArdle? - http://www.theatlantic.com/health/archive/2015/08/psychology-studies-reliability-reproducability-nosek/402466/
There's always that.
It was a hypothetical, not a statistic.