Friday, October 24, 2008

Distortion of science by headline-grabbing

As I scan journals for possible postings for this blog, I'm always concerned that my eye is too readily caught by a flashy marketing phrase or popularizing twist, and so neglects more boring, but possibly much more significant, work. An article in The Economist reinforces my concern by pointing to work by Young, Ioannidis, and Al-Ubaydli in PLoS Medicine that uses the economic theory of commodity markets to show how the current scientific publishing system is biased towards trumpeted results that are also more likely to be false.
In economic theory the winner’s curse refers to the idea that someone who places the winning bid in an auction may have paid too much. Consider, for example, bids to develop an oil field. Most of the offers are likely to cluster around the true value of the resource, so the highest bidder probably paid too much.

The same thing may be happening in scientific publishing, according to a new analysis. With so many scientific papers chasing so few pages in the most prestigious journals, the winners could be the ones most likely to oversell themselves—to trumpet dramatic or important results that later turn out to be false. This would produce a distorted picture of scientific knowledge, with less dramatic (but more accurate) results either relegated to obscure journals or left unpublished.

It starts with the nuts and bolts of scientific publishing. Hundreds of thousands of scientific researchers are hired, promoted and funded according not only to how much work they produce, but also to where it gets published. For many, the ultimate accolade is to appear in a journal like Nature or Science. Such publications boast that they are very selective, turning down the vast majority of papers that are submitted to them.

The assumption is that, as a result, such journals publish only the best scientific work. But Dr Ioannidis and his colleagues argue that the reputations of the journals are pumped up by an artificial scarcity of the kind that keeps diamonds expensive. And such a scarcity, they suggest, can make it more likely that the leading journals will publish dramatic, but what may ultimately turn out to be incorrect, research.
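The winner's-curse intuition described above lends itself to a short simulation (a sketch only; the true value, bidder count, and noise level are arbitrary assumptions, not figures from the paper):

```python
import random

def winners_curse(true_value=100.0, n_bidders=10, noise=20.0,
                  trials=5000, seed=1):
    """Simulate auctions in which each bidder's estimate is the true
    value plus zero-mean Gaussian noise; return the average winning bid."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        bids = [true_value + rng.gauss(0, noise) for _ in range(n_bidders)]
        total += max(bids)  # the most optimistic estimate wins
    return total / trials

avg_winning_bid = winners_curse()
# Each individual estimate is unbiased, yet the *maximum* of ten noisy
# estimates systematically overshoots the true value of 100.
print(avg_winning_bid)
```

The analogy to publishing: selecting the most dramatic result from many noisy studies, like selecting the highest of many noisy bids, builds in an upward bias even when no individual researcher is doing anything wrong.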

1 comment:

  1. There's also the problem that if you dredge datasets for relationships, you are going to find statistically significant correlations that are just random artifacts. And, ironically, the higher you set your threshold for statistical significance, the more confident you are likely to be that any artifact that does pop up is correct.
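The commenter's point about data dredging can be illustrated with a small simulation (a sketch; the variable counts, sample size, and the normal approximation to the critical correlation are my assumptions, not anything from the post):

```python
import math
import random

def pearson_r(x, y):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def dredge(n_vars=100, n_obs=30, alpha=0.05, seed=0):
    """Correlate many pure-noise predictors with a pure-noise outcome
    and count how many clear the significance bar by chance alone."""
    rng = random.Random(seed)
    outcome = [rng.gauss(0, 1) for _ in range(n_obs)]
    # Normal approximation to the critical |r| at this alpha (two-sided).
    z = 1.96 if alpha == 0.05 else 2.58
    r_crit = z / math.sqrt(n_obs - 1)
    hits = 0
    for _ in range(n_vars):
        predictor = [rng.gauss(0, 1) for _ in range(n_obs)]
        if abs(pearson_r(outcome, predictor)) > r_crit:
            hits += 1
    return hits

# With 100 noise-only variables tested at alpha = 0.05, roughly five
# spurious "findings" are expected on average, purely by chance.
print(dredge())
```

Every "hit" here is a random artifact, yet each one would pass a conventional significance test; and precisely because it cleared the bar, it is easy to mistake it for a real effect.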