Jonah Lehrer has a fascinating article in a recent New Yorker that describes a disturbing trend:
...all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It's as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn't yet have an official name, but it's occurring across a wide range of fields, from psychology to ecology. In the field of medicine the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies from cardiac stents to Vitamin E and antidepressants...a forthcoming analysis demonstrates that the efficacy of antidepressants has gone down as much as three-fold in recent decades.

Lehrer tells the story of a number of serious scientists who have reported statistically significant effects with appropriate controls, only to find them disappear over time: seemingly iron-clad results that faded away on repetition. One example is "verbal overshadowing": subjects who were shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Another theory that has fallen apart is the claim that females use symmetry as a proxy for the reproductive fitness of males. A 2005 study found that of the 50 most cited clinical research studies (randomized controlled trials), almost half were subsequently not replicated or had their effects significantly downgraded, even though these studies had guided clinical practice (hormone replacement therapy for menopausal women, low-dose aspirin to prevent heart attacks and strokes).
It is not entirely clear why this is happening, and several possibilities are mentioned:
-statistical regression to the mean: an early statistical fluke gets canceled out in later studies.
-publication bias on the part of journals, which prefer positive data over null results.
-selective reporting, or significance chasing. A review found that over 90% of psychological studies reporting statistically significant data (i.e., results with less than a 5% probability of arising by chance) found the effect they were looking for. (One classic example of selective reporting concerns testing acupuncture in Asian countries, where the data are largely positive, versus Western countries, where less than half of the studies are confirming. See today's other posting on MindBlog.)
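The first two mechanisms are easy to see in a toy simulation (this sketch is my own illustration, not from Lehrer's article; the effect size, sample size, and significance cutoff are arbitrary assumptions). If journals publish only "significant" studies of a small true effect, the published effect sizes are necessarily inflated flukes, and later replications, reported regardless of outcome, regress back toward the true value, producing an apparent decline:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # assumed small true effect, in standard-deviation units
N = 30              # assumed subjects per group in each study
Z_CUTOFF = 2.0      # rough z-score cutoff approximating p < 0.05

def run_study():
    """Simulate one two-group study; return the observed effect
    and whether it clears the significance cutoff."""
    control = [random.gauss(0, 1) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    effect = statistics.mean(treated) - statistics.mean(control)
    se = (2 / N) ** 0.5  # standard error of the difference (both sds = 1)
    return effect, abs(effect / se) > Z_CUTOFF

# Early literature: journals publish only the significant studies.
published = [e for e, sig in (run_study() for _ in range(5000)) if sig]
# Later replications: reported whether or not they are significant.
replications = [run_study()[0] for _ in range(5000)]

print(f"true effect:             {TRUE_EFFECT:.2f}")
print(f"mean published effect:   {statistics.mean(published):.2f}")
print(f"mean replication effect: {statistics.mean(replications):.2f}")
```

The published mean comes out several times larger than the true effect, while the unfiltered replications cluster near it; nothing "wore off" except the selection filter.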
The problem of selective reporting doesn't necessarily derive from dishonesty, but from the fundamental cognitive flaw that we like proving ourselves right and hate being wrong. The decline effect may actually be a decline of illusion.
We shouldn't throw out the baby with the bath water, as Lehrer notes in a subsequent blog posting. These problems don't mean we shouldn't believe in evolution or climate change:
One of the sad ironies of scientific denialism is that we tend to be skeptical of precisely the wrong kind of scientific claims. In poll after poll, Americans have dismissed two of the most robust and widely tested theories of modern science: evolution by natural selection and climate change. These are theories that have been verified in thousands of different ways by thousands of different scientists working in many different fields. (This doesn’t mean, of course, that such theories won’t change or get modified – the strength of science is that nothing is settled.) Instead of wasting public debate on creationism or the rhetoric of Senator Inhofe, I wish we’d spend more time considering the value of spinal fusion surgery, or second generation antipsychotics, or the verity of the latest gene association study.