Praise for a dubious witch-hunt

Author: Stephanie Kovalchik

The spirit of All Saints continues with the November issue of the Atlantic and its annual round-up of individuals with the biggest, boldest and most controversial ideas. Earning the title of bravest-thinking scientist is John P. A. Ioannidis, a clinician and statistician at the University of Ioannina School of Medicine, whose decade-long efforts to expose false claims in medical research are profiled in David H. Freedman's "Lies, Damned Lies, and Medical Science" as a veritable witch-hunt. From his intrepid investigations, Ioannidis has reached the conclusion that 90% of discoveries in the health sciences are wrong, and that the demands of journals for significant and startling findings have turned medical research into little more than witchcraft.

The weight of the evidence? Weighing scale for witches, Witch Museum, Freiburg, Germany. (Photographer: Flominator, via Wikimedia commons)


What has principally stirred this crucible, and created the most download traffic for PLoS Medicine, is Ioannidis' theoretical analysis of the positive predictive value (PPV) of published medical findings, in other words, the chance that a reported discovery is in fact a true find.

A key component of the PPV is the odds of discovery: the ratio of true finds to false finds among tested hypotheses. The Atlantic reports a PPV of 10% (that is, that as much as 90% of the published medical information that doctors rely on is flawed), yet Ioannidis' study shows that this chance is extremely sensitive to the pre-study odds, the prior chance of a true find relative to the chance of a false find, with essentially any PPV being possible over the range of odds of 1:1 or lower.
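That sensitivity is easy to see from the core formula in Ioannidis' PLoS Medicine paper. A minimal sketch follows, using only the basic version of the formula (his full analysis also models bias and multiple competing teams): with pre-study odds R, significance level alpha, and power 1 − beta, the PPV is (1 − beta)R / ((1 − beta)R + alpha).

```python
def ppv(R, alpha=0.05, power=0.8):
    """Positive predictive value of a claimed finding.

    R     : pre-study odds (true relationships : false relationships)
    alpha : type I error rate of the test
    power : 1 - beta, the chance of detecting a true relationship
    """
    # Among R + 1 tested hypotheses, R are true and 1 is false:
    # true positives scale with power * R, false positives with alpha.
    return (power * R) / (power * R + alpha)

# At even odds (R = 1:1) the PPV is high; at long odds it collapses.
for R in [1.0, 0.1, 0.01]:
    print(f"R = {R:>5}: PPV = {ppv(R):.2f}")
```

With conventional alpha = 0.05 and 80% power, even odds give a PPV of about 0.94, while odds of 1:100 drive it down near 0.14, which is why the choice of pre-study odds dominates the conclusion.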

Depending on the field of study, accurately specifying the odds of discovery can be tricky. It is probably safe to suppose low pre-study odds for studies in genetic epidemiology, where the number of candidate hypotheses is roughly the number of genes, and therefore extremely large. Yet for intervention studies in the medical sciences there is greater uncertainty, because the relevant population of hypotheses is the population of tested hypotheses, which is subject to selection bias. If shrewd investigators only test theories they believe to be true, it is probable that the pre-study odds are greater than 1:1, as scientists are unlikely to invest their careers chasing discoveries with only a flip-of-the-coin's chance of being found. Rather than damned liars, medical researchers might just be choosy. They research only drugs or treatments that are likely to work, so the success rate of their research is correspondingly high.

This more optimistic view would seem to be challenged by another well-known study of Ioannidis: an empirical assessment of overturned finds based on a survey of the most cited articles appearing in high-impact medical journals (JAMA, 2005; 294: 218-228). The success rate of 91% that he found is put into doubt by the 30% of these results that were either directly contradicted by later studies or found to have effects of a different strength. To claim, as Ioannidis does, that this proves a high rate of inconsistency overlooks the reality that, in the medical sciences, replication is the pillar of the scientific method with the shakiest footing. In theory, any experiment should be exactly repeatable and give exactly the same result. In practice, unlike studies in the laboratory, experiments with humans have too many variables and too great an expense to allow for true repeats. Although later trials might attempt to confirm earlier findings, differences in investigators, administrators, patients and protocols will always leave the possibility that apparent inconsistencies are actually an artifact of being unable to reproduce the experiment exactly.

Ioannidis deserves praise for calling attention to the possible sources of bias encouraged by the current peer-review process of clinical literature and demonstrating how these biases could inflate the success rate of reported medical findings.

But, until the reality of these biases can be better substantiated, it is a mistake to argue that clinical research is nothing more than witch’s brew.


Comments

Stephanie Kovalchik

David, thank you for your comments. I agree that Dr. Ioannidis is a thorough researcher who has devoted much of his career to improving the reporting of medical research. His contributions to methodology in meta-analysis have been particularly valuable. My intent in the commentary was to praise his efforts for pointing out the possible biases that can arise in medical research reporting as a consequence of the peer-review system, while criticizing conclusions in his work that do not fully reflect the uncertainties involved. The statement that "most published research findings are false" is misleading because it suggests that his article is a proof of this claim, which it is not. Still, I should have been more measured in my comments and not been carried away by the Halloween theme. My intended use of "witch-hunt" was its political sense as a campaign against persons with antithetical views. I should have taken more care to recognize its malevolent connotations and chosen a more appropriate alternative. Thank you for raising this issue.


David H. Freedman

Stephanie, your comments about the trickiness of PPV and reproducing experiments are well worth considering, and of course you're right to call for caution in demonizing research and researchers when looking at the problems with research. I would, however, like to point out that characterizing Ioannidis' work as a "witch-hunt," or suggesting that my profile of him in The Atlantic characterizes him and his work this way, is far off the mark--in fact, about 180 degrees off. A witch-hunt, of course, is about viciously and ignorantly seeking to attack groups of people who are utterly innocent of what they are being accused of. As the article makes perfectly clear, Ioannidis is a meticulous gatherer of evidence that is mostly well-accepted by the medical research community itself, and he is extremely quick to defend research and researchers against anyone who tries to use his research to suggest that researchers are "bad" or that medical science isn't a highly worthwhile endeavor, in spite of the apparently high wrongness rates. He's no witch-hunter, and there are few in the medical-science community who would say that he is.

