The spirit of All Saints continues with the November issue of the Atlantic and its annual round-up of individuals with the biggest, boldest and most controversial ideas. Earning the title of bravest-thinking scientist is John P. A. Ioannidis, a clinician and statistician at the University of Ioannina School of Medicine, whose decade-long efforts to expose false claims in medical research are profiled in David H. Freedman's "Lies, Damned Lies, and Medical Science" as a veritable witch-hunt. From his intrepid investigations, Ioannidis has reached the conclusion that 90% of discoveries in the health sciences are wrong, and that the demands of journals for significant and startling findings have turned medical research into little more than witchcraft.
What has principally cooked this crucible, and created the most download traffic for PLoS Medicine, is Ioannidis' theoretical consideration of the positive predictive value (PPV) of published medical findings - in other words, the chance that a reported discovery is in fact a true find.
A key component of the PPV is the odds of discovery: the ratio of true finds to false finds among tested hypotheses. Despite the Atlantic's reported conclusion of a 10% PPV - that is, that as much as 90% of the published medical information that doctors rely on is flawed - Ioannidis' study shows that this chance is extremely sensitive to the pre-study odds, the prior ratio of the chance of a true find to the chance of a false find, with essentially any PPV being possible over the range of odds of 1:1 or lower.
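This sensitivity is easy to see numerically. The sketch below uses Ioannidis' bias-free expression for the PPV, with a conventional significance level of 0.05 and power of 0.80 chosen here purely for illustration (neither value is specified in the Atlantic piece):

```python
def ppv(R, alpha=0.05, power=0.80):
    """Positive predictive value of a claimed finding under
    Ioannidis' simplest (bias-free) model:
        PPV = (1 - beta) * R / ((1 - beta) * R + alpha)
    where R is the pre-study odds of a true relationship,
    alpha the type I error rate, and 1 - beta the power."""
    return power * R / (power * R + alpha)

# The PPV sweeps from near-certainty to near-zero as the
# pre-study odds fall from 1:1 toward very long odds.
for R in (1.0, 0.25, 1.0 / 144, 0.001):
    print(f"pre-study odds R = {R:>8.4f}  ->  PPV = {ppv(R):.2f}")
```

Under these assumed error rates, odds of 1:1 give a PPV above 0.9, while odds near 1:144 happen to reproduce the Atlantic's 10% figure - illustrating that the headline number depends almost entirely on what one assumes about the pre-study odds.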
Depending on the field of study, accurately specifying the odds of discovery can be tricky. It is probably safe to suppose low pre-study odds for studies in genetic epidemiology, where the number of hypotheses is about the same as the number of genes, and so extremely high. Yet, for intervention studies in the medical sciences, there is greater uncertainty, because the relevant population of hypotheses is the population of tested hypotheses - a population subject to selection bias. If shrewd investigators only test theories they believe to be true, it is probable that the pre-study odds are greater than 1:1, as scientists are unlikely to invest their careers chasing discoveries with only a flip-of-the-coin's chance of being found. Rather than damned liars, medical researchers might just be choosy. They research only drugs or treatments that are likely to work - so the success rate of their research is correspondingly high.
This more optimistic view would seem to be challenged by another well-known study of Ioannidis: an empirical assessment of overturned finds based on a survey of the most cited articles appearing in high-impact medical journals (JAMA, 2005; 294: 218-228). The success rate of 91% that he found is put into doubt by the 30% of these results that were either directly contradicted by later studies or found to be of a different strength. To claim, as Ioannidis does, that this proves a high rate of inconsistency overlooks the reality that, in the medical sciences, replication is the pillar of the scientific method with the shakiest footing. In theory, any experiment should be exactly repeatable to give exactly the same result. In practice, unlike studies in the laboratory, experiments with humans have too many variables and too great an expense to allow for true repeats. Although later trials might attempt to confirm earlier findings, differences in investigators, administrators, patients and protocols will always leave the possibility that apparent inconsistencies are actually an artifact of being unable to reproduce the experiment exactly.
Ioannidis deserves praise for calling attention to the possible sources of bias encouraged by the current peer-review process of clinical literature and demonstrating how these biases could inflate the success rate of reported medical findings.
But, until the reality of these biases can be better substantiated, it is a mistake to argue that clinical research is nothing more than witch’s brew.