Why Most Published Research Findings Are False
"Why Most Published Research Findings Are False"[1] is a 2005 essay written by John Ioannidis, a professor at the Stanford School of Medicine, and published in PLOS Medicine. It is considered foundational to the field of metascience.
In the paper, Ioannidis argued that a large number, if not the majority, of published medical research papers contain results that cannot be replicated. In simple terms, the essay states that scientists use hypothesis testing to determine whether scientific discoveries are significant. "Significance" is formalized in terms of probability, and one such calculation, the P value, is reported in the scientific literature as a screening mechanism. Ioannidis posited assumptions about the way researchers perform and report these tests, then constructed a statistical model which indicates that most published findings are false positive results.
Argument
Suppose that in a given scientific field there is a known baseline probability that a result is true, denoted by $P(\text{True})$. When a study is conducted, the probability that a positive result is obtained is $P(+)$. Given these two factors, we want to compute the conditional probability $P(\text{True} \mid +)$, which is known as the positive predictive value (PPV). Bayes' theorem allows us to compute the PPV as:

$$P(\text{True} \mid +) = \frac{(1 - \beta)\, P(\text{True})}{(1 - \beta)\, P(\text{True}) + \alpha\, (1 - P(\text{True}))}$$

where $\alpha$ is the type I error rate and $\beta$ is the type II error rate; the statistical power is $1 - \beta$. It is customary in most scientific research to desire $\alpha = 0.05$ and $\beta = 0.2$. If we assume $P(\text{True}) = 0.1$ for a given scientific field, then we may compute the PPV for different values of $\alpha$ and $\beta$:
| $\alpha \setminus \beta$ | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
|---|---|---|---|---|---|---|---|---|---|
| 0.01 | 0.91 | 0.90 | 0.89 | 0.87 | 0.85 | 0.82 | 0.77 | 0.69 | 0.53 |
| 0.02 | 0.83 | 0.82 | 0.80 | 0.77 | 0.74 | 0.69 | 0.63 | 0.53 | 0.36 |
| 0.03 | 0.77 | 0.75 | 0.72 | 0.69 | 0.65 | 0.60 | 0.53 | 0.43 | 0.27 |
| 0.04 | 0.71 | 0.69 | 0.66 | 0.63 | 0.58 | 0.53 | 0.45 | 0.36 | 0.22 |
| 0.05 | 0.67 | 0.64 | 0.61 | 0.57 | 0.53 | 0.47 | 0.40 | 0.31 | 0.18 |
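These values follow directly from the formula above. A minimal Python sketch reproduces them (the function and variable names are illustrative, not from the paper):

```python
def ppv(alpha: float, beta: float, p_true: float) -> float:
    """Positive predictive value from Bayes' theorem, given the type I
    error rate (alpha), the type II error rate (beta), and the baseline
    probability that a tested result is true (p_true)."""
    power = 1.0 - beta
    return power * p_true / (power * p_true + alpha * (1.0 - p_true))

# Reproduce the bottom row of the table (alpha = 0.05, P(True) = 0.1):
print([round(ppv(0.05, b / 10, 0.1), 2) for b in range(1, 10)])
# [0.67, 0.64, 0.61, 0.57, 0.53, 0.47, 0.4, 0.31, 0.18]
```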
However, the simple formula for PPV derived from Bayes' theorem does not account for bias in study design or reporting. In the presence of bias $u$, the proportion of analyses that would not have yielded a positive result but are nevertheless reported as positive, the PPV is given by the more general expression:

$$\text{PPV} = \frac{(1 - \beta)\, P(\text{True}) + u \beta\, P(\text{True})}{(1 - \beta)\, P(\text{True}) + u \beta\, P(\text{True}) + \alpha\, (1 - P(\text{True})) + u (1 - \alpha)(1 - P(\text{True}))}$$
The introduction of bias will tend to depress the PPV; in the extreme case when the bias of a study is maximized ($u = 1$), $\text{PPV} = P(\text{True})$. Even if a study meets the benchmark requirements for $\alpha$ and $\beta$, and is free of bias, there is still a 36% probability that a paper reporting a positive result will be incorrect, since $\text{PPV} = 0.64$ when $\alpha = 0.05$, $\beta = 0.2$, and $P(\text{True}) = 0.1$; if the baseline probability of a true result is lower, then this will push the PPV lower too. Furthermore, there is strong evidence that the average statistical power of a study in many scientific fields is well below the benchmark level of 0.8.[2][3][4]
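A companion sketch of the bias-adjusted expression (again with illustrative names, and the same assumed $P(\text{True}) = 0.1$) shows how quickly bias erodes the PPV:

```python
def ppv_biased(alpha: float, beta: float, p_true: float, u: float) -> float:
    """PPV when a fraction u of analyses that would not have produced a
    positive result are nevertheless reported as positive (bias)."""
    true_pos = (1.0 - beta) * p_true + u * beta * p_true
    false_pos = alpha * (1.0 - p_true) + u * (1.0 - alpha) * (1.0 - p_true)
    return true_pos / (true_pos + false_pos)

print(round(ppv_biased(0.05, 0.2, 0.1, 0.0), 2))  # 0.64 -- no bias
print(round(ppv_biased(0.05, 0.2, 0.1, 0.3), 2))  # 0.22 -- moderate bias
print(round(ppv_biased(0.05, 0.2, 0.1, 1.0), 2))  # 0.1  -- maximal bias: PPV = P(True)
```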
Given the realities of bias, low statistical power, and a small number of true hypotheses, Ioannidis concludes that the majority of studies in a variety of scientific fields are likely to report results that are false.
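As an illustration (the specific numbers here are assumptions chosen for the example, not figures from the paper), combining below-benchmark power with modest bias already pushes the PPV well under one half:

```python
# Illustrative values: power 0.5 (beta = 0.5), P(True) = 0.1, bias u = 0.2.
alpha, beta, p_true, u = 0.05, 0.5, 0.1, 0.2
true_pos = (1 - beta) * p_true + u * beta * p_true
false_pos = alpha * (1 - p_true) + u * (1 - alpha) * (1 - p_true)
print(round(true_pos / (true_pos + false_pos), 2))  # 0.22 -- ~4 in 5 positives false
```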
Corollaries
In addition to the main result, Ioannidis lists six corollaries for factors that can influence the reliability of published research:
- The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
- The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
- The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
- The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
- The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
- The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.
Reception and influence
Despite skepticism about extreme statements made in the paper, Ioannidis' broader argument and warnings have been accepted by a large number of researchers.[5] The growth of metascience and the recognition of a scientific replication crisis have bolstered the paper's credibility, and led to calls for methodological reforms in scientific research.[6][7]
In commentaries and technical responses, statisticians Goodman and Greenland identified several fallacies in Ioannidis' model.[8][9] They rejected his dramatic and exaggerated language, in particular the claims that he had "proved" that most research findings are false and that "most research findings are false for most research designs and for most fields" [italics added], yet they agreed with his paper's conclusions and recommendations.

Biostatisticians Jager and Leek criticized the model as being based on justifiable but arbitrary assumptions rather than empirical data, and conducted an investigation of their own which estimated the false positive rate in biomedical studies at around 14%, not over 50% as Ioannidis asserted.[10] Their paper was published in a 2014 special edition of the journal Biostatistics along with extended, supporting critiques from other statisticians. Leek summarized the key points of agreement as: when talking about the science-wise false discovery rate one has to bring data; there are different frameworks for estimating the science-wise false discovery rate; and "it is pretty unlikely that most published research is false," though this probably varies by one's definition of "most" and "false".[11]

Statistician Ulrich Schimmack reinforced the importance of an empirical basis for such models, noting that the false discovery rate reported in some scientific fields is not the actual false discovery rate, because non-significant results are rarely published. Ioannidis' theoretical model fails to account for this; when a statistical method ("z-curve") that estimates the number of unpublished non-significant results is applied to two examples, the false positive rate comes out between 8% and 17%, not greater than 50%.[12]

Despite these weaknesses, there is general agreement with the problem Ioannidis describes and with his recommendations, yet his tone has been described as "dramatic" and "alarmingly misleading," which runs the risk of making people unnecessarily skeptical or cynical about science.[8][13]
A lasting impact of this work has been awareness of the underlying drivers of the high false positive rate in clinical medicine and biomedical research, and efforts by journals and scientists to mitigate them. Ioannidis restated these drivers in 2016 as being:[14]
- Solo, siloed investigator limited to small sample sizes
- No pre-registration of hypotheses being tested
- Post-hoc cherry picking of hypotheses with best P values
- Only requiring P < .05
- No replication
- No data sharing
References
- Ioannidis, John P. A. (2005). "Why Most Published Research Findings Are False". PLOS Medicine. 2 (8): e124. doi:10.1371/journal.pmed.0020124. ISSN 1549-1277. PMC 1182327. PMID 16060722.
- Button, Katherine S.; Ioannidis, John P. A.; Mokrysz, Claire; Nosek, Brian A.; Flint, Jonathan; Robinson, Emma S. J.; Munafò, Marcus R. (2013). "Power failure: why small sample size undermines the reliability of neuroscience". Nature Reviews Neuroscience. 14 (5): 365–376. doi:10.1038/nrn3475. ISSN 1471-0048. PMID 23571845.
- Szucs, Denes; Ioannidis, John P. A. (2017-03-02). "Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature". PLOS Biology. 15 (3): e2000797. doi:10.1371/journal.pbio.2000797. ISSN 1545-7885. PMC 5333800. PMID 28253258.
- Ioannidis, John P. A.; Stanley, T. D.; Doucouliagos, Hristos (2017). "The Power of Bias in Economics Research". The Economic Journal. 127 (605): F236–F265. doi:10.1111/ecoj.12461. ISSN 1468-0297.
- Belluz, Julia (2015-02-16). "John Ioannidis has dedicated his life to quantifying how science is broken". Vox. Retrieved 2020-03-28.
- "Low power and the replication crisis: What have we learned since 2004 (or 1984, or 1964)? « Statistical Modeling, Causal Inference, and Social Science". statmodeling.stat.columbia.edu. Retrieved 2020-03-28.
- Wasserstein, Ronald L.; Lazar, Nicole A. (2016-04-02). "The ASA Statement on p-Values: Context, Process, and Purpose". The American Statistician. 70 (2): 129–133. doi:10.1080/00031305.2016.1154108. ISSN 0003-1305.
- Goodman, Steven; Greenland, Sander (24 April 2007). "Why Most Published Research Findings Are False: Problems in the Analysis". PLOS Medicine. 4 (4): e168. doi:10.1371/journal.pmed.0040168. Archived from the original on 16 May 2020.
- Goodman, Steven; Greenland, Sander. "ASSESSING THE UNRELIABILITY OF THE MEDICAL LITERATURE: A RESPONSE TO "WHY MOST PUBLISHED RESEARCH FINDINGS ARE FALSE"". Collection of Biostatistics Research Archive. Working Paper 135: Johns Hopkins University, Dept. of Biostatistics Working Papers. Archived from the original on 2 November 2018.CS1 maint: location (link)
- Jager, Leah R.; Leek, Jeffrey T. (1 January 2014). "An estimate of the science-wise false discovery rate and application to the top medical literature". Biostatistics. 15 (1): 1–12. doi:10.1093/biostatistics/kxt007. Archived from the original on 11 June 2020.
- Leek, Jeff. "Is most science false? The titans weigh in". simplystatistics.org. Archived from the original on 31 January 2017.
- Schimmack, Ulrich (16 January 2019). "Ioannidis (2005) was wrong: Most published research findings are not false". Replicability-Index. Archived from the original on 19 September 2020.
- Ingraham, Paul (15 September 2016). "Ioannidis: Making Science Look Bad Since 2005". www.PainScience.com. Archived from the original on 21 June 2020.
- Minikel, Eric V. (17 March 2016). "John Ioannidis: The state of research on research". www.cureffi.org. Archived from the original on 17 January 2020.
Further reading
- Carnegie Mellon University, Statistics Journal Club: Summary and discussion of: “Why Most Published Research Findings Are False”
- Applications to Economics: De Long, J. Bradford; Lang, Kevin. "Are all Economic Hypotheses False?" Journal of Political Economy. 100 (6): 1257–1272, 1992
- Applications to Social Sciences: Hardwicke, Tom E.; Wallach, Joshua D.; Kidwell, Mallory C.; Bendixen, Theiss; Crüwell, Sophia; Ioannidis, John P. A. "An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014–2017)." Royal Society Open Science. 7: 190806, 2020.
External links
- YouTube video(s) from the Berkeley Initiative for Transparency in the Social Sciences, 2016, "Why Most Published Research Findings are False" (Part I, Part II, Part III)
- YouTube video of John Ioannidis at Talks at Google, 2014 "Reproducible Research: True or False?"