
The Gold Standard? - Part 2


Dr. John Ioannidis. C.F. Rehnborg Chair in Disease Prevention at Stanford University; Professor of Medicine, Professor of Health Research and Policy, and Professor (by courtesy) of Biomedical Data Science at the School of Medicine; Professor (by courtesy) of Statistics at the School of Humanities and Sciences; co-Director, Meta-Research Innovation Center at Stanford; Director of the PhD program in Epidemiology and Clinical Research

The FOX is Guarding the Chicken Coop.

Dr. Ioannidis is highly credible. His credentials are only partly stated under his image. https://profiles.stanford.edu/john-ioannidis


As you browse his list of publications and qualifications, ask yourself the following question: if he is wrong, why is he still employed by Stanford University, one of the most prestigious medical schools on the planet?


Dr. Ioannidis is the major critic of Gold Standard #2 - Randomized Controlled Trials. These are the studies that put pharmaceutical drugs in your medicine cabinet. Below is a short list of his publications with key statements from each.


1. Ioannidis: Most Research Is Flawed; Let's Fix It – Medscape. He estimated that 90% of medical research is flawed.


2. A user's guide to inflated and manipulated impact factors. This article may not seem to belong here, but if journal owners are lying about the influence of their journals, then can we trust what they publish?


3. Why most published research findings are false. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.

TJL Comment: That's fine as long as your health isn't at stake - but it is!


4. Power failure: why small sample size undermines the reliability of neuroscience. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles.


5. Contradicted and initially stronger effects in highly cited clinical research. Controversies are most common with highly cited nonrandomized studies, but even the most highly cited randomized trials may be challenged and refuted over time, especially small ones.


6. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. Studies that report positive or significant results are more likely to be published and outcomes that are statistically significant have higher odds of being fully reported.

TJL Comment: Do you learn more from mistakes or successes?

Shouldn't we publish both?


7. Better reporting of harms in randomized trials: an extension of the CONSORT statement. In response to overwhelming evidence of the consequences of poor-quality reporting of randomized controlled trials (RCTs), especially regarding the safety of drugs, many medical journals and editorial groups have now endorsed the Consolidated Standards of Reporting Trials (CONSORT) statement.


8. Why most discovered true associations are inflated. Newly discovered true associations often have inflated effects compared with the true effect sizes.


9. Summing up evidence: one answer is not always enough. Neither individual trials nor meta-analyses, reporting as they do on population effects, tell how to treat the individual patient.


10. Statin treatment for primary prevention of vascular disease: whom to treat? Cost-effectiveness analysis.