DOI: 10.12809/hkmj154631
COMMENTARY
The dark side of the moon
SW Choi, PhD; David MH Lam, MB, ChB; Michael G Irwin, MB, ChB, FHKAM (Anaesthesiology)
Department of Anaesthesiology, The University of Hong Kong, Queen Mary Hospital, Pokfulam, Hong Kong
Corresponding author: Dr SW Choi (htswchoi@hku.hk)
 
If anyone told you that they read their horoscope every day because once, a long time ago, their horoscope correctly predicted a job promotion, you would probably laugh and say that this was a fluke. In 2011, however, social psychologist Daryl Bem published a paper in the Journal of Personality and Social Psychology that purportedly showed evidence of precognition (the ability to tell the future) and premonition among university students.1 The study received a great deal of attention from both the scientific community and the media, with the result that several psychologists attempted to repeat the experiments, which had been described in great detail. When they failed to obtain the same results, their manuscripts were rejected by the same journal on the grounds that ‘this journal does not publish replications’.2 Although the replications were eventually published in PLoS One, this is a classic case of publication bias.3 This is probably not surprising, since sensational reports of being able to see into the future are undoubtedly more exciting than the mundane reality. The point here is that you would be wrong to think that this only happens in the field of parapsychology. It also happens in basic science.
 
A study conducted in 2012 by Begley and Ellis4 of 53 ‘landmark’ preclinical trials of cancer drugs found that 47 of them could not be reproduced, even though the investigators contacted some of the original laboratories to borrow the same antibodies and other reagents. The investigators suggested that this was because only exciting, positive results were published. Scientists may perform many studies, repeat an experiment many times, and cherry-pick the results that ‘tell the best story’, submitting only these positive results for publication.4 The first recommendation made by Begley and Ellis4 was that there must be more opportunity for scientists to present negative data and that preclinical investigators should be required to report all findings, regardless of outcome. At present, the whole system of publication and academic medicine—from journal editors to academic administrators who make decisions on contracts, pay rises, and tenure—provides little, if any, incentive for scientists to present negative findings. After all, has a Nobel Prize ever been awarded to anyone who showed that something did not work?
 
‘Cherry-picking’ positive results in basic science may seem tolerable—at least no lives (no human lives, at any rate) have been put at risk—but you would be wrong to think that such publication bias is limited to parapsychology and preclinical laboratory cancer research.
 
The pinnacle, and the most highly regarded form of evidence in evidence-based practice, is that drawn from systematic reviews. It is assumed that in gathering evidence for a systematic review, all trials pertaining to a certain drug, device, or method are available to the reviewer, whatever the outcome. In reality, this is seldom the case. For example, governments around the world have spent billions of dollars stockpiling Tamiflu (Roche Laboratories Inc, New Jersey, US; oseltamivir), a neuraminidase inhibitor that has been reported to reduce influenza-associated complications and shorten hospital stay.5 The Hong Kong SAR Government alone pledged HK$254 million to stockpile 20 million doses6 of Tamiflu in case of a pandemic, despite concerns about the efficacy of neuraminidase inhibitors in the treatment of influenza.
 
In 2012, when attempting to conduct a comprehensive review of the efficacy of Tamiflu, the Cochrane Collaboration discovered that a large number of studies, including data from 60% of the people who were involved in randomised, placebo-controlled phase III treatment trials of Tamiflu, had never been published.7 When they requested the complete trial data from Roche, the investigators were met with various excuses and legal technicalities, all of which have been documented in PLoS Medicine.8 The Cochrane team were left with no choice but to investigate Roche’s clinical study reports, the documents typically submitted to regulators for drug licensing. The reviewers found significant discrepancies between published trial data and the more complete, but unpublished, records. While unpublished trial reports mentioned serious adverse events, one of the most cited medical journal publications made no mention of such effects.7 Contrary to Roche’s claims that Tamiflu can reduce influenza complications and shorten hospital stay, the Cochrane team, on the basis of clinical study reports as well as published studies, concluded that although Tamiflu did reduce the time to first alleviation of symptoms by a mean of 21 hours, it did not reduce the number of people who went on to require hospitalisation.7
 
Selective reporting is nothing new and most scientists are aware of this phenomenon.9 In 2004, the International Committee of Medical Journal Editors announced that its member journals would not publish any studies that had not been previously registered. This was intended to encourage all investigators to register their trials at inception so that registered protocols could be compared with published trials.10 The journals, however, have reneged on this agreement, and studies of trial registration and publication have since shown that more than 50% of published trials were not previously registered. What is more, in many of the trials that were registered, there were discrepancies between the registered and published primary outcomes, with the discrepancy favouring a statistically significant primary outcome in over 90% of cases.11,12
 
Much of the unpublished trial data can be accessed through clinical trial registers. Clinicians are advised to refer to this information for a more complete picture of any drug they are likely to use, rather than relying on medical journal publications alone. Guidelines for evidence-based medical practice should also be written with materials and data accessed from government regulatory bodies as well as from clinical trial registers.
 
It is evident that our current system of potentially biased reporting in peer-reviewed journals has to be addressed. Without the full story we might erroneously conclude that new (and most likely patented) drugs are better than older treatment modalities and have fewer side-effects.
 
Just as the synchronous rotation of the moon on its own axis and around the Earth prevents us ever from seeing its ‘dark side’, the coordinated interplay between researchers, journal editors, pharmaceutical companies, and clinicians makes it difficult for us to be fully informed of the whole picture when it comes to pharmaceutical efficacy. Taking action against publication bias of only positive, new, and exciting data is not simply the domain of disgruntled scientists with logbooks full of negative results. It should concern each and every one of us. How can we practise evidence-based medicine if we do not demand access to all the evidence?
 
References
1. Bem DJ. Feeling the future: experimental evidence for anomalous retroactive influences on cognition and affect. J Pers Soc Psychol 2011;100:407-25. CrossRef
2. French C. Precognition studies and the curse of the failed replications. Available from: http://www.theguardian.com/science/2012/mar/15/precognition-studies-curse-failed-replications. Accessed Jun 2015.
3. Ritchie SJ, Wiseman R, French CC. Failing the future: three unsuccessful attempts to replicate Bem’s ‘retroactive facilitation of recall’ effect. PLoS One 2012;7:e33423. CrossRef
4. Begley CG, Ellis LM. Drug development: raise standards for preclinical cancer research. Nature 2012;483:531-3. CrossRef
5. Kumar A. Early versus late oseltamivir treatment in severely ill patients with 2009 pandemic influenza A (H1N1): speed is life. J Antimicrob Chemother 2011;66:959-63. CrossRef
6. Who’s telling truth about Tamiflu after latest study of trial data. South China Morning Post 2014 Apr 13.
7. Jefferson T, Jones MA, Doshi P, et al. Neuraminidase inhibitors for preventing and treating influenza in healthy adults and children. Cochrane Database Syst Rev 2014;4:CD008965. CrossRef
8. Doshi P, Jefferson T, Del Mar C. The imperative to share clinical study reports: recommendations from the Tamiflu experience. PLoS Med 2012;9:e1001201. CrossRef
9. Dubben HH, Beck-Bornholdt HP. Systematic review of publication bias in studies on publication bias. BMJ 2005;331:433-4. CrossRef
10. De Angelis C, Drazen JM, Frizelle FA, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. Lancet 2004;364:911-2. CrossRef
11. Killeen S, Sourallous P, Hunter IA, Hartley JE, Grady HL. Registration rates, adequacy of registration, and a comparison of registered and published primary outcomes in randomized controlled trials published in surgery journals. Ann Surg 2014;259:193-6. CrossRef
12. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 2009;302:977-84. Erratum in: JAMA 2009;302:1532. CrossRef