Episode 76: Why do we only tend to see positive results published?
In this episode, Dr Bahijja Raimi-Abraham answers a frequently posed question about scientific research: why do we only see positive results being published, and never failures or negative results?
Well, why do we?
Dr Bahijja: So, “why do we only see positive results published?” Well, it is mostly down to publication bias: the tendency of researchers, investigators, editors and others to review and accept manuscripts based on the strength of the findings. It occurs in published academic research, and in research more generally. This means positive (or simply more interesting) results have a better chance of being published. In practice, forming conclusions solely from publications with positive results may give a misleading view of a topic or subject.
This is not a new or recent issue. In fact, articles raising concerns about only statistically significant findings being published appeared as early as 1959. If only significant results are published, the literature can lead to false conclusions about a topic of research. Withholding “negative results” (concluding that something didn’t happen or isn’t significant) can therefore result in misinterpretation and an excess of apparently positive findings.
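The distorting effect of publishing only significant findings can be sketched with a toy simulation (all numbers below are invented for illustration, not taken from any real study): many studies estimate the same modest true effect, but only the estimates large enough to look “significant” make it into print, so the published average overstates the effect.

```python
# Toy simulation of publication bias (hypothetical numbers, illustration only).
# Many small studies estimate the same true effect; only studies whose estimate
# clears a "significance" bar get published, so the published average is inflated.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2      # the real underlying effect (made up)
NOISE_SD = 0.5         # sampling noise in each study's estimate (made up)
THRESHOLD = 0.8        # estimates above this count as "significant" and publishable

estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(10_000)]
published = [e for e in estimates if e > THRESHOLD]

print(f"true effect:            {TRUE_EFFECT}")
print(f"mean of ALL studies:    {statistics.mean(estimates):.3f}")
print(f"mean of PUBLISHED only: {statistics.mean(published):.3f}")
```

With these made-up numbers, the average across all studies sits near the true effect, while the average of the “published” subset is several times larger, which is exactly the misleading picture a reader of the literature would get.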
What are some examples?
Dr Bahijja: One example comes from my own field of interest, solid dispersions, a technique used to improve the solubility of poorly water-soluble drugs. There are many published studies highlighting how useful solid dispersions are (i.e. positive results) for improving solubility in in vitro studies. In vivo, however, what we find is a very poor correlation [with the in vitro studies].
Did you know? In vivo refers to research done with or within a whole living organism, while in vitro refers to research outside an organism (or with isolated components, such as a cell culture).
There is another contributing factor: the natural gravitation towards interesting new findings in research, which can feed this kind of publication bias.
Has there been any discussion in the scientific community about publication bias?
Dr Bahijja: In March 2019, there was actually an Editorial in Nature Human Behaviour entitled “The Importance of No Evidence”. The article highlighted how publication bias threatens science’s ability to self-correct, and argued that we need to change how null or negative studies are perceived (and offer incentives for their publication). At the end, Nature Human Behaviour noted that it welcomes “null or negative studies provided they address an important question of broad significance and are methodologically robust”.
What does publication bias mean for researchers?
Dr Bahijja: Publications are important at every career level, and they are a major component of KPIs (key performance indicators). So it is quite difficult: you need to publish papers to keep up with your KPIs, so there is essentially a requirement to publish. [Given the bias in publication, this “need to publish” what are often positive results can have big effects on researchers.]
One example could be a lab that has consistently expected the same results from an experiment and then, one day, takes on a new PhD candidate or researcher. This researcher might be told to continue with a particular experiment, and their findings might not fit the expectation; they might be told “that’s not quite right”, or that they made a mistake. Questions of research integrity can arise from these sorts of situations. This can also waste time, with people redoing experiments again and again in search of the expected result. It can also create false expectations for policy makers, especially in fields such as statistics or healthcare.
Why do negative results matter?
Dr Bahijja: Null results are often not shared because, in comparison to a positive result, their importance is hard to judge intuitively; a null result is harder to interpret. A positive result is, essentially, sufficient evidence to claim that something has occurred. A null or negative result means there was no, or insufficient, evidence to claim that the same something has occurred (essentially, a failure to show that anything happened). Reviewers are also often more critical of null or negative studies (according to a 1970s study of publication bias in the peer review system). In that study, even with the same listed methods, reviewers rated the methods of studies with positive results as roughly twice as reliable or preferable compared with null or negative studies.
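Part of why a null result is hard to interpret is statistical power: the same real effect can come back “significant” or “null” depending only on sample size. A minimal sketch of this, with an effect size, sample sizes, and cutoff all made up for illustration:

```python
# Hypothetical power sketch: a "null" result may just mean the study was too small.
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.3   # real difference between treated and control means (made up)
Z_CUTOFF = 1.96     # two-sided 5% significance cutoff for a simple z-test

def significant(n: int) -> bool:
    """One simulated study with n samples per group; crude z-test on the means."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (2 / n) ** 0.5          # standard error of the difference (known sd = 1)
    return abs(diff / se) > Z_CUTOFF

power = {}
for n in (20, 200):
    power[n] = sum(significant(n) for _ in range(2000)) / 2000
    print(f"n={n:>3} per group: {power[n]:.0%} of simulated studies reach significance")
```

With these invented numbers, the small studies mostly return null results even though the effect is real, which is exactly why a null result on its own is ambiguous: it may mean “no effect”, or simply “not enough data”.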
What other factors might impact publication bias?
Dr Bahijja: One big factor is the “file drawer” effect: researchers withholding their own negative results and never submitting them for publication. This could be for a variety of reasons, perhaps a lack of interest, or a continued effort to seek a positive result. There is also the publication stage itself, where some studies have shown that publications prefer positive results. […]
Interested in some of the articles mentioned in this episode? Read them below:
- Publication Bias: The Problem That Won’t Go Away, Dickersin and Min (1993). Annals of the New York Academy of Sciences.
- The Importance of No Evidence — Editorial Nature Human Behaviour (2019).
- When Scientists Find Nothing: The Value of Null Results — Inside Science (2020).
If you have any questions you’d like answered by Dr Bahijja, feel free to send them in via the website chat, or email MondayScience2020@gmail.com. You can also send us your questions as a voice message via https://anchor.fm/mondayscience/message. We’d love to hear your thoughts!