Unconscious cognitive biases risk hampering scientific progress, but there are ways to tackle this issue.

There are widespread concerns about the reproducibility and reliability of research [1,2], and societies and publishers have launched many initiatives to address them [3,4]. However, while these are valuable efforts to improve the reproducibility, transparency, integrity, and openness of science at a systemic level [5], there are deeper issues related to cognitive biases that scientists themselves need to tackle [6].

Cognitive biases are the flip side of humans’ ability to draw quick conclusions from incomplete information. This aptitude has been vital in our evolution, but it can hamper the scientific process. Our innate tendency to jump to conclusions can lead us astray: to find patterns in noise, to see evidence that supports our hypothesis, and to ignore evidence that refutes it. The problem is compounded, on the one hand, by the advent of ‘big data’ science and access to large multivariate data sets and, on the other, by inadequate tools or knowledge to extract valid conclusions from them using rigorous statistical methods.
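To make the ‘patterns in noise’ point concrete, here is a minimal sketch (in Python, with simulated data – all names and numbers are illustrative, not drawn from any study) of how screening a large multivariate data set without correcting for multiple comparisons yields apparently ‘significant’ associations from pure noise.

    # Minimal sketch: many unrelated variables tested against a random outcome.
    # Every variable is pure noise, yet some associations reach p < 0.05 by chance.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_samples, n_variables = 50, 200

    outcome = rng.normal(size=n_samples)                     # random outcome, no real signal
    predictors = rng.normal(size=(n_samples, n_variables))   # unrelated random predictors

    p_values = [stats.pearsonr(predictors[:, j], outcome)[1] for j in range(n_variables)]
    false_hits = sum(p < 0.05 for p in p_values)

    print(f"{false_hits} of {n_variables} pure-noise variables reach p < 0.05")
    # On average about 5% (roughly 10 of 200) cross the threshold by chance alone,
    # which is why multiple-comparison corrections and pre-specified hypotheses matter.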

A news feature recently published in Nature [6] investigated cognitive biases in research and identified four common types:

  1. Hypothesis myopia – whereby researchers are so focused on collecting evidence to support one main hypothesis that they neither consider evidence that refutes it nor consider alternative explanations.
  2. The Texas sharpshooter – alluding to a fable in which an inept marksman fires bullets at random and after the fact draws a bull’s-eye around the tightest cluster of hits, this bias leads researchers to focus on the most interesting or agreeable results rather than looking at the big picture. One manifestation is so-called p-hacking, whereby researchers exploit degrees of freedom in their research until they reach p < 0.05; another is HARKing – Hypothesizing After the Results are Known (a minimal simulation of p-hacking follows this list).
  3. Asymmetric attention – whereby researchers give expected results little additional scrutiny, whereas non-intuitive results are rigorously checked.
  4. Just-so storytelling – after-the-fact adaptation of a narrative to fit the obtained results.
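To illustrate the p-hacking mentioned under the Texas sharpshooter, here is a minimal sketch of one common researcher degree of freedom – optional stopping, where a test is re-run as data accumulate and collection stops as soon as p < 0.05. The setup and numbers are illustrative assumptions, not taken from the Nature piece [6].

    # Minimal sketch of "optional stopping", one common form of p-hacking:
    # both groups are drawn from the SAME distribution (no true effect), but the
    # t-test is re-run after every batch of new observations and data collection
    # stops the moment p < 0.05.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def optional_stopping(max_n=200, batch=10, alpha=0.05):
        """Return True if a 'significant' result appears before max_n per group."""
        a, b = [], []
        while len(a) < max_n:
            a.extend(rng.normal(size=batch))   # group A: pure noise
            b.extend(rng.normal(size=batch))   # group B: pure noise
            if stats.ttest_ind(a, b).pvalue < alpha:
                return True                    # stop early and "report" the finding
        return False

    false_positive_rate = np.mean([optional_stopping() for _ in range(1000)])
    print(f"False-positive rate with optional stopping: {false_positive_rate:.2f}")
    # Far above the nominal 5%, which is why stopping rules and analysis plans
    # should be fixed before the data are seen.

Fixing the sample size and analysis plan in advance, or correcting for the repeated looks at the data, removes this particular degree of freedom.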

Several ways to tackle these biases are also identified in this piece [6]:

  1. Competing hypotheses – identifying as many hypotheses and explanations for the observations as possible, and designing experiments to test all of them in order to rule out all but one.
  2. Transparency & open science – sharing methods, data, computer code, and results publicly enables others to reproduce and reanalyze research data and verify whether the same conclusions are obtained. Another interesting effort is to enable researchers to subject research plans to the scrutiny of peers even before an experiment is done – this could reduce any unconscious temptation to alter experiments and analysis during the research.
  3. Keep your competitors close – collaborating with adversaries may be a good debiasing approach, particularly when tackling controversial topics; instead of waiting for the debate to ensue post-publication, researchers can team up with competing groups that challenge their methods, analysis, and views. This could reduce hypothesis myopia, asymmetric attention, or just-so storytelling, simply because rival teams can cancel out each other’s biases.
  4. Blind data analysis – using computer code to generate alternative data sets that are analyzed in tandem with the ‘real’ data.

The last approach has previously been used by particle physicists and cosmologists [7]. It keeps researchers in the dark about the real results while all of the analytical decisions are made; they work blind while selecting data and debugging the analysis, minimizing the risk of fooling themselves. As Robert MacCoun and Saul Perlmutter explain in a commentary [7]: “Many motivations distort what inferences we draw from data. These include the desire to support one’s theory, to refute one’s competitors, to be first to report a phenomenon, or simply to avoid publishing ‘odd’ results. Such biases can be conscious or unconscious.” For these reasons, the authors argue that blind analysis should be used more broadly in empirical research.
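As a concrete illustration, here is a minimal sketch of one blinding strategy of the kind described in the commentary [7]: a hidden random offset is added to the quantity of interest, so that all analytical choices are made without knowledge of the real answer, and the offset is removed only once the analysis is frozen. The variable names and numbers are illustrative, not the specific procedure used in any particular experiment.

    # Minimal sketch of blind analysis via a hidden offset.
    import numpy as np

    rng = np.random.default_rng(42)

    measurements = rng.normal(loc=3.2, scale=0.5, size=100)  # stand-in "real" data

    # Blinding step: a secret offset known to the code, not to the analyst.
    secret_offset = rng.uniform(-1.0, 1.0)
    blinded = measurements + secret_offset

    # ... the analyst selects data, removes outliers, and debugs the full
    # pipeline using `blinded` only ...
    blinded_estimate = np.mean(blinded)

    # Unblinding happens only after every analysis decision is locked in.
    final_estimate = blinded_estimate - secret_offset
    print(f"Blinded estimate:   {blinded_estimate:.3f}")
    print(f"Unblinded estimate: {final_estimate:.3f}")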

Many of these considerations echo what Richard Feynman once said during his commencement address at Caltech [8], where he called for: “…a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty… if you’re doing an experiment, you should report everything that you think might make it invalid – not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked – to make sure the other fellow can tell they have been eliminated. Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can – if you know anything at all wrong, or possibly wrong – to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. … When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition. In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgement in one particular direction or another.”

[1] Why most published research findings are false, J. P. A. Ioannidis, PLoS Medicine 2, e124 (2005).

[2] Over half of psychology studies fail reproducibility test, M. Baker, Nature (Aug 2015).

[3] Reproducibility and reliability of biomedical research: improving research practice, report from a symposium organized by the Academy of Medical Sciences, the Biotechnology and Biological Sciences Research Council, the Medical Research Council, and the Wellcome Trust (Apr 2015).

[4] Principles and Guidelines for Reporting Preclinical Research, National Institutes of Health report resulting from a joint workshop with Nature and Science on the issue of reproducibility (Jun 2014).

[5] Improving research transparency, integrity, and openness, Elevate Scientific blog (Aug 2015).

[6] How scientists fool themselves – and how they can stop, R. Nuzzo, Nature 526, 182 (2015).

[7] Blind analysis: Hide results to seek the truth, R. MacCoun & S. Perlmutter, Nature 526, 187 (2015).

[8] R. Feynman’s commencement address at Caltech (1974).

Dan Csontos
