There are widespread concerns regarding the reproducibility of scientific results. Two Perspectives and an editorial published in Science now outline steps to promote the transparency, reproducibility, and quality of science.

One of the key requirements for a scientific finding to be accepted as fact is that it can be reproduced. Yet far from all scientific results can be reproduced [1-3], which raises the question of how to increase the reliability and quality of science. Two Perspectives [4,5] and an editorial [6] published last week in Science outline steps to promote the transparency, reproducibility, and quality of science.

In his editorial [6], Stuart Buck (VP of research integrity at the Laura and John Arnold Foundation, Houston, TX) argues that, while most scientists aspire to greater transparency, they are deterred from taking concrete action by the extra work and resources required to, e.g., make data available (which would enable others to more readily check the results produced with the data). Thus, Buck says, one way to raise the quality of science is to create free, open-source tools that enable scientists to incorporate transparency into their workflow. Examples of existing tools listed by Buck include the Open Science Framework, iPython, and the Galaxy Project.
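To make "incorporating transparency into the workflow" concrete, here is a minimal, hypothetical sketch in Python; it is not taken from Buck's editorial or from any of the tools he lists, and the script and the provenance.json file name are illustrative assumptions. It records a small provenance file with the software environment and a cryptographic hash of each data file an analysis reads, so that anyone trying to check the results can verify they are starting from the same inputs.

```python
# Hypothetical illustration (not from the Science pieces): record the exact
# environment and data files used in an analysis, so results can be checked.
import hashlib
import json
import platform
import sys
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hash of a file, identifying the exact data used."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def write_provenance(data_files, out_path="provenance.json"):
    """Write a provenance record (environment + data hashes) next to the outputs."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "python": sys.version,
        "platform": platform.platform(),
        "data": {str(p): sha256_of(Path(p)) for p in data_files},
    }
    Path(out_path).write_text(json.dumps(record, indent=2))
    return record


if __name__ == "__main__":
    # Usage: python provenance.py data1.csv data2.csv
    write_provenance(sys.argv[1:])
```

Publishing such a record alongside a paper is a tiny step compared with full data sharing, but it lets others confirm that a re-analysis uses the same inputs and software environment as the original.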

But as the authors of the two Perspectives argue [4,5], many more changes need to be implemented to improve research integrity, and the responsibility for doing so falls on a wide range of stakeholders. One key issue highlighted in both Perspectives is the lack of incentives in the current academic reward system to reproduce scientific studies; the focus is on innovation rather than verification. Another incentive the authors would like to see changed is the emphasis on the quantity, rather than the quality, of publications.

To that end, the group assembled by the U.S. National Academy of Sciences to examine ways to remove current disincentives to high standards of integrity in science recommends that [4]:

  • “…incentives should be changed so that scholars are rewarded for publishing well rather than often. In tenure cases at universities, as in grant submissions, the candidate should be evaluated on the importance of a select set of work, instead of using the number of publications or impact ratio of a journal as a surrogate for quality.”
  • the peer-review process should be reformed to increase the clarity and quality of editorial responses, e.g., using solutions such as the one adopted by eLife.
  • more neutral language should be adopted, e.g., replacing “retraction” (which carries a negative connotation) with “withdrawal”, and “conflict of interest” with “disclosure of relevant relationships”, to encourage researchers to be transparent.
  • “Universities should insist that their faculties and students are schooled in the ethics of research, their publications feature neither honorific nor ghost authors, their public information offices avoid hype in publicizing findings, and suspect research is promptly and thoroughly investigated.”

In the second Perspective [5], the Transparency and Openness Promotion (TOP) Committee presents eight standards that journal publishers can adopt to increase the transparency and openness of scientific research. These standards relate to: 1) citation standards, 2) data transparency, 3) analytic methods (code) transparency, 4) research materials transparency, 5) design and analysis transparency, 6) preregistration of studies, 7) preregistration of analysis plans, and 8) replication.

The authors further divide the guidelines into three tiers (levels 1-3), with the standards becoming increasingly stringent (and thus requiring greater changes in workflow for the various stakeholders) at each level. These guidelines provide publishers (as well as grant agencies and universities) with a roadmap for gradually strengthening journal policies, thereby improving the transparency, openness, and reproducibility of published research results and, by extension, public trust in science, and science itself.

[1] Why most published research findings are false, J. Ioannidis, PLoS Medicine 2, e124 (2005).

[2] Drug development: Raise standards for preclinical cancer research, Begley and Ellis, Nature 483, 531 (2012).

[3] Trouble at the lab, The Economist, 19 October 2013.

[4] Self-correction in science at work, Alberts et al., Science 348, 1420 (2015).

[5] Promoting an open research culture, Nosek et al., Science 348, 1422 (2015).

[6] Solving reproducibility, S. Buck, Science 348, 1403 (2015).

Dan Csontos
