The peer review system is vital to the scientific process, yet it is marred by shortcomings related to reliability and bias. Several recent initiatives pushed by science publishers are now trying to address these issues.

Peer review – science publishing’s quality control mechanism – is vital to the scientific process. It is a first step to ensure that published research results are true, by subjecting scientific papers to scrutiny by peers (i.e. active researchers with scientific expertise on the topic of the paper) [1]. But peer review can be unreliable, biased, and slow. Now several interesting initiatives, pushed forward by societies, publishers, and companies, aim to address these issues.

While peer review has existed since the early days of scientific publishing in the 17th century, it was not universally applied by journals until the 20th century. For example, of Einstein’s roughly 300 papers only one was sent out for review, and when Einstein found out about it he withdrew the paper from the journal [2]. Even Nature, a leading international journal, did not peer review its submissions as standard procedure until 1967 [3]. Nowadays, though, peer review is used by all quality scientific journals as a way to assess the technical validity, originality, and sometimes significance of a scientific paper.

But the system has been criticized for several shortcomings related to reliability, bias (e.g. gender, institutional, against the unconventional), and delays [4]. As Richard Horton, editor-in-chief of The Lancet, once said [5]: “We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.” Given these flaws, how can the system be improved?

One way in which journals are addressing the shortcomings of peer review is by revising the review process itself. Traditionally, most journals have applied so-called single-blind peer review, in which the reviewers know who the authors are, but remain anonymous to the authors. This approach has been criticized for a variety of reasons. Some journals therefore offer so-called double-blind review, whereby neither the reviewers nor the authors know who the other party is (Nature journals recently started to offer authors the option of a double-blind process [6]).

Another option explored by some journals is open review. This can mean that both the reviewer names and reports are published alongside the paper (examples of such journals include Biology Direct, BMJ Open, and F1000 Research) or that one or both elements are optional for authors or reviewers (journals offering such options include PeerJ and Frontiers). Making the reports public is particularly valuable because the editorial and review processes typically generate rich discussions between the editor, the authors, and the reviewers; such discussions are often of great interest to the wider research community.

Yet another alternative to the conventional single-blind review process is post-publication peer review [7]. This can occur in several ways:

  • Review by formally invited reviewers, after publication of the un-reviewed article (e.g. F1000 Research and journals published by Copernicus)
  • Review by volunteer reviewers, after publication of the un-reviewed article (e.g. Science Open, The Winnower)
  • Comments on blogs or third-party sites, independent of any formal peer review that may have already occurred on the article (e.g. PubPeer, PubMed Commons)

These open approaches can potentially help remove bias from the peer review process. But in themselves they do not address issues related to reliability. Editors typically assign two to three experts to review a paper, and it is not uncommon for review reports to contain conflicting assessments and recommendations, which not only makes the job of the editor difficult, but can be highly frustrating for authors. To address this issue, eLife has adopted an approach that consolidates the reviewer reports into a single, unified decision for the authors: all reviewers are brought together in an online session to discuss their recommendations and refine their feedback.

Which of these initiatives will stand the test of time is unclear. But they are all valuable attempts to identify and develop new ways to improve the quality of science and to ensure that published research results can be trusted.

[1] Peer review: The nuts and bolts – A guide for early career researchers (produced by Sense About Science).

[2] Einstein versus the Physical Review, D. Kennefick, Physics Today (Sep 2005).

[3] History of the journal Nature: Timeline.

[4] Peer review: a flawed process at the heart of science and journals, R. Smith, Journal of the Royal Society of Medicine 99, 178 (2006).

[5] Genetically modified food: consternation, confusion, and crack-up, R. Horton, The Medical Journal of Australia 172, 148 (2000).

[6] Nature journals offer double-blind review, Nature 518, 274 (2015).

[7] What is post-publication peer review? F1000 Research blog (July 2014).

Dan Csontos
