Peer review highly sensitive to poor refereeing, claim researchers


Just a small number of bad referees can significantly undermine the ability of the peer-review system to select the best scientific papers. That is according to a pair of complex systems researchers in Austria who have modelled an academic publishing system and showed that human foibles can have a dramatic effect on the quality of published science.
Scholarly peer review is the commonly accepted procedure for assessing the quality of research before it is published in academic journals. It relies on a community of experts within a narrow field of expertise to have both the knowledge and the time to provide comprehensive reviews of academic manuscripts.
While the concept of peer review is widely considered the most appropriate system for regulating scientific publications, it is not without its critics. Some feel that the system's reliance on impartiality and the lack of remuneration for referees mean that in practice the process is not as open as it should be. This may be particularly apparent when referees are asked to review more controversial ideas that could damage their own standing within the community if they give their approval.
Questioning referee competence
Stefan Thurner and Rudolf Hanel at the Medical University of Vienna set out to assess how the peer-review system might respond to incompetent refereeing. "I wanted to know what would be the effects on peer review as a selection mechanism if referees were not all good, but behaved according to different interests," says Thurner.
The researchers created a model of a generic specialist field where referees, selected at random, can fall into one of five categories. There are the "correct" who accept the good papers and reject the bad. There are the "altruists" and the "misanthropists", who accept or reject all papers respectively. Then there are the "rational", who reject papers that might draw attention away from their own work. And finally, there are the "random" who are not qualified to judge the quality of a paper because of incompetence or lack of time.
Within this model community, the quality of scientists is assumed to follow a Gaussian distribution, and each scientist produces one new paper every two time-units, with the paper's quality reflecting its author's ability. At every step in the model, each new paper is passed to two referees chosen at random from the community (self-review excluded), and each referee either accepts or rejects it. The paper is published if both referees approve it, and rejected if both turn it down. If the referees are split, the paper is accepted with a probability of 0.5.
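The model is simple enough to sketch in code. The following Python is a minimal reimplementation of the setup described above, not the authors' own code: it assumes a "good" paper is one whose quality exceeds the community mean, that a paper's quality equals its author's ability, and that "rational" referees reject any paper better than their own work — details the article does not fully pin down.

```python
import random
import statistics

def decision(ref_type, ref_quality, paper_quality, threshold):
    """One referee's verdict, according to their behavioural type."""
    if ref_type == "correct":        # accepts good papers, rejects bad ones
        return paper_quality >= threshold
    if ref_type == "altruist":       # accepts every paper
        return True
    if ref_type == "misanthropist":  # rejects every paper
        return False
    if ref_type == "rational":       # rejects work that outshines their own
        return paper_quality < ref_quality
    return random.random() < 0.5     # "random": unqualified, coin toss

def simulate(n_scientists=1000, steps=500, fractions=None, seed=1):
    """Run the toy community; return the mean quality of accepted papers."""
    rng = random.Random(seed)
    fractions = fractions or {"correct": 1.0}

    # Scientist ability is Gaussian; a paper inherits its author's ability.
    quality = [rng.gauss(0.0, 1.0) for _ in range(n_scientists)]
    threshold = statistics.mean(quality)  # assumed: "good" = above average

    # Assign fixed referee types according to the requested fractions.
    types = []
    for t, f in fractions.items():
        types += [t] * round(f * n_scientists)
    types = (types + ["correct"] * n_scientists)[:n_scientists]
    rng.shuffle(types)

    def pick_referee(author):
        # Uniform draw over everyone except the author (no self-review).
        r = rng.randrange(n_scientists - 1)
        return r + 1 if r >= author else r

    accepted = []
    for step in range(steps):
        for author in range(n_scientists):
            if (step + author) % 2:  # one new paper every two time-units
                continue
            paper = quality[author]
            r1 = pick_referee(author)
            r2 = pick_referee(author)
            while r2 == r1:
                r2 = pick_referee(author)
            votes = sum(decision(types[r], quality[r], paper, threshold)
                        for r in (r1, r2))
            # Both accept -> published; both reject -> binned; split -> p = 0.5.
            if votes == 2 or (votes == 1 and rng.random() < 0.5):
                accepted.append(paper)
    return statistics.mean(accepted)
```

With only "correct" referees, the mean quality of accepted papers sits well above the community average; mixing in rational and random referees pulls it back towards zero, mirroring the effect the researchers report.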
Big impact on quality
After running the model with 1000 scientists over 500 time-steps, Thurner and Hanel find that even a small presence of rational or random referees can significantly reduce the quality of published papers. When just 10% of referees do not behave "correctly", the quality of accepted papers drops by one standard deviation. If the fractions of rational, random and correct referees are roughly one-third each, the quality-selection aspect of peer review practically vanishes altogether.
"Our message is clear: if it cannot be guaranteed that the fraction of rational and random referees is confined to a very small number, the peer-review system will not perform much better than accepting papers by throwing (an unbiased!) coin," explain the researchers.
Daniel Kennefick, a cosmologist at the University of Arkansas with a special interest in sociology, believes that the study exposes the vulnerability of peer review when referees are not accountable for their decisions. "The system provides an opportunity for referees to try to avoid embarrassment for themselves, which is not the goal at all," he says.
Kennefick feels that the current system also encourages scientists to publish findings that may not offer much of an advance. "Many authors are nowadays determined to achieve publication for publication's sake, in an effort to secure an academic position and are not particularly swayed by the argument that it is in their own interests not to publish an incorrect article."
Don't forget the editors
But Tim Smith, senior publisher for New Journal of Physics at IOP Publishing, feels that the study overlooks the role of journal editors. "Peer review is certainly not flawless and alternatives to the current process will continue to be proposed. In relation to this study, however, one shouldn't ignore the role played by journal editors and boards in accounting for potential conflicts of interest, and preserving the integrity of the referee selection and decision-making processes," he says.
Michèle Lamont, a sociologist at Harvard University who analyses peer review in her 2009 book How Professors Think: Inside the Curious World of Academic Judgment, feels that we expect too much from peer review. Lamont believes that we should never hope for "uncorrupted" evaluation of new science, as all researchers are embedded in social and psychological networks. One way to improve the system, she feels, is to make assessment criteria more relevant to specific disciplines.
When asked to offer an alternative to the current peer-review system, Thurner argues that science would benefit from the creation of a "market for scientific work". He envisages a situation where journal editors and their "scouts" search preprint servers for the most innovative papers before approaching authors with an offer of publication. The best papers, he believes, would naturally be picked up by a number of editors, leaving it up to authors to choose their journal. "Papers that no-one wants to publish remain on the server and are open to everyone – but without the 'prestigious' quality stamp of a journal," Thurner explains.
This research is described in a paper submitted to the arXiv preprint server.

About the author
James Dacey is a reporter for