{"title":"Three common sources of error in peer review and how to minimize them","authors":"L. Aarssen","doi":"10.4033/IEE.2016.9.7.E","DOIUrl":null,"url":null,"abstract":"Researchers have an odd love-hate relationship with peer review. Most regard it as agonizing, but at the same time, necessary. Peer review is of course a good thing when it provides the value that is expected of it: weeding out junk papers, and improving the rest. Unfortunately, however, the former often doesn't work particularly well, and when the latter works, it usually happens only after a lot of wasted time, hoop-jumping and wading through absurdity. Perhaps we put up with this simply because the toil and pain of it all has been sustained for so long that it has come to define the culture of academia—one that believes that no contribution can be taken seriously unless it has suffered and endured the pain, and thus earned the coveted badge of 'peer-reviewed publication'. Here, I argue that the painful route to endorsement payoff from peer review, and its common failure to provide the value expected of it, are routinely exacerbated by three sources of error in the peer-review process, all of which can be minimized with some changes in practice. Some interesting data for context are provided from a recent analysis of peer-review results from the journal, Functional Ecology. Like many journals now, Functional Ecology invites submitting authors to include a list of suggested reviewers for their manuscripts, and editors commonly invite some of their reviewers from this list. Fox et al. (2016) found that author-preferred reviewers rated papers much more positively than did editor-selected reviewers, and papers reviewed by author-preferred reviewers were much more likely to be invited for revision than were papers reviewed by editor-selected reviewers. Few will be surprised by these findings, and there is good reason to be concerned of course that the expected value from peer review here has missed the mark. This failure is undoubtedly not unique to Functional Ecology. It is, I suspect, likely to be a systemic feature of the traditional single-blind peer-review model— where reviewers know who the authors are, but not vice versa. The critical question is: what is the signal of failure here?— the fact that author-preferred reviewers rated papers more positively?— or the fact that editorselected reviewers rated papers more negatively? Either one could be a product of peer-review error, and at least three explanations could be involved:","PeriodicalId":42755,"journal":{"name":"Ideas in Ecology and Evolution","volume":"9 1","pages":""},"PeriodicalIF":0.2000,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ideas in Ecology and Evolution","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4033/IEE.2016.9.7.E","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"EVOLUTIONARY BIOLOGY","Score":null,"Total":0}
Abstract
Researchers have an odd love-hate relationship with peer review. Most regard it as agonizing but, at the same time, necessary. Peer review is of course a good thing when it delivers the value expected of it: weeding out junk papers and improving the rest. Unfortunately, the former often doesn't work particularly well, and when the latter works, it usually does so only after a lot of wasted time, hoop-jumping, and wading through absurdity. Perhaps we put up with this simply because the toil and pain have been sustained for so long that they have come to define the culture of academia: one that believes no contribution can be taken seriously unless it has endured the pain and thus earned the coveted badge of 'peer-reviewed publication'. Here, I argue that the painful route to endorsement from peer review, and its common failure to deliver the value expected of it, are routinely exacerbated by three sources of error in the peer-review process, all of which can be minimized with some changes in practice.

Some instructive context comes from a recent analysis of peer-review outcomes at the journal Functional Ecology. Like many journals now, Functional Ecology invites submitting authors to suggest reviewers for their manuscripts, and editors commonly invite some reviewers from this list. Fox et al. (2016) found that author-preferred reviewers rated papers much more positively than editor-selected reviewers did, and that papers reviewed by author-preferred reviewers were much more likely to be invited for revision than papers reviewed by editor-selected reviewers. Few will be surprised by these findings, and there is good reason for concern that the expected value of peer review has missed the mark here. This failure is undoubtedly not unique to Functional Ecology; I suspect it is a systemic feature of the traditional single-blind peer-review model, in which reviewers know who the authors are but not vice versa.

The critical question is: what is the signal of failure here? Is it that author-preferred reviewers rated papers more positively, or that editor-selected reviewers rated papers more negatively? Either could be a product of peer-review error, and at least three explanations could be involved: