{"title":"The journal impact factor contest leads to erosion of quality of peer review","authors":"D. Wardle","doi":"10.4033/IEE.2014.7.17.C","DOIUrl":null,"url":null,"abstract":"In his commentary, Hochberg (2014) makes the case that the quality of scientific research is maintained and enhanced over time through a process akin to Darwinian evolution, and that high quality peer review is a necessary ingredient for this to occur. It is a good analogy. This is not to mean that peer review is infallible, and there are many cases in which it has helped impede publication of truly innovative work, including several studies that have subsequently delivered Nobel prizes (Campanario 2009). As such, it has been claimed that peer review ‘favors unadventurous nibblings at the margin of truth rather than quantum leaps’ (Lock 1985). Ecology is not immune from this problem; the ‘why is the world green’ paper by Hairston, Smith and Slobodkin (1960), arguably the most influential publication in trophic ecology in the past 60 years, was first rejected by Ecology (Schoener 1989). Nevertheless, peer review overall does more good than bad, and so long as that is the case, its contribution to the evolutionary process that Hochberg (2014) describes should be positive overall. Erosion of the quality of peer review, when combined with the shortcomings that the peer review process already has, will inevitably retard this evolutionary process. Hochberg (2014) also makes the case that overexploitation of reviewers (i.e., ‘tragedy of the reviewer commons’) is likely to reduce the effectiveness of reviewers which will then push overall scientific quality downwards. He then identifies three mechanisms that should counter this effect. However, I suggest that Hochberg overlooks an important issue contributing to reviewer exploitation, and that until this is resolved by the scientific community to some satisfaction, declining effectiveness of the peer-review process in maintaining scientific quality is inevitable. The issue in question relates to journal ‘impact factors’ (hereafter IFs) and the obsession that many journals and scientists have with them. This appears to have contributed to many ecological journals implementing ever-decreasing acceptance rates (now 10–20% for most of the main ecological journals), on the belief that being more selective and publishing only work that is likely to be generously cited will elevate their impact factor relative to that of competing journals. It has also contributed to scientists flooding high-IF journals with submissions. Inevitably many of these manuscripts will be submitted to three or four (or more) journals over time before publication, and may consume the time of many reviewers and editors in the process (not to mention greatly delaying communication of the science to those who might find it useful). This may not always be due to the authors aiming too high—given that the fate of any manuscript following submission to any highly selective journal is partly determined by stochastic factors (i.e., based on whose desk it happens to lands on), an author of even an excellent paper might need to submit to two or more leading journals before they happen to encounter the ‘right’ combination of reviewers and editors. This problem is a key contributor to the overexploitation of the reviewer pool or the ‘tragedy of the reviewer commons’ and, potentially, evolutionary decline in manuscript quality. 
I suggest two shifts that will be needed in the scientific community for this problem to be reversed: The first is for the scientific (and science publishing) community to abandon journal IFs. Impact factors are widely recognized as seriously flawed indicators of scientific merit for several reasons (Seglen 1997), including that high journal IFs are driven by a tiny proportion of manuscripts that are extremely heavily","PeriodicalId":42755,"journal":{"name":"Ideas in Ecology and Evolution","volume":null,"pages":null},"PeriodicalIF":0.2000,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ideas in Ecology and Evolution","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4033/IEE.2014.7.17.C","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"EVOLUTIONARY BIOLOGY","Score":null,"Total":0}
Abstract
In his commentary, Hochberg (2014) makes the case that the quality of scientific research is maintained and enhanced over time through a process akin to Darwinian evolution, and that high-quality peer review is a necessary ingredient for this to occur. It is a good analogy. This is not to say that peer review is infallible; there are many cases in which it has impeded publication of truly innovative work, including several studies that subsequently led to Nobel Prizes (Campanario 2009). As such, it has been claimed that peer review 'favors unadventurous nibblings at the margin of truth rather than quantum leaps' (Lock 1985). Ecology is not immune from this problem; the 'why is the world green' paper by Hairston, Smith and Slobodkin (1960), arguably the most influential publication in trophic ecology of the past 60 years, was first rejected by Ecology (Schoener 1989). Nevertheless, peer review overall does more good than harm, and so long as that is the case, its contribution to the evolutionary process that Hochberg (2014) describes should be positive overall. Erosion of the quality of peer review, combined with the shortcomings the process already has, will inevitably retard this evolutionary process.

Hochberg (2014) also makes the case that overexploitation of reviewers (i.e., the 'tragedy of the reviewer commons') is likely to reduce the effectiveness of reviewers, which will in turn push overall scientific quality downwards. He then identifies three mechanisms that should counter this effect. However, I suggest that Hochberg overlooks an important issue contributing to reviewer exploitation, and that until this issue is resolved to the reasonable satisfaction of the scientific community, declining effectiveness of the peer-review process in maintaining scientific quality is inevitable.

The issue in question relates to journal 'impact factors' (hereafter IFs) and the obsession that many journals and scientists have with them. This obsession appears to have contributed to many ecological journals adopting ever-decreasing acceptance rates (now 10–20% for most of the main ecological journals), in the belief that being more selective and publishing only work that is likely to be generously cited will raise their impact factor relative to that of competing journals. It has also contributed to scientists flooding high-IF journals with submissions. Inevitably, many of these manuscripts will be submitted to three or four (or more) journals before publication, consuming the time of many reviewers and editors in the process (not to mention greatly delaying communication of the science to those who might find it useful). This is not always because the authors aim too high: the fate of any manuscript submitted to a highly selective journal is partly determined by stochastic factors (i.e., whose desk it happens to land on), so the author of even an excellent paper may need to submit to two or more leading journals before encountering the 'right' combination of reviewers and editors. This problem is a key contributor to the overexploitation of the reviewer pool (the 'tragedy of the reviewer commons') and, potentially, to an evolutionary decline in manuscript quality.

I suggest that two shifts will be needed in the scientific community for this problem to be reversed. The first is for the scientific (and science publishing) community to abandon journal IFs.
Impact factors are widely recognized as seriously flawed indicators of scientific merit for several reasons (Seglen 1997), including that high journal IFs are driven by a tiny proportion of manuscripts that are extremely heavily cited.
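To make this skew argument concrete, here is a minimal numerical sketch (the citation counts are invented for illustration, not figures from Seglen 1997): an IF-style score is essentially a mean citation rate, and a mean is dominated by a handful of very highly cited articles even when the typical article is cited only once or twice.

```python
from statistics import mean, median

# Hypothetical citation counts for ten recent articles in one journal;
# a single heavily cited "hit" paper sits alongside nine typical ones.
citations = [0, 0, 1, 1, 1, 2, 2, 3, 3, 250]

print(mean(citations))    # 26.3 -> what an IF-style average reports
print(median(citations))  # 1.5  -> what a typical article actually receives
```

Removing the single outlier drops the mean to roughly 1.4, which is why a journal's IF says little about the citation performance of a typical article published in it.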