What ranking journals has in common with astrology
B. Brembs
Roars Trans., 2013-11-28. DOI: 10.13130/2282-5398/3378
The attention being paid to publications in high-ranking journals not only entices scientists to send their best work to these journals, it also attracts fraudsters as well as unexpected and eye-catching results, which all too often prove literally too good to be true (Steen, 2011a, 2011b; Fang and Casadevall, 2011; Cokol et al., 2007; Hamilton, 2011; Fang et al., 2012; Wager and Williams, 2011). A conservative interpretation of the currently available data suggests that the attraction for truly groundbreaking, solid research just barely cancels out the attraction for unreliable or fraudulent work. A less conservative approach suggests that the solid research is losing. How can the data be so at odds with our intuition? Of course, the research providing these data may itself be flawed. Selection bias, methodological errors and field-specific distortions may be dominating the studies currently available. However, in the absence of any evidence of such","PeriodicalId":315540,"journal":{"name":"Roars Trans.","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Roars Trans.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.13130/2282-5398/3378","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
As scientists, we all send our best work to Science or Nature – or at least we dream of one day making a discovery we deem worthy of sending there. So obvious does this hierarchy in our journal landscape appear to our intuition that when erroneous or fraudulent work is published in ‘high-ranking’ journals, we immediately wonder how this could have happened. Isn’t work published there the best there is? Vetted by professional editors before being sent out to the most critical and most competent experts this planet has to offer? How could our system fail us so badly? We are used to boring, ill-designed, even flawed research in the ‘low-ranking’ journals where we publish. Surely, these incidents in the ‘top’ journals are few and far between? It may come as a surprise to many scientists that the data speak a different language. They indicate that erroneous and fraudulent work may be more common in ‘top’ journals than anywhere else (Brembs et al., 2013). There is direct evidence that the methodology of the research published in these journals is at least not superior, and perhaps even inferior, to work published elsewhere (Brembs et al., 2013). There is some indirect evidence that the error-detection rate may be slightly higher in ‘top’ journals than in other journals (Brembs et al., 2013). Neither line of evidence alone is sufficient to explain why high-ranking journals retract so many more studies than lower-ranking journals, but together they raise a disturbing suspicion: attention to top journals shapes the content of our journals more than scientific rigor does. The attention paid to publications in high-ranking journals not only entices scientists to send their best work to these journals; it also attracts fraudsters, as well as unexpected and eye-catching results that all too often prove literally too good to be true (Steen, 2011a, 2011b; Fang and Casadevall, 2011; Cokol et al., 2007; Hamilton, 2011; Fang et al., 2012; Wager and Williams, 2011). 
A conservative interpretation of the currently available data suggests that the attraction for truly groundbreaking, solid research just barely cancels out the attraction for unreliable or fraudulent work. A less conservative approach suggests that the solid research is losing. How can the data be so at odds with our intuition? Of course, the research providing these data may itself be flawed. Selection bias, methodological errors and field-specific distortions may be dominating the studies currently available. However, in the absence of any evidence of such