
Research integrity and peer review: Latest Publications

Publisher Correction: Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.
Pub Date: 2023-07-10 DOI: 10.1186/s41073-023-00136-2
Mohammad Hosseini, Serge P J M Horbach
{"title":"Publisher Correction: Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.","authors":"Mohammad Hosseini, Serge P J M Horbach","doi":"10.1186/s41073-023-00136-2","DOIUrl":"https://doi.org/10.1186/s41073-023-00136-2","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10334596/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10170319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Checklist to assess Trustworthiness in RAndomised Controlled Trials (TRACT checklist): concept proposal and pilot.
IF 7.2 Q1 ETHICS Pub Date: 2023-06-20 DOI: 10.1186/s41073-023-00130-8
Ben W Mol, Shimona Lai, Ayesha Rahim, Esmée M Bordewijk, Rui Wang, Rik van Eekelen, Lyle C Gurrin, Jim G Thornton, Madelon van Wely, Wentao Li

Objectives: To propose a checklist that can be used to assess trustworthiness of randomized controlled trials (RCTs).

Design: A screening tool was developed using the four-stage approach proposed by Moher et al. This included defining the scope, reviewing the evidence base, suggesting a list of items from piloting, and holding a consensus meeting. The initial checklist was set up by a core group who had been involved in the assessment of problematic RCTs for several years. We piloted it with a consensus panel of stakeholders, including health professionals, reviewers, journal editors, policymakers, researchers, and evidence-synthesis specialists. Each member was asked to score three articles with the checklist, and the results were then discussed in consensus meetings.

Outcome: The Trustworthiness in RAndomised Controlled Trials (TRACT) checklist includes 19 items organised into seven domains that are applicable to every RCT: 1) Governance, 2) Author Group, 3) Plausibility of Intervention Usage, 4) Timeframe, 5) Drop-out Rates, 6) Baseline Characteristics, and 7) Outcomes. Each item can be answered as no concerns, some concerns/no information, or major concerns. If a study is found to have a majority of items rated at the major-concern level, then editors, reviewers or evidence synthesizers should consider a more thorough investigation, including assessment of the original individual participant data.

Conclusions: The TRACT checklist is the first checklist developed specifically to detect trustworthiness issues in RCTs. It might help editors, publishers and researchers to screen for such issues in submitted or published RCTs in a transparent and replicable manner.
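
A minimal sketch of how the abstract's decision rule could be operationalized, assuming a simple majority threshold; the item names, response strings, and follow-up actions below are illustrative, not taken from the published checklist:

```python
from collections import Counter

# The three response levels described in the abstract.
LEVELS = ("no concerns", "some concerns/no information", "major concerns")

def assess_tract(item_ratings):
    """Apply the decision rule described in the abstract: a majority of the
    19 items rated 'major concerns' warrants a more thorough investigation."""
    for item, level in item_ratings.items():
        if level not in LEVELS:
            raise ValueError(f"unknown rating for {item!r}: {level!r}")
    n_major = Counter(item_ratings.values())["major concerns"]
    if n_major > len(item_ratings) / 2:
        return "majority of major concerns: investigate further (e.g. request IPD)"
    return "no majority of major concerns: proceed with standard assessment"

# Hypothetical ratings: 12 of 19 items at the 'major concerns' level.
ratings = {f"item_{i}": "major concerns" for i in range(1, 13)}
ratings.update({f"item_{i}": "no concerns" for i in range(13, 20)})
print(assess_tract(ratings))
```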

{"title":"Checklist to assess Trustworthiness in RAndomised Controlled Trials (TRACT checklist): concept proposal and pilot.","authors":"Ben W Mol, Shimona Lai, Ayesha Rahim, Esmée M Bordewijk, Rui Wang, Rik van Eekelen, Lyle C Gurrin, Jim G Thornton, Madelon van Wely, Wentao Li","doi":"10.1186/s41073-023-00130-8","DOIUrl":"10.1186/s41073-023-00130-8","url":null,"abstract":"<p><strong>Objectives: </strong>To propose a checklist that can be used to assess trustworthiness of randomized controlled trials (RCTs).</p><p><strong>Design: </strong>A screening tool was developed using the four-stage approach proposed by Moher et al. This included defining the scope, reviewing the evidence base, suggesting a list of items from piloting, and holding a consensus meeting. The initial checklist was set-up by a core group who had been involved in the assessment of problematic RCTs for several years. We piloted this in a consensus panel of several stakeholders, including health professionals, reviewers, journal editors, policymakers, researchers, and evidence-synthesis specialists. Each member was asked to score three articles with the checklist and the results were then discussed in consensus meetings.</p><p><strong>Outcome: </strong>The Trustworthiness in RAndomised Clinical Trials (TRACT) checklist includes 19 items organised into seven domains that are applicable to every RCT: 1) Governance, 2) Author Group, 3) Plausibility of Intervention Usage, 4) Timeframe, 5) Drop-out Rates, 6) Baseline Characteristics, and 7) Outcomes. Each item can be answered as either no concerns, some concerns/no information, or major concerns. If a study is assessed and found to have a majority of items rated at a major concern level, then editors, reviewers or evidence synthesizers should consider a more thorough investigation, including assessment of original individual participant data.</p><p><strong>Conclusions: </strong>The TRACT checklist is the first checklist developed specifically to detect trustworthiness issues in RCTs. It might help editors, publishers and researchers to screen for such issues in submitted or published RCTs in a transparent and replicable manner.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2023-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280869/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10066264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Responsible research practices could be more strongly endorsed by Australian university codes of research conduct.
Pub Date: 2023-06-06 DOI: 10.1186/s41073-023-00129-1
Yi Kai Ong, Kay L Double, Lisa Bero, Joanna Diong

Background: This study aimed to investigate how strongly Australian university codes of research conduct endorse responsible research practices.

Methods: Codes of research conduct from 25 Australian universities active in health and medical research were obtained from public websites, and audited against 19 questions to assess how strongly they (1) defined research integrity, research quality, and research misconduct, (2) required research to be approved by an appropriate ethics committee, (3) endorsed 9 responsible research practices, and (4) discouraged 5 questionable research practices.

Results: Overall, a median of 10 (IQR 9 to 12) of the 19 practices covered in the questions were mentioned, weakly endorsed, or strongly endorsed. Five to 8 of the 9 responsible research practices were mentioned, weakly endorsed, or strongly endorsed, and 3 questionable research practices were discouraged. Results are stratified by Group of Eight (n = 8) and other (n = 17) universities. Specifically, (1) 6 (75%) Group of Eight and 11 (65%) other codes of research conduct defined research integrity, 4 (50%) and 8 (47%) defined research quality, and 7 (88%) and 16 (94%) defined research misconduct. (2) All codes required ethics approval for human and animal research. (3) All codes required conflicts of interest to be declared, but there was variability in how strongly other research practices were endorsed. The most commonly endorsed practices were ensuring researcher training in research integrity [8 (100%) and 16 (94%)] and making study data publicly available [6 (75%) and 12 (71%)]. The least commonly endorsed practices were making analysis code publicly available [0 (0%) and 0 (0%)] and registering analysis protocols [0 (0%) and 1 (6%)]. (4) Most codes discouraged fabricating data [5 (63%) and 15 (88%)], selectively deleting or modifying data [5 (63%) and 15 (88%)], and selective reporting of results [3 (38%) and 15 (88%)]. No codes discouraged p-hacking or hypothesising after results are known.

Conclusions: Responsible research practices could be more strongly endorsed by Australian university codes of research conduct. Our findings may not be generalisable to smaller universities, or those not active in health and medical research.
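
For readers unfamiliar with the summary statistics quoted above, here is a brief sketch of how such audit tallies are computed; the per-university counts are hypothetical placeholders, not the study's dataset:

```python
from statistics import median, quantiles

# Hypothetical per-university counts of the 19 audited practices that were
# mentioned or endorsed (the paper reports a median of 10, IQR 9 to 12).
audit = {
    "uni_a": 9, "uni_b": 12, "uni_c": 10, "uni_d": 11,
    "uni_e": 8, "uni_f": 10, "uni_g": 13, "uni_h": 9,
}
counts = sorted(audit.values())
q1, _, q3 = quantiles(counts, n=4)  # quartile cut points
print(f"median {median(counts)} (IQR {q1:.0f} to {q3:.0f}) of 19 practices")

# Stratified percentages of the kind reported, e.g. "6 (75%) Group of Eight
# codes defined research integrity" (figures taken from the abstract).
go8_defined, go8_total = 6, 8
print(f"{go8_defined} ({go8_defined / go8_total:.0%}) Go8 codes defined research integrity")
```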

{"title":"Responsible research practices could be more strongly endorsed by Australian university codes of research conduct.","authors":"Yi Kai Ong,&nbsp;Kay L Double,&nbsp;Lisa Bero,&nbsp;Joanna Diong","doi":"10.1186/s41073-023-00129-1","DOIUrl":"https://doi.org/10.1186/s41073-023-00129-1","url":null,"abstract":"<p><strong>Background: </strong>This study aimed to investigate how strongly Australian university codes of research conduct endorse responsible research practices.</p><p><strong>Methods: </strong>Codes of research conduct from 25 Australian universities active in health and medical research were obtained from public websites, and audited against 19 questions to assess how strongly they (1) defined research integrity, research quality, and research misconduct, (2) required research to be approved by an appropriate ethics committee, (3) endorsed 9 responsible research practices, and (4) discouraged 5 questionable research practices.</p><p><strong>Results: </strong>Overall, a median of 10 (IQR 9 to 12) of 19 practices covered in the questions were mentioned, weakly endorsed, or strongly endorsed. Five to 8 of 9 responsible research practices were mentioned, weakly, or strongly endorsed, and 3 questionable research practices were discouraged. Results are stratified by Group of Eight (n = 8) and other (n = 17) universities. Specifically, (1) 6 (75%) Group of Eight and 11 (65%) other codes of research conduct defined research integrity, 4 (50%) and 8 (47%) defined research quality, and 7 (88%) and 16 (94%) defined research misconduct. (2) All codes required ethics approval for human and animal research. (3) All codes required conflicts of interest to be declared, but there was variability in how strongly other research practices were endorsed. The most commonly endorsed practices were ensuring researcher training in research integrity [8 (100%) and 16 (94%)] and making study data publicly available [6 (75%) and 12 (71%)]. The least commonly endorsed practices were making analysis code publicly available [0 (0%) and 0 (0%)] and registering analysis protocols [0 (0%) and 1 (6%)]. (4) Most codes discouraged fabricating data [5 (63%) and 15 (88%)], selectively deleting or modifying data [5 (63%) and 15 (88%)], and selective reporting of results [3 (38%) and 15 (88%)]. No codes discouraged p-hacking or hypothesising after results are known.</p><p><strong>Conclusions: </strong>Responsible research practices could be more strongly endorsed by Australian university codes of research conduct. Our findings may not be generalisable to smaller universities, or those not active in health and medical research.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10242962/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9591647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.
IF 7.2 Q1 ETHICS Pub Date: 2023-05-18 DOI: 10.1186/s41073-023-00133-5
Mohammad Hosseini, Serge P J M Horbach

Background: The emergence of systems based on large language models (LLMs) such as OpenAI's ChatGPT has created a range of discussions in scholarly circles. Since LLMs generate grammatically correct and mostly relevant (yet sometimes outright wrong, irrelevant or biased) outputs in response to provided prompts, using them in various writing tasks, including writing peer review reports, could improve productivity. Given the significance of peer reviews in the existing scholarly publication landscape, exploring the challenges and opportunities of using LLMs in peer review seems urgent. Now that the first scholarly outputs have been generated with LLMs, we anticipate that peer review reports, too, will be generated with the help of these systems. However, there are currently no guidelines on how these systems should be used in review tasks.

Methods: To investigate the potential impact of using LLMs on the peer review process, we used five core themes within discussions about peer review suggested by Tennant and Ross-Hellauer. These include 1) reviewers' role, 2) editors' role, 3) functions and quality of peer reviews, 4) reproducibility, and 5) the social and epistemic functions of peer reviews. We provide a small-scale exploration of ChatGPT's performance regarding identified issues.

Results: LLMs have the potential to substantially alter the role of both peer reviewers and editors. By supporting both actors in efficiently writing constructive reports or decision letters, LLMs can facilitate higher-quality review and address issues of review shortage. However, the fundamental opacity of LLMs' training data, inner workings, data handling, and development processes raises concerns about potential biases, confidentiality, and the reproducibility of review reports. Additionally, as editorial work has a prominent function in defining and shaping epistemic communities, as well as negotiating normative frameworks within such communities, partly outsourcing this work to LLMs might have unforeseen consequences for social and epistemic relations within academia. Regarding performance, we identified major enhancements in a short period and expect LLMs to continue developing.

Conclusions: We believe that LLMs are likely to have a profound impact on academia and scholarly communication. While potentially beneficial to the scholarly communication system, many uncertainties remain and their use is not without risks. In particular, concerns about the amplification of existing biases and inequalities in access to appropriate infrastructure warrant further attention. For the moment, we recommend that if LLMs are used to write scholarly reviews and decision letters, reviewers and editors should disclose their use and accept full responsibility for data security and confidentiality, and their reports' accuracy, tone, reasoning and originality.

{"title":"Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.","authors":"Mohammad Hosseini, Serge P J M Horbach","doi":"10.1186/s41073-023-00133-5","DOIUrl":"10.1186/s41073-023-00133-5","url":null,"abstract":"<p><strong>Background: </strong>The emergence of systems based on large language models (LLMs) such as OpenAI's ChatGPT has created a range of discussions in scholarly circles. Since LLMs generate grammatically correct and mostly relevant (yet sometimes outright wrong, irrelevant or biased) outputs in response to provided prompts, using them in various writing tasks including writing peer review reports could result in improved productivity. Given the significance of peer reviews in the existing scholarly publication landscape, exploring challenges and opportunities of using LLMs in peer review seems urgent. After the generation of the first scholarly outputs with LLMs, we anticipate that peer review reports too would be generated with the help of these systems. However, there are currently no guidelines on how these systems should be used in review tasks.</p><p><strong>Methods: </strong>To investigate the potential impact of using LLMs on the peer review process, we used five core themes within discussions about peer review suggested by Tennant and Ross-Hellauer. These include 1) reviewers' role, 2) editors' role, 3) functions and quality of peer reviews, 4) reproducibility, and 5) the social and epistemic functions of peer reviews. We provide a small-scale exploration of ChatGPT's performance regarding identified issues.</p><p><strong>Results: </strong>LLMs have the potential to substantially alter the role of both peer reviewers and editors. Through supporting both actors in efficiently writing constructive reports or decision letters, LLMs can facilitate higher quality review and address issues of review shortage. However, the fundamental opacity of LLMs' training data, inner workings, data handling, and development processes raise concerns about potential biases, confidentiality and the reproducibility of review reports. Additionally, as editorial work has a prominent function in defining and shaping epistemic communities, as well as negotiating normative frameworks within such communities, partly outsourcing this work to LLMs might have unforeseen consequences for social and epistemic relations within academia. Regarding performance, we identified major enhancements in a short period and expect LLMs to continue developing.</p><p><strong>Conclusions: </strong>We believe that LLMs are likely to have a profound impact on academia and scholarly communication. While potentially beneficial to the scholarly communication system, many uncertainties remain and their use is not without risks. In particular, concerns about the amplification of existing biases and inequalities in access to appropriate infrastructure warrant further attention. 
For the moment, we recommend that if LLMs are used to write scholarly reviews and decision letters, reviewers and editors should disclose their use and accept full responsibility for data security and confidentiality, and their reports' accuracy, tone, reasoning and originality.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10191680/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9849534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Gender differences in peer reviewed grant applications, awards, and amounts: a systematic review and meta-analysis.
Pub Date: 2023-05-03 DOI: 10.1186/s41073-023-00127-3
Karen B Schmaling, Stephen A Gallo

Background: Differential participation and success in grant applications may contribute to women's lesser representation in the sciences. This study's objective was to conduct a systematic review and meta-analysis to address the question of gender differences in grant award acceptance rates and reapplication award acceptance rates (potential bias in peer review outcomes) and other grant outcomes.

Methods: The review was registered on PROSPERO (CRD42021232153) and conducted in accordance with PRISMA 2020 standards. We searched Academic Search Complete, PubMed, and Web of Science for the timeframe 1 January 2005 to 31 December 2020, and forward and backward citations. Studies were included that reported data, by gender, on any of the following: grant applications or reapplications, awards, award amounts, award acceptance rates, or reapplication award acceptance rates. Studies that duplicated data reported in another study were excluded. Gender differences were investigated by meta-analyses and generalized linear mixed models. Doi plots and LFK indices were used to assess reporting bias.

Results: The searches identified 199 records, of which 13 were eligible. An additional 42 sources from forward and backward searches were eligible, for a total of 55 sources with data on one or more outcomes. The data from these studies ranged from 1975 to 2020: 49 sources were published papers and six were funders' reports (the latter were identified by forwards and backwards searches). Twenty-nine studies reported person-level data, 25 reported application-level data, and one study reported both: person-level data were used in analyses. Award acceptance rates were 1% higher for men, which was not significantly different from women (95% CI 3% more for men to 1% more for women, k = 36, n = 303,795 awards and 1,277,442 applications, I² = 84%). Reapplication award acceptance rates were significantly higher for men (9%, 95% CI 18% to 1%, k = 7, n = 7319 applications and 3324 awards, I² = 63%). Women received smaller award amounts (g = -2.28, 95% CI -4.92 to 0.36, k = 13, n = 212,935, I² = 100%).

Conclusions: The proportions of women that applied for grants, re-applied, accepted awards, and accepted awards after reapplication were less than the proportion of eligible women. However, the award acceptance rate was similar for women and men, implying no gender bias in this peer reviewed grant outcome. Women received smaller awards and fewer awards after re-applying, which may negatively affect continued scientific productivity. Greater transparency is needed to monitor and verify these data globally.
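
As background for readers, random-effects pooling of the kind reported above (pooled estimates, 95% CIs, and I²) can be illustrated with a DerSimonian-Laird estimator. The sketch below is a generic implementation with made-up study data, not the authors' analysis code (they used meta-analyses and generalized linear mixed models):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with DerSimonian-Laird random effects."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                          # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_star = 1.0 / (variances + tau2)            # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = (max(0.0, (q - df) / q) * 100) if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical per-study differences in award acceptance rates (men - women)
# with sampling variances; illustrative numbers, not the paper's data.
effects = [0.02, -0.01, 0.03, 0.00, 0.015]
variances = [0.0001, 0.0002, 0.00015, 0.00012, 0.00018]
pooled, ci, i2 = dersimonian_laird(effects, variances)
print(f"pooled difference {pooled:.3f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}, I^2 = {i2:.0f}%")
```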

{"title":"Gender differences in peer reviewed grant applications, awards, and amounts: a systematic review and meta-analysis.","authors":"Karen B Schmaling,&nbsp;Stephen A Gallo","doi":"10.1186/s41073-023-00127-3","DOIUrl":"https://doi.org/10.1186/s41073-023-00127-3","url":null,"abstract":"<p><strong>Background: </strong>Differential participation and success in grant applications may contribute to women's lesser representation in the sciences. This study's objective was to conduct a systematic review and meta-analysis to address the question of gender differences in grant award acceptance rates and reapplication award acceptance rates (potential bias in peer review outcomes) and other grant outcomes.</p><p><strong>Methods: </strong>The review was registered on PROSPERO (CRD42021232153) and conducted in accordance with PRISMA 2020 standards. We searched Academic Search Complete, PubMed, and Web of Science for the timeframe 1 January 2005 to 31 December 2020, and forward and backward citations. Studies were included that reported data, by gender, on any of the following: grant applications or reapplications, awards, award amounts, award acceptance rates, or reapplication award acceptance rates. Studies that duplicated data reported in another study were excluded. Gender differences were investigated by meta-analyses and generalized linear mixed models. Doi plots and LFK indices were used to assess reporting bias.</p><p><strong>Results: </strong>The searches identified 199 records, of which 13 were eligible. An additional 42 sources from forward and backward searches were eligible, for a total of 55 sources with data on one or more outcomes. The data from these studies ranged from 1975 to 2020: 49 sources were published papers and six were funders' reports (the latter were identified by forwards and backwards searches). Twenty-nine studies reported person-level data, 25 reported application-level data, and one study reported both: person-level data were used in analyses. Award acceptance rates were 1% higher for men, which was not significantly different from women (95% CI 3% more for men to 1% more for women, k = 36, n = 303,795 awards and 1,277,442 applications, I<sup>2</sup> = 84%). Reapplication award acceptance rates were significantly higher for men (9%, 95% CI 18% to 1%, k = 7, n = 7319 applications and 3324 awards, I<sup>2</sup> = 63%). Women received smaller award amounts (g = -2.28, 95% CI -4.92 to 0.36, k = 13, n = 212,935, I<sup>2</sup> = 100%).</p><p><strong>Conclusions: </strong>The proportions of women that applied for grants, re-applied, accepted awards, and accepted awards after reapplication were less than the proportion of eligible women. However, the award acceptance rate was similar for women and men, implying no gender bias in this peer reviewed grant outcome. Women received smaller awards and fewer awards after re-applying, which may negatively affect continued scientific productivity. 
Greater transparency is needed to monitor and verify these data globally.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10155348/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9762431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Scientific sinkhole: estimating the cost of peer review based on survey data with snowball sampling.
Pub Date: 2023-04-24 DOI: 10.1186/s41073-023-00128-2
Allana G LeBlanc, Joel D Barnes, Travis J Saunders, Mark S Tremblay, Jean-Philippe Chaput

Background: There are a variety of costs associated with publication of scientific findings. The purpose of this work was to estimate the cost of peer review in scientific publishing per reviewer, per year and for the entire scientific community.

Methods: An internet-based, self-report, cross-sectional survey, live between June 28, 2021 and August 2, 2021, was used. Participants were recruited via snowball sampling. No restrictions were placed on geographic location or field of study. Respondents who were asked to act as a peer reviewer for at least one manuscript submitted to a scientific journal in 2020 were eligible. The primary outcome measure was the cost of peer review per person, per year (calculated as wage-cost × the number of initial reviews and re-reviews per year). The secondary outcome was the global cost of peer review (calculated as the number of peer-reviewed papers in Scopus × the median wage-cost of initial review and re-review).

Results: A total of 354 participants completed at least one question of the survey, and the information necessary to calculate the cost of peer review was available for 308 participants from 33 countries (44% from Canada). The cost of peer review was estimated at US$1,272 per person, per year (US$1,015 for initial review and US$256 for re-review), or US$1.1-1.7 billion for the scientific community per year. The global cost of peer review was estimated at US$6 billion in 2020 when relying on the Dimensions database and taking into account reviewed-but-rejected manuscripts.

Conclusions: Peer review represents an important financial component of scientific publishing. Our results may not represent all countries or fields of study, but they are consistent with previous estimates and provide additional context from peer reviewers themselves. Researchers and scientists have long provided peer review as a contribution to the scientific community. Recognizing the importance of peer review, institutions should acknowledge these costs in job descriptions, performance measurement, promotion packages, and funding applications. Journals should develop methods to compensate reviewers for their time and improve transparency while maintaining the integrity of the peer-review process.
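
The two cost formulas in the Methods are simple enough to show directly. In the sketch below the wage-cost and time figures are illustrative placeholders, not the survey's underlying data:

```python
# Assumed inputs for illustration only (not the survey's wage data).
HOURLY_WAGE_COST = 48.0                    # reviewer wage-cost, USD/hour
HOURS_INITIAL, HOURS_REREVIEW = 6.0, 1.5   # assumed time per review type

def annual_cost_per_reviewer(n_initial, n_rereviews):
    """Primary outcome: wage-cost x reviews performed per year."""
    return HOURLY_WAGE_COST * (n_initial * HOURS_INITIAL + n_rereviews * HOURS_REREVIEW)

def global_cost(n_reviewed_papers, median_cost_per_paper):
    """Secondary outcome: indexed peer-reviewed papers x median review cost."""
    return n_reviewed_papers * median_cost_per_paper

print(f"US${annual_cost_per_reviewer(3, 4):,.0f} per reviewer per year")
print(f"US${global_cost(3_000_000, 450.0) / 1e9:.1f} billion globally")
```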

{"title":"Scientific sinkhole: estimating the cost of peer review based on survey data with snowball sampling.","authors":"Allana G LeBlanc,&nbsp;Joel D Barnes,&nbsp;Travis J Saunders,&nbsp;Mark S Tremblay,&nbsp;Jean-Philippe Chaput","doi":"10.1186/s41073-023-00128-2","DOIUrl":"https://doi.org/10.1186/s41073-023-00128-2","url":null,"abstract":"<p><strong>Background: </strong>There are a variety of costs associated with publication of scientific findings. The purpose of this work was to estimate the cost of peer review in scientific publishing per reviewer, per year and for the entire scientific community.</p><p><strong>Methods: </strong>Internet-based self-report, cross-sectional survey, live between June 28, 2021 and August 2, 2021 was used. Participants were recruited via snowball sampling. No restrictions were placed on geographic location or field of study. Respondents who were asked to act as a peer-reviewer for at least one manuscript submitted to a scientific journal in 2020 were eligible. The primary outcome measure was the cost of peer review per person, per year (calculated as wage-cost x number of initial reviews and number of re-reviews per year). The secondary outcome was the cost of peer review globally (calculated as the number of peer-reviewed papers in Scopus x median wage-cost of initial review and re-review).</p><p><strong>Results: </strong>A total of 354 participants completed at least one question of the survey, and information necessary to calculate the cost of peer-review was available for 308 participants from 33 countries (44% from Canada). The cost of peer review was estimated at $US1,272 per person, per year ($US1,015 for initial review and $US256 for re-review), or US$1.1-1.7 billion for the scientific community per year. The global cost of peer-review was estimated at US$6 billion in 2020 when relying on the Dimensions database and taking into account reviewed-but-rejected manuscripts.</p><p><strong>Conclusions: </strong>Peer review represents an important financial piece of scientific publishing. Our results may not represent all countries or fields of study, but are consistent with previous estimates and provide additional context from peer reviewers themselves. Researchers and scientists have long provided peer review as a contribution to the scientific community. Recognizing the importance of peer-review, institutions should acknowledge these costs in job descriptions, performance measurement, promotion packages, and funding applications. Journals should develop methods to compensate reviewers for their time and improve transparency while maintaining the integrity of the peer-review process.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10122980/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9776362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Investigating and preventing scientific misconduct using Benford's Law.
IF 7.2 Q1 ETHICS Pub Date: 2023-04-11 DOI: 10.1186/s41073-022-00126-w
Gregory M Eckhartt, Graeme D Ruxton

Integrity, and trust in that integrity, are fundamental to academic research. However, procedures for monitoring the trustworthiness of research, and for investigating cases where concerns about possible data fraud have been raised, are not well established. Here we suggest a practical approach for investigating work suspected of fraudulent data manipulation using Benford's Law. This should be of value to individual peer reviewers as well as academic institutions and journals. In this, we draw inspiration from well-established practices of financial auditing. We provide a synthesis of the literature on tests of adherence to Benford's Law, culminating in advice to apply a single initial test to the digits in each position of numerical strings within a dataset. We also recommend further tests which may prove useful in the event that specific hypotheses regarding the nature of data manipulation can be justified. Importantly, our advice differs from the most common current implementations of tests of Benford's Law. Furthermore, we apply the approach to previously published data, highlighting the efficacy of these tests in detecting known irregularities. Finally, we discuss the results of these tests, with reference to their strengths and limitations.
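
To make the underlying idea concrete: under Benford's Law the leading significant digit d occurs with probability log10(1 + 1/d). The sketch below runs a first-digit chi-square goodness-of-fit test, a common Benford check but not necessarily the single per-position test the authors recommend; the lognormal sample is a stand-in for real data:

```python
import numpy as np
from scipy.stats import chisquare

def benford_first_digit_test(values):
    """Chi-square goodness-of-fit test of leading digits against Benford's Law.

    Expected first-digit frequencies follow P(d) = log10(1 + 1/d), d = 1..9.
    """
    # Leading significant digit of each non-zero value.
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0]
    observed = np.bincount(digits, minlength=10)[1:10]
    expected = np.log10(1 + 1 / np.arange(1, 10)) * observed.sum()
    return chisquare(f_obs=observed, f_exp=expected)

# Illustrative only: multiplicative (lognormal) data tend to follow Benford's Law.
rng = np.random.default_rng(1)
sample = rng.lognormal(mean=10, sigma=3, size=5000)
print(benford_first_digit_test(sample))
```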

{"title":"Investigating and preventing scientific misconduct using Benford's Law.","authors":"Gregory M Eckhartt, Graeme D Ruxton","doi":"10.1186/s41073-022-00126-w","DOIUrl":"10.1186/s41073-022-00126-w","url":null,"abstract":"<p><p>Integrity and trust in that integrity are fundamental to academic research. However, procedures for monitoring the trustworthiness of research, and for investigating cases where concern about possible data fraud have been raised are not well established. Here we suggest a practical approach for the investigation of work suspected of fraudulent data manipulation using Benford's Law. This should be of value to both individual peer-reviewers and academic institutions and journals. In this, we draw inspiration from well-established practices of financial auditing. We provide synthesis of the literature on tests of adherence to Benford's Law, culminating in advice of a single initial test for digits in each position of numerical strings within a dataset. We also recommend further tests which may prove useful in the event that specific hypotheses regarding the nature of data manipulation can be justified. Importantly, our advice differs from the most common current implementations of tests of Benford's Law. Furthermore, we apply the approach to previously-published data, highlighting the efficacy of these tests in detecting known irregularities. Finally, we discuss the results of these tests, with reference to their strengths and limitations.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2023-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10088595/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9290217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reducing the Inadvertent Spread of Retracted Science: recommendations from the RISRS report.
IF 7.2 Q1 ETHICS Pub Date: 2022-09-19 DOI: 10.1186/s41073-022-00125-x
Jodi Schneider, Nathan D Woods, Randi Proescholdt
Background: Retraction is a mechanism for alerting readers to unreliable material and other problems in the published scientific and scholarly record. Retracted publications generally remain visible and searchable, but the intention of retraction is to mark them as "removed" from the citable record of scholarship. However, in practice, some retracted articles continue to be treated by researchers and the public as valid content, as they are often unaware of the retraction. Research over the past decade has identified a number of factors contributing to the unintentional spread of retracted research. The goal of the Reducing the Inadvertent Spread of Retracted Science: Shaping a Research and Implementation Agenda (RISRS) project was to develop an actionable agenda for reducing the inadvertent spread of retracted science. This included identifying how retraction status could be more thoroughly disseminated, and determining what actions are feasible and relevant for particular stakeholders who play a role in the distribution of knowledge.

Methods: These recommendations were developed as part of a year-long process that included a scoping review of empirical literature and successive rounds of stakeholder consultation, culminating in a three-part online workshop that brought together a diverse body of 65 stakeholders in October-November 2020 to engage in collaborative problem solving and dialogue. Stakeholders held roles such as publishers, editors, researchers, librarians, standards developers, funding program officers, and technologists, and worked for institutions such as universities, governmental agencies, funding organizations, publishing houses, libraries, standards organizations, and technology providers. Workshop discussions were seeded by materials derived from stakeholder interviews (N = 47) and short original discussion pieces contributed by stakeholders. The online workshop resulted in a set of recommendations to address the complexities of retracted research throughout the scholarly communications ecosystem.

Results: The RISRS recommendations are: (1) Develop a systematic cross-industry approach to ensure the public availability of consistent, standardized, interoperable, and timely information about retractions; (2) Recommend a taxonomy of retraction categories/classifications and corresponding retraction metadata that can be adopted by all stakeholders; (3) Develop best practices for coordinating the retraction process to enable timely, fair, unbiased outcomes; and (4) Educate stakeholders about pre- and post-publication stewardship, including retraction and correction of the scholarly record.

Conclusions: Our stakeholder engagement study led to 4 recommendations to address inadvertent citation of retracted research, and to the formation of a working group to develop the Communication of Retractions, Removals, and Expressions of Concern (CORREC) Recommended Practice. Further work is needed to determine how well retractions are currently documented, how retraction of code and datasets affects related publications, and whether retraction metadata propagates or fails to propagate. Together, this work should help ensure that retracted papers are not cited unknowingly, and that, in public forums outside science, retracted papers are not treated as valid science.
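
Recommendation 2 calls for a shared taxonomy of retraction categories and corresponding metadata. As a purely hypothetical illustration of what an interoperable retraction record might look like, here is a minimal sketch; the field names and category list are assumptions, not the CORREC working group's schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical category taxonomy, loosely echoing the working group's name
# (retractions, removals, and expressions of concern); not an official list.
RETRACTION_CATEGORIES = ("retraction", "removal", "expression of concern", "correction")

@dataclass
class RetractionNotice:
    article_doi: str
    notice_doi: str
    category: str                            # one of RETRACTION_CATEGORIES
    date_issued: date
    reasons: list[str] = field(default_factory=list)
    issuer: str = ""                         # e.g. journal, publisher, institution

    def __post_init__(self):
        if self.category not in RETRACTION_CATEGORIES:
            raise ValueError(f"unknown category: {self.category!r}")

notice = RetractionNotice(
    article_doi="10.1000/example.123",
    notice_doi="10.1000/example.123.retraction",
    category="retraction",
    date_issued=date(2022, 9, 19),
    reasons=["data fabrication"],
    issuer="journal",
)
print(notice)
```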
{"title":"Reducing the Inadvertent Spread of Retracted Science: recommendations from the RISRS report.","authors":"Jodi Schneider, Nathan D Woods, Randi Proescholdt","doi":"10.1186/s41073-022-00125-x","DOIUrl":"10.1186/s41073-022-00125-x","url":null,"abstract":"&lt;p&gt;&lt;strong&gt;Background: &lt;/strong&gt;Retraction is a mechanism for alerting readers to unreliable material and other problems in the published scientific and scholarly record. Retracted publications generally remain visible and searchable, but the intention of retraction is to mark them as \"removed\" from the citable record of scholarship. However, in practice, some retracted articles continue to be treated by researchers and the public as valid content as they are often unaware of the retraction. Research over the past decade has identified a number of factors contributing to the unintentional spread of retracted research. The goal of the Reducing the Inadvertent Spread of Retracted Science: Shaping a Research and Implementation Agenda (RISRS) project was to develop an actionable agenda for reducing the inadvertent spread of retracted science. This included identifying how retraction status could be more thoroughly disseminated, and determining what actions are feasible and relevant for particular stakeholders who play a role in the distribution of knowledge.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Methods: &lt;/strong&gt;These recommendations were developed as part of a year-long process that included a scoping review of empirical literature and successive rounds of stakeholder consultation, culminating in a three-part online workshop that brought together a diverse body of 65 stakeholders in October-November 2020 to engage in collaborative problem solving and dialogue. Stakeholders held roles such as publishers, editors, researchers, librarians, standards developers, funding program officers, and technologists and worked for institutions such as universities, governmental agencies, funding organizations, publishing houses, libraries, standards organizations, and technology providers. Workshop discussions were seeded by materials derived from stakeholder interviews (N = 47) and short original discussion pieces contributed by stakeholders. 
The online workshop resulted in a set of recommendations to address the complexities of retracted research throughout the scholarly communications ecosystem.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Results: &lt;/strong&gt;The RISRS recommendations are: (1) Develop a systematic cross-industry approach to ensure the public availability of consistent, standardized, interoperable, and timely information about retractions; (2) Recommend a taxonomy of retraction categories/classifications and corresponding retraction metadata that can be adopted by all stakeholders; (3) Develop best practices for coordinating the retraction process to enable timely, fair, unbiased outcomes; and (4) Educate stakeholders about pre- and post-publication stewardship, including retraction and correction of the scholarly record.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Conclusions: &lt;/strong&gt;Our stakeholder engagement study led to 4 recommendations to address inadvertent citation of retracted research, and formation of a working group to develop the Communication of Retractions, Removals, and Expressions of Concern (CORREC) Recommende","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2022-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9483880/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40371377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Correction: Characteristics of 'mega' peer-reviewers.
Pub Date: 2022-07-13 DOI: 10.1186/s41073-022-00124-y
Danielle B Rice, Ba' Pham, Justin Presseau, Andrea C Tricco, David Moher
{"title":"Correction: Characteristics of 'mega' peer-reviewers.","authors":"Danielle B Rice,&nbsp;Ba' Pham,&nbsp;Justin Presseau,&nbsp;Andrea C Tricco,&nbsp;David Moher","doi":"10.1186/s41073-022-00124-y","DOIUrl":"https://doi.org/10.1186/s41073-022-00124-y","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9281154/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40503523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving equity, diversity, and inclusion in academia.
Pub Date: 2022-07-04 DOI: 10.1186/s41073-022-00123-z
Omar Dewidar, Nour Elmestekawy, Vivian Welch

A growing body of evidence demonstrates the benefits of equity, diversity, and inclusion (EDI) for academic and organizational excellence. In turn, some editors have stated their desire to improve the EDI of their journals and of the wider scientific community. The Royal Society of Chemistry established a minimum set of requirements aimed at improving EDI in scholarly publishing. Additionally, several resources have been reported to have the potential to improve EDI, but their effectiveness and feasibility are yet to be determined. In this commentary we suggest six approaches, based on the Royal Society of Chemistry's set of requirements, that journals could implement to improve EDI. They are: (1) adopt a journal EDI statement with clear, actionable steps to achieve it; (2) promote the use of inclusive and bias-free language; (3) appoint a journal EDI director or lead; (4) establish an EDI mentoring approach; (5) monitor adherence to EDI principles; and (6) publish reports on EDI actions and achievements. We also provide examples of journals that have implemented some of these strategies, and discuss the roles of peer reviewers, authors, researchers, academic institutes, and funders in improving EDI.

{"title":"Improving equity, diversity, and inclusion in academia.","authors":"Omar Dewidar,&nbsp;Nour Elmestekawy,&nbsp;Vivian Welch","doi":"10.1186/s41073-022-00123-z","DOIUrl":"https://doi.org/10.1186/s41073-022-00123-z","url":null,"abstract":"<p><p>There are growing bodies of evidence demonstrating the benefits of equity, diversity, and inclusion (EDI) on academic and organizational excellence. In turn, some editors have stated their desire to improve the EDI of their journals and of the wider scientific community. The Royal Society of Chemistry established a minimum set of requirements aimed at improving EDI in scholarly publishing. Additionally, several resources were reported to have the potential to improve EDI, but their effectiveness and feasibility are yet to be determined. In this commentary we suggest six approaches, based on the Royal Society of Chemistry set of requirements, that journals could implement to improve EDI. They are: (1) adopt a journal EDI statement with clear, actionable steps to achieve it; (2) promote the use of inclusive and bias-free language; (3) appoint a journal's EDI director or lead; (4) establish a EDI mentoring approach; (5) monitor adherence to EDI principles; and (6) publish reports on EDI actions and achievements. We also provide examples of journals that have implemented some of these strategies, and discuss the roles of peer reviewers, authors, researchers, academic institutes, and funders in improving EDI.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9251949/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40470381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23