
Research integrity and peer review: latest publications

Publisher Correction: Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.
Q1 ETHICS Pub Date: 2023-07-10 DOI: 10.1186/s41073-023-00136-2
Mohammad Hosseini, Serge P J M Horbach
Citations: 1
Checklist to assess Trustworthiness in RAndomised Controlled Trials (TRACT checklist): concept proposal and pilot.
IF 7.2 Q1 ETHICS Pub Date: 2023-06-20 DOI: 10.1186/s41073-023-00130-8
Ben W Mol, Shimona Lai, Ayesha Rahim, Esmée M Bordewijk, Rui Wang, Rik van Eekelen, Lyle C Gurrin, Jim G Thornton, Madelon van Wely, Wentao Li

Objectives: To propose a checklist that can be used to assess trustworthiness of randomized controlled trials (RCTs).

Design: A screening tool was developed using the four-stage approach proposed by Moher et al. This included defining the scope, reviewing the evidence base, suggesting a list of items from piloting, and holding a consensus meeting. The initial checklist was set up by a core group who had been involved in the assessment of problematic RCTs for several years. We piloted this in a consensus panel of several stakeholders, including health professionals, reviewers, journal editors, policymakers, researchers, and evidence-synthesis specialists. Each member was asked to score three articles with the checklist, and the results were then discussed in consensus meetings.

Outcome: The Trustworthiness in RAndomised Clinical Trials (TRACT) checklist includes 19 items organised into seven domains that are applicable to every RCT: 1) Governance, 2) Author Group, 3) Plausibility of Intervention Usage, 4) Timeframe, 5) Drop-out Rates, 6) Baseline Characteristics, and 7) Outcomes. Each item can be answered as either no concerns, some concerns/no information, or major concerns. If a study is assessed and found to have a majority of items rated at a major concern level, then editors, reviewers or evidence synthesizers should consider a more thorough investigation, including assessment of original individual participant data.
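
The scoring logic described in this Outcome is simple enough to capture in a short sketch. The following is a minimal illustration, assuming a plain three-level rating per item and the majority-of-major-concerns trigger mentioned in the abstract; the item wording is not reproduced here, and the `Rating` enum and `needs_further_investigation` helper are hypothetical names, not code from the paper.

```python
from enum import Enum

class Rating(Enum):
    NO_CONCERNS = 0
    SOME_CONCERNS_OR_NO_INFO = 1
    MAJOR_CONCERNS = 2

# The seven TRACT domains named in the abstract; item texts are not
# reproduced here, so items are identified only informally.
DOMAINS = [
    "Governance", "Author Group", "Plausibility of Intervention Usage",
    "Timeframe", "Drop-out Rates", "Baseline Characteristics", "Outcomes",
]

def needs_further_investigation(ratings: list[Rating]) -> bool:
    """Flag an RCT when a majority of the 19 item ratings are 'major concerns',
    the threshold the abstract suggests for a more thorough investigation."""
    if len(ratings) != 19:
        raise ValueError(f"TRACT has 19 items; got {len(ratings)} ratings")
    major = sum(r is Rating.MAJOR_CONCERNS for r in ratings)
    return major > len(ratings) / 2

# Hypothetical example: 12 of 19 items rated 'major concerns'.
example = [Rating.MAJOR_CONCERNS] * 12 + [Rating.NO_CONCERNS] * 7
print(needs_further_investigation(example))  # True
```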

Conclusions: The TRACT checklist is the first checklist developed specifically to detect trustworthiness issues in RCTs. It might help editors, publishers and researchers to screen for such issues in submitted or published RCTs in a transparent and replicable manner.

Citations: 0
Responsible research practices could be more strongly endorsed by Australian university codes of research conduct.
Q1 ETHICS Pub Date: 2023-06-06 DOI: 10.1186/s41073-023-00129-1
Yi Kai Ong, Kay L Double, Lisa Bero, Joanna Diong

Background: This study aimed to investigate how strongly Australian university codes of research conduct endorse responsible research practices.

Methods: Codes of research conduct from 25 Australian universities active in health and medical research were obtained from public websites, and audited against 19 questions to assess how strongly they (1) defined research integrity, research quality, and research misconduct, (2) required research to be approved by an appropriate ethics committee, (3) endorsed 9 responsible research practices, and (4) discouraged 5 questionable research practices.

Results: Overall, a median of 10 (IQR 9 to 12) of the 19 practices covered in the questions were mentioned, weakly endorsed, or strongly endorsed. Five to 8 of the 9 responsible research practices were mentioned, weakly endorsed, or strongly endorsed, and 3 questionable research practices were discouraged. Results are stratified by Group of Eight (n = 8) and other (n = 17) universities. Specifically, (1) 6 (75%) Group of Eight and 11 (65%) other codes of research conduct defined research integrity, 4 (50%) and 8 (47%) defined research quality, and 7 (88%) and 16 (94%) defined research misconduct. (2) All codes required ethics approval for human and animal research. (3) All codes required conflicts of interest to be declared, but there was variability in how strongly other research practices were endorsed. The most commonly endorsed practices were ensuring researcher training in research integrity [8 (100%) and 16 (94%)] and making study data publicly available [6 (75%) and 12 (71%)]. The least commonly endorsed practices were making analysis code publicly available [0 (0%) and 0 (0%)] and registering analysis protocols [0 (0%) and 1 (6%)]. (4) Most codes discouraged fabricating data [5 (63%) and 15 (88%)], selectively deleting or modifying data [5 (63%) and 15 (88%)], and selective reporting of results [3 (38%) and 15 (88%)]. No codes discouraged p-hacking or hypothesising after results are known.

Conclusions: Responsible research practices could be more strongly endorsed by Australian university codes of research conduct. Our findings may not be generalisable to smaller universities, or those not active in health and medical research.

Citations: 1
Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.
IF 7.2 Q1 ETHICS Pub Date: 2023-05-18 DOI: 10.1186/s41073-023-00133-5
Mohammad Hosseini, Serge P J M Horbach

Background: The emergence of systems based on large language models (LLMs) such as OpenAI's ChatGPT has created a range of discussions in scholarly circles. Since LLMs generate grammatically correct and mostly relevant (yet sometimes outright wrong, irrelevant or biased) outputs in response to provided prompts, using them in various writing tasks including writing peer review reports could result in improved productivity. Given the significance of peer reviews in the existing scholarly publication landscape, exploring challenges and opportunities of using LLMs in peer review seems urgent. After the generation of the first scholarly outputs with LLMs, we anticipate that peer review reports too would be generated with the help of these systems. However, there are currently no guidelines on how these systems should be used in review tasks.

Methods: To investigate the potential impact of using LLMs on the peer review process, we used five core themes within discussions about peer review suggested by Tennant and Ross-Hellauer. These include 1) reviewers' role, 2) editors' role, 3) functions and quality of peer reviews, 4) reproducibility, and 5) the social and epistemic functions of peer reviews. We provide a small-scale exploration of ChatGPT's performance regarding identified issues.

Results: LLMs have the potential to substantially alter the role of both peer reviewers and editors. Through supporting both actors in efficiently writing constructive reports or decision letters, LLMs can facilitate higher quality review and address issues of review shortage. However, the fundamental opacity of LLMs' training data, inner workings, data handling, and development processes raises concerns about potential biases, confidentiality, and the reproducibility of review reports. Additionally, as editorial work has a prominent function in defining and shaping epistemic communities, as well as negotiating normative frameworks within such communities, partly outsourcing this work to LLMs might have unforeseen consequences for social and epistemic relations within academia. Regarding performance, we identified major enhancements in a short period and expect LLMs to continue developing.

Conclusions: We believe that LLMs are likely to have a profound impact on academia and scholarly communication. While potentially beneficial to the scholarly communication system, many uncertainties remain and their use is not without risks. In particular, concerns about the amplification of existing biases and inequalities in access to appropriate infrastructure warrant further attention. For the moment, we recommend that if LLMs are used to write scholarly reviews and decision letters, reviewers and editors should disclose their use and accept full responsibility for data security and confidentiality, and their reports' accuracy, tone, reasoning and originality.

Citations: 0
Gender differences in peer reviewed grant applications, awards, and amounts: a systematic review and meta-analysis.
Q1 ETHICS Pub Date: 2023-05-03 DOI: 10.1186/s41073-023-00127-3
Karen B Schmaling, Stephen A Gallo

Background: Differential participation and success in grant applications may contribute to women's lesser representation in the sciences. This study's objective was to conduct a systematic review and meta-analysis to address the question of gender differences in grant award acceptance rates and reapplication award acceptance rates (potential bias in peer review outcomes) and other grant outcomes.

Methods: The review was registered on PROSPERO (CRD42021232153) and conducted in accordance with PRISMA 2020 standards. We searched Academic Search Complete, PubMed, and Web of Science for the timeframe 1 January 2005 to 31 December 2020, and forward and backward citations. Studies were included that reported data, by gender, on any of the following: grant applications or reapplications, awards, award amounts, award acceptance rates, or reapplication award acceptance rates. Studies that duplicated data reported in another study were excluded. Gender differences were investigated by meta-analyses and generalized linear mixed models. Doi plots and LFK indices were used to assess reporting bias.
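
To make the pooling step concrete, the following is a small sketch of a DerSimonian-Laird random-effects meta-analysis of study-level differences in acceptance rates, producing the kind of pooled estimate, 95% CI, and I² statistic reported in the Results below. It is not the authors' analysis code, and the study-level effects and variances are hypothetical placeholders.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study-level effects (e.g., risk differences in award acceptance rates)
    with a DerSimonian-Laird random-effects model.
    Returns the pooled effect, its 95% CI, and the I^2 heterogeneity statistic."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w_fixed = 1.0 / variances
    pooled_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    q = np.sum(w_fixed * (effects - pooled_fixed) ** 2)   # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_rand = 1.0 / (variances + tau2)
    pooled = np.sum(w_rand * effects) / np.sum(w_rand)
    se = np.sqrt(1.0 / np.sum(w_rand))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical study-level differences in acceptance rates (men minus women)
# and their variances; not the data analysed in this review.
effects = [0.02, -0.01, 0.03, 0.00, 0.01]
variances = [0.0004, 0.0003, 0.0006, 0.0002, 0.0005]
pooled, ci, i2 = dersimonian_laird(effects, variances)
print(f"pooled difference {pooled:.3f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}, I2 = {i2:.0f}%")
```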

Results: The searches identified 199 records, of which 13 were eligible. An additional 42 sources from forward and backward searches were eligible, for a total of 55 sources with data on one or more outcomes. The data from these studies ranged from 1975 to 2020: 49 sources were published papers and six were funders' reports (the latter were identified by forwards and backwards searches). Twenty-nine studies reported person-level data, 25 reported application-level data, and one study reported both: person-level data were used in analyses. Award acceptance rates were 1% higher for men, which was not significantly different from women (95% CI 3% more for men to 1% more for women, k = 36, n = 303,795 awards and 1,277,442 applications, I² = 84%). Reapplication award acceptance rates were significantly higher for men (9%, 95% CI 18% to 1%, k = 7, n = 7319 applications and 3324 awards, I² = 63%). Women received smaller award amounts (g = -2.28, 95% CI -4.92 to 0.36, k = 13, n = 212,935, I² = 100%).

Conclusions: The proportions of women that applied for grants, re-applied, accepted awards, and accepted awards after reapplication were less than the proportion of eligible women. However, the award acceptance rate was similar for women and men, implying no gender bias in this peer reviewed grant outcome. Women received smaller awards and fewer awards after re-applying, which may negatively affect continued scientific productivity. Greater transparency is needed to monitor and verify these data globally.

Citations: 3
Scientific sinkhole: estimating the cost of peer review based on survey data with snowball sampling.
Q1 ETHICS Pub Date: 2023-04-24 DOI: 10.1186/s41073-023-00128-2
Allana G LeBlanc, Joel D Barnes, Travis J Saunders, Mark S Tremblay, Jean-Philippe Chaput

Background: There are a variety of costs associated with publication of scientific findings. The purpose of this work was to estimate the cost of peer review in scientific publishing per reviewer, per year and for the entire scientific community.

Methods: An internet-based, self-report, cross-sectional survey, live between June 28, 2021 and August 2, 2021, was used. Participants were recruited via snowball sampling. No restrictions were placed on geographic location or field of study. Respondents who were asked to act as a peer-reviewer for at least one manuscript submitted to a scientific journal in 2020 were eligible. The primary outcome measure was the cost of peer review per person, per year (calculated as wage-cost x number of initial reviews and number of re-reviews per year). The secondary outcome was the cost of peer review globally (calculated as the number of peer-reviewed papers in Scopus x median wage-cost of initial review and re-review).
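
As a minimal sketch of the per-person and global cost formulas described in these Methods, the arithmetic below uses hypothetical wage and workload figures rather than the survey's values; every number is a placeholder, and the scaling to a global estimate assumes a made-up count of reviewed papers and reviews per paper.

```python
# Per-person annual cost = wage cost per initial review x number of initial reviews
#                        + wage cost per re-review x number of re-reviews.
# All numbers below are hypothetical placeholders, not the survey's estimates.
hourly_wage = 50.0          # assumed US$ per hour
hours_initial_review = 6.0  # assumed hours spent on an initial review
hours_re_review = 1.5       # assumed hours spent on a re-review
n_initial_reviews = 10      # reviews completed by one reviewer in a year
n_re_reviews = 4            # re-reviews completed by the same reviewer

cost_per_person = (hourly_wage * hours_initial_review * n_initial_reviews
                   + hourly_wage * hours_re_review * n_re_reviews)
print(f"per-person annual cost: US${cost_per_person:,.0f}")

# Scaling to a global estimate: multiply a per-review cost by the number of
# peer-reviewed papers (e.g., indexed in Scopus), as the secondary outcome describes.
n_reviewed_papers = 3_000_000   # hypothetical count of reviewed papers
reviews_per_paper = 2.5         # hypothetical average reviews per paper
global_cost = n_reviewed_papers * reviews_per_paper * hourly_wage * hours_initial_review
print(f"global annual cost: US${global_cost / 1e9:.1f} billion")
```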

Results: A total of 354 participants completed at least one question of the survey, and information necessary to calculate the cost of peer review was available for 308 participants from 33 countries (44% from Canada). The cost of peer review was estimated at US$1,272 per person, per year (US$1,015 for initial review and US$256 for re-review), or US$1.1-1.7 billion for the scientific community per year. The global cost of peer review was estimated at US$6 billion in 2020 when relying on the Dimensions database and taking into account reviewed-but-rejected manuscripts.

Conclusions: Peer review represents an important financial piece of scientific publishing. Our results may not represent all countries or fields of study, but are consistent with previous estimates and provide additional context from peer reviewers themselves. Researchers and scientists have long provided peer review as a contribution to the scientific community. Recognizing the importance of peer-review, institutions should acknowledge these costs in job descriptions, performance measurement, promotion packages, and funding applications. Journals should develop methods to compensate reviewers for their time and improve transparency while maintaining the integrity of the peer-review process.

Citations: 0
Investigating and preventing scientific misconduct using Benford's Law.
IF 7.2 Q1 ETHICS Pub Date: 2023-04-11 DOI: 10.1186/s41073-022-00126-w
Gregory M Eckhartt, Graeme D Ruxton

Integrity and trust in that integrity are fundamental to academic research. However, procedures for monitoring the trustworthiness of research, and for investigating cases where concerns about possible data fraud have been raised, are not well established. Here we suggest a practical approach for the investigation of work suspected of fraudulent data manipulation using Benford's Law. This should be of value to both individual peer-reviewers and academic institutions and journals. In this, we draw inspiration from well-established practices of financial auditing. We provide a synthesis of the literature on tests of adherence to Benford's Law, culminating in advice on a single initial test for digits in each position of numerical strings within a dataset. We also recommend further tests which may prove useful in the event that specific hypotheses regarding the nature of data manipulation can be justified. Importantly, our advice differs from the most common current implementations of tests of Benford's Law. Furthermore, we apply the approach to previously-published data, highlighting the efficacy of these tests in detecting known irregularities. Finally, we discuss the results of these tests, with reference to their strengths and limitations.
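
As a concrete illustration, the sketch below runs one common Benford check: a chi-square goodness-of-fit test of leading digits against the Benford distribution. Note that the paper recommends a specific test for digits in each position and explicitly differs from common implementations, so this leading-digit test is only a simplified, assumed example of the general idea, and a significant result is a prompt for scrutiny rather than evidence of fraud.

```python
import numpy as np
from scipy.stats import chisquare

def first_digit_benford_test(values):
    """Chi-square goodness-of-fit test of leading digits against Benford's Law.
    Returns the test statistic and p-value; a small p-value flags a dataset
    for closer scrutiny, not proof of manipulation."""
    # Extract the first significant digit of each non-zero value.
    digits = np.array([int(str(abs(v)).lstrip("0.").replace(".", "")[0])
                       for v in values if float(v) != 0.0])
    observed = np.array([(digits == d).sum() for d in range(1, 10)])
    expected = np.log10(1 + 1 / np.arange(1, 10)) * len(digits)  # Benford proportions
    stat, p = chisquare(observed, f_exp=expected)
    return stat, p

# Hypothetical example: exponentially distributed data roughly follow Benford's Law.
rng = np.random.default_rng(0)
data = rng.exponential(scale=1000, size=500)
print(first_digit_benford_test(data))
```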

Citations: 0
ACCORD guideline for reporting consensus-based methods in biomedical research and clinical practice: a study protocol
Q1 ETHICS Pub Date: 2022-06-07 DOI: 10.1186/s41073-022-00122-0
William T. Gattrell, Amrit Pali Hungin, Amy Price, Christopher C. Winchester, David Tovey, Ellen L. Hughes, Esther J. van Zuuren, Keith Goldman, Patricia Logullo, Robert Matheis, Niall Harrison

Background

Structured, systematic methods to formulate consensus recommendations, such as the Delphi process or nominal group technique, among others, provide the opportunity to harness the knowledge of experts to support clinical decision making in areas of uncertainty. They are widely used in biomedical research, in particular where disease characteristics or resource limitations mean that high-quality evidence generation is difficult. However, poor reporting of methods used to reach a consensus – for example, not clearly explaining the definition of consensus, or not stating how consensus group panellists were selected – can potentially undermine confidence in this type of research and hinder reproducibility. Our objective is therefore to systematically develop a reporting guideline to help the biomedical research and clinical practice community describe the methods or techniques used to reach consensus in a complete, transparent, and consistent manner.

Methods

The ACCORD (ACcurate COnsensus Reporting Document) project will take place in five stages and follow the EQUATOR Network guidance for the development of reporting guidelines. In Stage 1, a multidisciplinary Steering Committee has been established to lead and coordinate the guideline development process. In Stage 2, a systematic literature review will identify evidence on the quality of the reporting of consensus methodology, to obtain potential items for a reporting checklist. In Stage 3, Delphi methodology will be used to reach consensus regarding the checklist items, first among the Steering Committee, and then among a broader Delphi panel comprising participants with a range of expertise, including patient representatives. In Stage 4, the reporting guideline will be finalised in a consensus meeting, along with the production of an Explanation and Elaboration (E&E) document. In Stage 5, we plan to publish the reporting guideline and E&E document in open-access journals, supported by presentations at appropriate events. Dissemination of the reporting guideline, including a website linked to social media channels, is crucial for the document to be implemented in practice.

Discussion

The ACCORD reporting guideline will provide a set of minimum items that should be reported about methods used to achieve consensus, including approaches ranging from simple unstructured opinion gatherings to highly structured processes.

Citations: 14
What works for peer review and decision-making in research funding: a realist synthesis.
IF 7.2 Q1 ETHICS Pub Date: 2022-03-04 DOI: 10.1186/s41073-022-00120-2
Alejandra Recio-Saucedo, Ksenia Crane, Katie Meadmore, Kathryn Fackrell, Hazel Church, Simon Fraser, Amanda Blatch-Jones

Introduction: Allocation of research funds relies on peer review to support funding decisions, and these processes can be susceptible to biases and inefficiencies. The aim of this work was to determine which past interventions to peer review and decision-making have worked to improve research funding practices, how they worked, and for whom.

Methods: Realist synthesis of peer-reviewed publications and grey literature reporting interventions in peer review for research funding.

Results: We analysed 96 publications and 36 website sources. Sixty publications enabled us to extract stakeholder-specific context-mechanism-outcome configurations (CMOCs) for 50 interventions, which formed the basis of our synthesis. Shorter applications, reviewer and applicant training, virtual funding panels, enhanced decision models, institutional submission quotas, and applicant training in peer review and grant-writing reduced interrater variability, increased the relevance of funded research, reduced the time taken to write and review applications, promoted increased investment into innovation, and lowered the cost of panels.

Conclusions: Reports of 50 interventions in different areas of peer review provide useful guidance on ways of solving common issues with the peer review process. Evidence of the broader impact of these interventions on the research ecosystem is still needed, and future research should aim to identify processes that consistently work to improve peer review across funders and research contexts.

Citations: 0
Characteristics of 'mega' peer-reviewers.
Q1 ETHICS Pub Date: 2022-02-21 DOI: 10.1186/s41073-022-00121-1
Danielle B Rice, Ba' Pham, Justin Presseau, Andrea C Tricco, David Moher

Background: The demand for peer reviewers is often perceived as disproportionate to the supply and availability of reviewers. Considering characteristics associated with peer review behaviour can allow for the development of solutions to manage the growing demand for peer reviewers. The objective of this research was to compare characteristics among two groups of reviewers registered in Publons.

Methods: A descriptive cross-sectional study design was used to compare characteristics between (1) individuals completing at least 100 peer reviews ('mega peer reviewers') from January 2018 to December 2018 and (2) a control group of peer reviewers completing between 1 and 18 peer reviews over the same time period. Data was provided by Publons, which offers a repository of peer reviewer activities in addition to tracking peer reviewer publications and research metrics. Mann-Whitney tests and chi-square tests were conducted comparing characteristics (e.g., number of publications, number of citations, word count of peer review) of mega peer reviewers to the control group of reviewers.
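
For readers unfamiliar with these tests, the sketch below shows how such group comparisons are typically run in Python on hypothetical data; the group sizes echo those reported in the Results, but the values are simulated and are not the Publons data used in the study.

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(1)

# Hypothetical publication counts for the two reviewer groups (not Publons data).
mega_pubs = rng.poisson(lam=120, size=396)
control_pubs = rng.poisson(lam=40, size=1200)
u_stat, p_value = mannwhitneyu(mega_pubs, control_pubs, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.3g}")

# Hypothetical 2x2 table of gender by group (rows: mega, control; columns: male, female).
table = np.array([[364, 32],
                  [840, 360]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```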

Results: A total of 1596 peer reviewers had data provided by Publons. A total of 396 mega peer reviewers and a random sample of 1200 control group reviewers were included. A greater proportion of mega peer reviewers were male (92%) as compared to the control reviewers (70% male). Mega peer reviewers demonstrated a significantly greater average number of total publications, citations, receipt of Publons awards, and a higher average h index as compared to the control group of reviewers (all p < .001). We found no statistically significant differences in the number of words between the groups (p > .428).

Conclusions: Mega peer reviewers registered in the Publons database also had a higher number of publications and citations as compared to a control group of reviewers. Additional research that considers motivations associated with peer review behaviour should be conducted to help inform peer reviewing activity.

Citations: 6