
Research integrity and peer review: latest publications

Reporting quality of abstracts and inconsistencies with full text articles in pediatric orthopedic publications.
Q1 ETHICS | Pub Date: 2023-08-23 | DOI: 10.1186/s41073-023-00135-3
Sherif Ahmed Kamel, Tamer A El-Sobky

Background: Abstracts should provide brief yet comprehensive reporting of all components of a manuscript. Inaccurate reporting may mislead readers and affect citation practices. Our goal was to investigate the reporting quality of abstracts of interventional observational studies in three major pediatric orthopedic journals and to analyze any reporting inconsistencies between those abstracts and their corresponding full-text articles.

Methods: We selected a sample of 55 abstracts and their full-text articles published between 2018 and 2022. Included articles were primary therapeutic research investigating the results of treatments or interventions. Abstracts were scrutinized for reporting quality and for inconsistencies with their full-text versions using a 22-item checklist. The reporting quality of titles was assessed with a 3-item categorical scale.

Results: Abstracts of 48 (87%) articles contained reporting inaccuracies related to patient demographics. Follow-up and complications were each unreported in 21 (38%) of abstracts. The most common inconsistencies between abstracts and full-text articles concerned the reporting of inclusion or exclusion criteria, in 39 (71%) of articles, and of study correlations, in 27 (49%). Reporting quality of the titles was insufficient in 33 (60%) of articles.

Conclusions: In our study we found low reporting quality of abstracts and noticeable inconsistencies with full-text articles, especially regarding inclusion or exclusion criteria and study correlations. While the current sample is likely not representative of the overall pediatric orthopedic literature, we recommend that authors, reviewers, and editors ensure abstracts are reported accurately, ideally following the appropriate reporting guidelines, and that they double-check that there are no inconsistencies between abstracts and full-text articles. To capture essential study information, journals should also consider increasing abstract word limits.
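The counts in the Results map directly onto the reported percentages of the 55-article sample; a quick arithmetic check in Python, with counts taken from the abstract above:

```python
# Verify the reported percentages against the sample size (n = 55).
# Counts are those stated in the Results paragraph above.
n_articles = 55

findings = {
    "demographic inaccuracies": 48,           # reported as 87%
    "follow-up not reported": 21,             # reported as 38%
    "complications not reported": 21,         # reported as 38%
    "inclusion/exclusion inconsistency": 39,  # reported as 71%
    "correlation inconsistency": 27,          # reported as 49%
    "insufficient title quality": 33,         # reported as 60%
}

for label, count in findings.items():
    print(f"{label}: {count}/{n_articles} = {count / n_articles:.0%}")
```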

Citations: 0
Raising concerns on questionable ethics approvals - a case study of 456 trials from the Institut Hospitalo-Universitaire Méditerranée Infection.
Q1 ETHICS | Pub Date: 2023-08-03 | DOI: 10.1186/s41073-023-00134-4
Fabrice Frank, Nans Florens, Gideon Meyerowitz-Katz, Jérôme Barriere, Éric Billy, Véronique Saada, Alexander Samuel, Jacques Robert, Lonni Besançon

Background: The practice of clinical research is strictly regulated by law. During submission and review, the compliance of such research with the laws of the country where it was conducted is not always correctly declared by the authors or verified by the editors. Here, we report the case of a single institution for which one can find hundreds of publications with seemingly relevant ethical concerns, along with 10 months of follow-up through contacts with the editors of these articles. We thus argue for stricter control of ethical authorization by scientific editors, and we call on publishers to cooperate to this end.

Methods: We present an investigation of the ethical and legal aspects of 456 studies published by the IHU-MI (Institut Hospitalo-Universitaire Méditerranée Infection) in Marseille, France.

Results: We identified a wide range of issues with the stated research authorizations and ethics of the published studies, particularly with respect to the Institutional Review Board and the approval presented. Among the studies investigated, 248 were conducted under the same ethics approval number, even though the subjects, samples, and countries of investigation differed. Thirty-nine (39) did not even contain a reference to an ethics approval number despite presenting research on human beings. We contacted the journals that published these articles and provide their responses to our concerns. Notably, since our investigation and reporting to journals, PLOS has issued expressions of concern for several of the publications analyzed here.

Conclusion: This case presents an investigation of the veracity of ethics approvals, with more than 10 months of follow-up by independent researchers. We call for stricter control and cooperation in the handling of such cases, including an editorial requirement to upload ethics approval documents, guidelines from COPE to address such ethical concerns, and transparent editorial policies and timelines for answering them. All supplementary materials are available.
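The core screen the authors describe (a single ethics approval number reused across studies with different subjects, samples, and countries, or no number cited at all) can be sketched as a simple grouping pass. A minimal sketch follows; all record fields and values are hypothetical, not the study's actual data:

```python
from collections import defaultdict

# Hypothetical records: one entry per published study, with the ethics
# approval number stated in the paper (None if no number is given).
studies = [
    {"doi": "10.1000/ex1", "approval": "2016-011", "country": "France"},
    {"doi": "10.1000/ex2", "approval": "2016-011", "country": "Senegal"},
    {"doi": "10.1000/ex3", "approval": None,       "country": "France"},
]

by_approval = defaultdict(list)
missing = []
for s in studies:
    if s["approval"] is None:
        missing.append(s["doi"])  # human research with no approval number cited
    else:
        by_approval[s["approval"]].append(s)

for approval, group in by_approval.items():
    countries = {s["country"] for s in group}
    if len(group) > 1 and len(countries) > 1:
        # One approval number reused across different study populations/countries.
        print(f"{approval}: {len(group)} studies, countries {sorted(countries)}")

print(f"{len(missing)} studies cite no ethics approval number")
```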

Citations: 1
A new approach to grant review assessments: score, then rank.
Q1 ETHICS | Pub Date: 2023-07-24 | DOI: 10.1186/s41073-023-00131-7
Stephen A Gallo, Michael Pearce, Carole J Lee, Elena A Erosheva

Background: In many grant review settings, proposals are selected for funding on the basis of summary statistics of review ratings. Challenges of this approach (including the presence of ties and unclear ordering of funding preference for proposals) could be mitigated if rankings such as top-k preferences or paired comparisons, which are local evaluations that enforce ordering across proposals, were also collected and incorporated in the analysis of review ratings. However, analyzing ratings and rankings simultaneously has not been done until recently. This paper describes a practical method for integrating rankings and scores and demonstrates its usefulness for making funding decisions in real-world applications.

Methods: We first present the application of our existing joint model for rankings and ratings, the Mallows-Binomial, in obtaining an integrated score for each proposal and generating the induced preference ordering. We then apply this methodology to several theoretical "toy" examples of rating and ranking data, designed to demonstrate specific properties of the model. We then describe an innovative protocol for collecting rankings of the top-six proposals as an add-on to the typical peer review scoring procedures and provide a case study using actual peer review data to exemplify the output and how the model can appropriately resolve judges' evaluations.

Results: For the theoretical examples, we show how the model can provide a preference order for equally rated proposals by incorporating rankings, for proposals with ratings and only partial rankings (and how these orderings differ from a ratings-only approach), and for proposals where judges provide internally inconsistent ratings/rankings or outlier scoring. Finally, using real-world panel data, we discuss how this method can provide accurate information about funding priority in a format well suited to research funding decisions.
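As a rough intuition for the first of these results, rankings can separate proposals that ratings alone leave tied. The sketch below is a deliberately simplified lexicographic stand-in, not the Mallows-Binomial model itself, which jointly models both data types; all names and data are hypothetical:

```python
import statistics

# Toy illustration: two proposals tie on mean rating, and collected
# rankings are used to order them. Lower rating = better, as on many
# grant review scales.
ratings = {
    "A": [2, 3, 4],   # mean 3.0
    "B": [3, 3, 3],   # mean 3.0 -- tied with A on ratings alone
    "C": [1, 2, 2],
}
# Each judge's top-3 ranking, best first (hypothetical data).
rankings = [["C", "A", "B"], ["C", "A", "B"], ["C", "B", "A"]]

def mean_rating(p):
    return statistics.mean(ratings[p])

def mean_rank(p):
    return statistics.mean(r.index(p) for r in rankings)

# Order by mean rating first, then use mean rank position to break ties.
order = sorted(ratings, key=lambda p: (mean_rating(p), mean_rank(p)))
print(order)  # ['C', 'A', 'B']: the rankings separate the A/B tie
```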

Conclusions: A methodology is provided to collect and employ both rating and ranking data in peer review assessments of proposal submission quality, highlighting several advantages over methods relying on ratings alone. This method leverages information to most accurately distill reviewer opinion into a useful output to make an informed funding decision and is general enough to be applied to settings such as in the NIH panel review process.

Citations: 0
Institutional capacity to prevent and manage research misconduct: perspectives from Kenyan research regulators.
Q1 ETHICS | Pub Date: 2023-07-12 | DOI: 10.1186/s41073-023-00132-6
Edwin Were, Jepchirchir Kiplagat, Eunice Kaguiri, Rose Ayikukwei, Violet Naanyu

Background: Research misconduct, i.e., fabrication, falsification, and plagiarism, is associated with individual, institutional, national, and global factors. Researchers' perceptions of weak or non-existent institutional guidelines on the prevention and management of research misconduct can encourage these practices. Few countries in Africa have clear guidance on research misconduct. In Kenya, the capacity of academic and research institutions to prevent or manage research misconduct has not been documented. The objective of this study was to explore Kenyan research regulators' perceptions of the occurrence of research misconduct and of the institutional capacity to prevent or manage it.

Methods: Interviews with open-ended questions were conducted with 27 research regulators (chairs and secretaries of ethics committees, research directors of academic and research institutions, and national regulatory bodies). Among other questions, participants were asked: (1) How common is research misconduct in your view? (2) Does your institution have the capacity to prevent research misconduct? (3) Does your institution have the capacity to manage research misconduct? Their responses were audiotaped, transcribed, and coded using NVivo software. Deductive coding covered predefined themes, including perceptions of the occurrence, prevention, detection, investigation, and management of research misconduct. Results are presented with illustrative quotes.

Results: Respondents perceived research misconduct to be very common among students developing thesis reports. Their responses suggested there was no dedicated capacity to prevent or manage research misconduct at either the institutional or the national level. There were no specific national guidelines on research misconduct. At the institutional level, the only capacity/efforts mentioned were directed at reducing, detecting, and managing student plagiarism. There was no direct mention of the capacity to manage fabrication, falsification, or misconduct by faculty researchers. We recommend the development of a Kenyan code of conduct or research integrity guidelines that would cover misconduct.

Citations: 0
Publisher Correction: Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.
Q1 ETHICS | Pub Date: 2023-07-10 | DOI: 10.1186/s41073-023-00136-2
Mohammad Hosseini, Serge P J M Horbach
{"title":"Publisher Correction: Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.","authors":"Mohammad Hosseini,&nbsp;Serge P J M Horbach","doi":"10.1186/s41073-023-00136-2","DOIUrl":"https://doi.org/10.1186/s41073-023-00136-2","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"8 1","pages":"7"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10334596/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10170319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Checklist to assess Trustworthiness in RAndomised Controlled Trials (TRACT checklist): concept proposal and pilot.
IF 7.2 | Q1 ETHICS | Pub Date: 2023-06-20 | DOI: 10.1186/s41073-023-00130-8
Ben W Mol, Shimona Lai, Ayesha Rahim, Esmée M Bordewijk, Rui Wang, Rik van Eekelen, Lyle C Gurrin, Jim G Thornton, Madelon van Wely, Wentao Li

Objectives: To propose a checklist that can be used to assess the trustworthiness of randomized controlled trials (RCTs).

Design: A screening tool was developed using the four-stage approach proposed by Moher et al. This included defining the scope, reviewing the evidence base, suggesting a list of items from piloting, and holding a consensus meeting. The initial checklist was set up by a core group who had been involved in the assessment of problematic RCTs for several years. We piloted it with a consensus panel of stakeholders, including health professionals, reviewers, journal editors, policymakers, researchers, and evidence-synthesis specialists. Each member was asked to score three articles with the checklist, and the results were then discussed in consensus meetings.

Outcome: The Trustworthiness in RAndomised Clinical Trials (TRACT) checklist includes 19 items organised into seven domains that are applicable to every RCT: 1) Governance, 2) Author Group, 3) Plausibility of Intervention Usage, 4) Timeframe, 5) Drop-out Rates, 6) Baseline Characteristics, and 7) Outcomes. Each item can be answered as either no concerns, some concerns/no information, or major concerns. If a study is assessed and found to have a majority of items rated at a major concern level, then editors, reviewers or evidence synthesizers should consider a more thorough investigation, including assessment of original individual participant data.
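The screening rule in the last sentence transcribes almost directly into code; a minimal sketch, with entirely hypothetical item answers:

```python
# Minimal sketch of the TRACT screening rule described above: 19 items,
# each answered "no" (no concerns), "some" (some concerns / no
# information), or "major" (major concerns). Answers are hypothetical.
answers = ["no"] * 6 + ["some"] * 3 + ["major"] * 10   # 19 items in total

major = answers.count("major")
if major > len(answers) / 2:
    print(f"{major}/19 items at major concern: consider a thorough "
          "investigation, incl. original individual participant data")
else:
    print(f"{major}/19 items at major concern: no majority trigger")
```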

Conclusions: The TRACT checklist is the first checklist developed specifically to detect trustworthiness issues in RCTs. It might help editors, publishers and researchers to screen for such issues in submitted or published RCTs in a transparent and replicable manner.

Citations: 0
Responsible research practices could be more strongly endorsed by Australian university codes of research conduct.
Q1 ETHICS | Pub Date: 2023-06-06 | DOI: 10.1186/s41073-023-00129-1
Yi Kai Ong, Kay L Double, Lisa Bero, Joanna Diong

Background: This study aimed to investigate how strongly Australian university codes of research conduct endorse responsible research practices.

Methods: Codes of research conduct from 25 Australian universities active in health and medical research were obtained from public websites, and audited against 19 questions to assess how strongly they (1) defined research integrity, research quality, and research misconduct, (2) required research to be approved by an appropriate ethics committee, (3) endorsed 9 responsible research practices, and (4) discouraged 5 questionable research practices.

Results: Overall, a median of 10 (IQR 9 to 12) of the 19 practices covered in the questions were mentioned, weakly endorsed, or strongly endorsed. Five to 8 of 9 responsible research practices were mentioned, weakly endorsed, or strongly endorsed, and 3 questionable research practices were discouraged. Results are stratified by Group of Eight (n = 8) and other (n = 17) universities. Specifically, (1) 6 (75%) Group of Eight and 11 (65%) other codes of research conduct defined research integrity, 4 (50%) and 8 (47%) defined research quality, and 7 (88%) and 16 (94%) defined research misconduct. (2) All codes required ethics approval for human and animal research. (3) All codes required conflicts of interest to be declared, but there was variability in how strongly other research practices were endorsed. The most commonly endorsed practices were ensuring researcher training in research integrity [8 (100%) and 16 (94%)] and making study data publicly available [6 (75%) and 12 (71%)]. The least commonly endorsed practices were making analysis code publicly available [0 (0%) and 0 (0%)] and registering analysis protocols [0 (0%) and 1 (6%)]. (4) Most codes discouraged fabricating data [5 (63%) and 15 (88%)], selectively deleting or modifying data [5 (63%) and 15 (88%)], and selective reporting of results [3 (38%) and 15 (88%)]. No codes discouraged p-hacking or hypothesising after results are known.

Conclusions: Responsible research practices could be more strongly endorsed by Australian university codes of research conduct. Our findings may not be generalisable to smaller universities, or those not active in health and medical research.

Citations: 1
Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.
IF 7.2 | Q1 ETHICS | Pub Date: 2023-05-18 | DOI: 10.1186/s41073-023-00133-5
Mohammad Hosseini, Serge P J M Horbach

Background: The emergence of systems based on large language models (LLMs) such as OpenAI's ChatGPT has created a range of discussions in scholarly circles. Since LLMs generate grammatically correct and mostly relevant (yet sometimes outright wrong, irrelevant or biased) outputs in response to provided prompts, using them in various writing tasks including writing peer review reports could result in improved productivity. Given the significance of peer reviews in the existing scholarly publication landscape, exploring challenges and opportunities of using LLMs in peer review seems urgent. After the generation of the first scholarly outputs with LLMs, we anticipate that peer review reports too would be generated with the help of these systems. However, there are currently no guidelines on how these systems should be used in review tasks.

Methods: To investigate the potential impact of using LLMs on the peer review process, we used five core themes within discussions about peer review suggested by Tennant and Ross-Hellauer. These include 1) reviewers' role, 2) editors' role, 3) functions and quality of peer reviews, 4) reproducibility, and 5) the social and epistemic functions of peer reviews. We provide a small-scale exploration of ChatGPT's performance regarding identified issues.

Results: LLMs have the potential to substantially alter the role of both peer reviewers and editors. By supporting both actors in efficiently writing constructive reports or decision letters, LLMs can facilitate higher-quality review and address the shortage of reviewers. However, the fundamental opacity of LLMs' training data, inner workings, data handling, and development processes raises concerns about potential biases, confidentiality, and the reproducibility of review reports. Additionally, as editorial work has a prominent function in defining and shaping epistemic communities, as well as in negotiating normative frameworks within such communities, partly outsourcing this work to LLMs might have unforeseen consequences for social and epistemic relations within academia. Regarding performance, we identified major enhancements within a short period and expect LLMs to continue developing.

Conclusions: We believe that LLMs are likely to have a profound impact on academia and scholarly communication. While potentially beneficial to the scholarly communication system, many uncertainties remain and their use is not without risks. In particular, concerns about the amplification of existing biases and inequalities in access to appropriate infrastructure warrant further attention. For the moment, we recommend that if LLMs are used to write scholarly reviews and decision letters, reviewers and editors should disclose their use and accept full responsibility for data security and confidentiality, and their reports' accuracy, tone, reasoning and originality.

Citations: 0
Gender differences in peer reviewed grant applications, awards, and amounts: a systematic review and meta-analysis.
Q1 ETHICS | Pub Date: 2023-05-03 | DOI: 10.1186/s41073-023-00127-3
Karen B Schmaling, Stephen A Gallo

Background: Differential participation and success in grant applications may contribute to women's lesser representation in the sciences. This study's objective was to conduct a systematic review and meta-analysis to address the question of gender differences in grant award acceptance rates and reapplication award acceptance rates (potential bias in peer review outcomes) and other grant outcomes.

Methods: The review was registered on PROSPERO (CRD42021232153) and conducted in accordance with PRISMA 2020 standards. We searched Academic Search Complete, PubMed, and Web of Science for the timeframe 1 January 2005 to 31 December 2020, and forward and backward citations. Studies were included that reported data, by gender, on any of the following: grant applications or reapplications, awards, award amounts, award acceptance rates, or reapplication award acceptance rates. Studies that duplicated data reported in another study were excluded. Gender differences were investigated by meta-analyses and generalized linear mixed models. Doi plots and LFK indices were used to assess reporting bias.
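For readers unfamiliar with the pooling step behind such estimates, a minimal random-effects meta-analysis in the standard DerSimonian-Laird form is sketched below. The effect sizes and variances are hypothetical, and the review's actual analyses also used generalized linear mixed models:

```python
import math

# Minimal DerSimonian-Laird random-effects meta-analysis. Effects and
# variances are hypothetical values from k = 4 studies.
effects = [0.40, -0.10, 0.25, 0.02]
variances = [0.01, 0.02, 0.015, 0.005]

w = [1 / v for v in variances]                       # fixed-effect weights
fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)                        # between-study variance
i2 = max(0.0, (q - df) / q) if q > 0 else 0.0        # heterogeneity, I^2

w_re = [1 / (v + tau2) for v in variances]           # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"pooled={pooled:.3f}, 95% CI=({pooled - 1.96 * se:.3f}, "
      f"{pooled + 1.96 * se:.3f}), I2={i2:.0%}")
```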

Results: The searches identified 199 records, of which 13 were eligible. An additional 42 sources from forward and backward searches were eligible, for a total of 55 sources with data on one or more outcomes. The data from these studies ranged from 1975 to 2020: 49 sources were published papers and six were funders' reports (the latter were identified by forward and backward searches). Twenty-nine studies reported person-level data, 25 reported application-level data, and one study reported both; person-level data were used in the analyses. Award acceptance rates were 1% higher for men, which was not significantly different from women (95% CI: from 3% higher for men to 1% higher for women; k = 36, n = 303,795 awards and 1,277,442 applications, I2 = 84%). Reapplication award acceptance rates were significantly higher for men (9%, 95% CI 18% to 1%, k = 7, n = 7319 applications and 3324 awards, I2 = 63%). Women received smaller award amounts (g = -2.28, 95% CI -4.92 to 0.36, k = 13, n = 212,935, I2 = 100%).

Conclusions: The proportions of women who applied for grants, re-applied, accepted awards, and accepted awards after reapplication were smaller than the proportion of eligible women. However, the award acceptance rate was similar for women and men, implying no gender bias in this peer-reviewed grant outcome. Women received smaller awards and fewer awards after re-applying, which may negatively affect continued scientific productivity. Greater transparency is needed to monitor and verify these data globally.

Citations: 3
Scientific sinkhole: estimating the cost of peer review based on survey data with snowball sampling.
Q1 ETHICS | Pub Date: 2023-04-24 | DOI: 10.1186/s41073-023-00128-2
Allana G LeBlanc, Joel D Barnes, Travis J Saunders, Mark S Tremblay, Jean-Philippe Chaput

Background: There are a variety of costs associated with the publication of scientific findings. The purpose of this work was to estimate the cost of peer review in scientific publishing per reviewer, per year, and for the entire scientific community.

Methods: An internet-based, self-report, cross-sectional survey, live between June 28, 2021 and August 2, 2021, was used. Participants were recruited via snowball sampling. No restrictions were placed on geographic location or field of study. Respondents who were asked to act as a peer reviewer for at least one manuscript submitted to a scientific journal in 2020 were eligible. The primary outcome measure was the cost of peer review per person, per year (calculated as wage-cost × the number of initial reviews and re-reviews per year). The secondary outcome was the cost of peer review globally (calculated as the number of peer-reviewed papers in Scopus × the median wage-cost of initial review and re-review).
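Both outcome measures are simple products of wage-cost and review counts; a sketch with placeholder inputs follows (every number below is hypothetical, not the survey's data):

```python
# Sketch of the paper's two outcome measures, using the formulas stated
# above. All wage and count inputs are hypothetical placeholders.
hourly_wage_cost = 50.0                     # US$/h, hypothetical
hours_initial, hours_rereview = 6.0, 1.5    # hypothetical time per review

def cost_per_person_per_year(n_initial, n_rereview):
    """Primary outcome: wage-cost x yearly initial reviews and re-reviews."""
    return hourly_wage_cost * (hours_initial * n_initial
                               + hours_rereview * n_rereview)

print(f"US${cost_per_person_per_year(4, 2):,.0f} per person-year")

# Secondary outcome: global cost = number of peer-reviewed papers times
# the median wage-cost of initial review and re-review per paper.
papers, median_cost_per_paper = 3_000_000, 2_000.0   # hypothetical
print(f"US${papers * median_cost_per_paper / 1e9:.1f} billion globally")
```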

Results: A total of 354 participants completed at least one question of the survey, and the information necessary to calculate the cost of peer review was available for 308 participants from 33 countries (44% from Canada). The cost of peer review was estimated at US$1,272 per person, per year (US$1,015 for initial review and US$256 for re-review), or US$1.1-1.7 billion for the scientific community per year. The global cost of peer review was estimated at US$6 billion in 2020 when relying on the Dimensions database and taking into account reviewed-but-rejected manuscripts.

Conclusions: Peer review represents an important financial component of scientific publishing. Our results may not represent all countries or fields of study, but they are consistent with previous estimates and provide additional context from peer reviewers themselves. Researchers and scientists have long provided peer review as a contribution to the scientific community. Recognizing the importance of peer review, institutions should acknowledge these costs in job descriptions, performance measurement, promotion packages, and funding applications. Journals should develop methods to compensate reviewers for their time and improve transparency while maintaining the integrity of the peer-review process.

Citations: 0