
Research integrity and peer review: latest publications

Reporting in the abstracts presented at the 5th AfriNEAD (African Network for Evidence-to-Action in Disability) Conference in Ghana.
Q1 ETHICS Pub Date : 2019-01-16 eCollection Date: 2019-01-01 DOI: 10.1186/s41073-018-0061-3
Eric Badu, Paul Okyere, Diane Bell, Naomi Gyamfi, Maxwell Peprah Opoku, Peter Agyei-Baffour, Anthony Kwaku Edusei

Introduction: The abstracts of a conference are important for informing the participants about the results that are communicated. However, there is poor reporting in conference abstracts in disability research. This paper aims to assess the reporting in the abstracts presented at the 5th African Network for Evidence-to-Action in Disability (AfriNEAD) Conference in Ghana.

Methods: This descriptive study extracted information from the abstracts presented at the 5th AfriNEAD Conference. Three reviewers independently reviewed all the included abstracts using a predefined data extraction form. Descriptive statistics were used to analyze the extracted information, using Stata version 15.

Results: Of the 76 abstracts assessed, 54 met the inclusion criteria, while 22 were excluded. More than half of all the included abstracts (32/54; 59.26%) were studies conducted in Ghana. Some of the included abstracts did not report on the study design (37/54; 68.5%), the type of analysis performed (30/54; 55.56%), the sampling (27/54; 50%), and the sample size (18/54; 33.33%). Almost all the included abstracts did not report the age distribution and the gender of the participants.

Conclusion: The study findings confirm that there is poor reporting of methods and findings in conference abstracts. Future conference organizers should critically examine abstracts to ensure that these issues are adequately addressed, so that findings are effectively communicated to participants.

Citations: 1
Replicability and replication in the humanities.
Q1 ETHICS Pub Date : 2019-01-09 eCollection Date: 2019-01-01 DOI: 10.1186/s41073-018-0060-4
Rik Peels

A large number of scientists and several news platforms have, over the last few years, been speaking of a replication crisis in various academic disciplines, especially the biomedical and social sciences. This paper answers the novel question of whether we should also pursue replication in the humanities. First, I create more conceptual clarity by defining, in addition to the term "humanities," various key terms in the debate on replication, such as "reproduction" and "replicability." In doing so, I pay attention to what is supposed to be the object of replication: certain studies, particular inferences, or specific results. After that, I spell out three reasons for thinking that replication in the humanities is not possible and argue that they are unconvincing. Subsequently, I give a more detailed case for thinking that replication in the humanities is possible. Finally, I explain why such replication in the humanities is not only possible, but also desirable.

Citations: 0
Professional medical writing support and the quality, ethics and timeliness of clinical trial reporting: a systematic review
Q1 ETHICS Pub Date : 2018-12-20 DOI: 10.1186/s41073-019-0073-7
O. Evuarherhe, W. Gattrell, Richard White, C. Winchester
{"title":"Professional medical writing support and the quality, ethics and timeliness of clinical trial reporting: a systematic review","authors":"O. Evuarherhe, W. Gattrell, Richard White, C. Winchester","doi":"10.1186/s41073-019-0073-7","DOIUrl":"https://doi.org/10.1186/s41073-019-0073-7","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0073-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46108951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Protocol for the development of a CONSORT extension for RCTs using cohorts and routinely collected health data.
Q1 ETHICS Pub Date : 2018-10-29 eCollection Date: 2018-01-01 DOI: 10.1186/s41073-018-0053-3
Linda Kwakkenbos, Edmund Juszczak, Lars G Hemkens, Margaret Sampson, Ole Fröbert, Clare Relton, Chris Gale, Merrick Zwarenstein, Sinéad M Langan, David Moher, Isabelle Boutron, Philippe Ravaud, Marion K Campbell, Kimberly A Mc Cord, Tjeerd P van Staa, Lehana Thabane, Rudolf Uher, Helena M Verkooijen, Eric I Benchimol, David Erlinge, Maureen Sauvé, David Torgerson, Brett D Thombs

Background: Randomized controlled trials (RCTs) are often complex and expensive to perform. Less than one third achieve planned recruitment targets, follow-up can be labor-intensive, and many have limited real-world generalizability. Designs for RCTs conducted using cohorts and routinely collected health data, including registries, electronic health records, and administrative databases, have been proposed to address these challenges and are being rapidly adopted. These designs, however, are relatively recent innovations, and published RCT reports often do not describe important aspects of their methodology in a standardized way. Our objective is to extend the Consolidated Standards of Reporting Trials (CONSORT) statement with a consensus-driven reporting guideline for RCTs using cohorts and routinely collected health data.

Methods: The development of this CONSORT extension will consist of five phases. Phase 1 (completed) consisted of the project launch, including fundraising, the establishment of a research team, and development of a conceptual framework. In phase 2, a systematic review will be performed to identify publications (1) that describe methods or reporting considerations for RCTs conducted using cohorts and routinely collected health data or (2) that are protocols or report results from such RCTs. An initial "long list" of possible modifications to CONSORT checklist items and possible new items for the reporting guideline will be generated based on the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) and The REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) statements. Additional possible modifications and new items will be identified based on the results of the systematic review. Phase 3 will consist of a three-round Delphi exercise with methods and content experts to evaluate the "long list" and generate a "short list" of key items. In phase 4, these items will serve as the basis for an in-person consensus meeting to finalize a core set of items to be included in the reporting guideline and checklist. Phase 5 will involve drafting the checklist and elaboration-explanation documents, and dissemination and implementation of the guideline.

Discussion: Development of this CONSORT extension will contribute to more transparent reporting of RCTs conducted using cohorts and routinely collected health data.

Citations: 0
Designing integrated research integrity training: authorship, publication, and peer review
Q1 ETHICS Pub Date : 2018-02-26 DOI: 10.1186/s41073-018-0046-2
Mark Hooper, Virginia Barbour, Anne Walsh, Stephanie Bradbury, Jane Jacobs
This paper describes the experience of an academic institution, the Queensland University of Technology (QUT), developing training courses about research integrity practices in authorship, publication, and journal peer review. The importance of providing research integrity training in these areas is now widely accepted; however, it remains an open question how best to conduct this training. For this reason, it is vital for institutions, journals, and peak bodies to share learnings. We describe how we have collaborated across our institution to develop training that supports QUT’s principles and which is in line with insights from contemporary research on best practices in learning design, universal design, and faculty involvement. We also discuss how we have refined these courses iteratively over time, and consider potential mechanisms for evaluating the effectiveness of the courses more formally.
Citations: 0
Simple decision-tree tool to facilitate author identification of reporting guidelines during submission: a before-after study.
Q1 ETHICS Pub Date : 2017-12-18 eCollection Date: 2017-01-01 DOI: 10.1186/s41073-017-0044-9
Daniel R Shanahan, Ines Lopes de Sousa, Diana M Marshall

Background: There is evidence that direct journal endorsement of reporting guidelines can lead to important improvements in the quality and reliability of the published research. However, over the last 20 years, there has been a proliferation of reporting guidelines for different study designs, making it impractical for a journal to explicitly endorse them all. The objective of this study was to investigate whether a decision tree tool made available during the submission process facilitates author identification of the relevant reporting guideline.

Methods: This was a prospective 14-week before-after study across four speciality medical research journals. During the submission process, authors were prompted to follow the relevant reporting guideline from the EQUATOR Network and asked to confirm that they followed the guideline ('before'). After 7 weeks, this prompt was updated to include a direct link to the decision-tree tool and an additional prompt for those authors who stated that 'no guidelines were applicable' ('after'). For each article submitted, the authors' response, what guideline they followed (if any) and what reporting guideline they should have followed (including none relevant) were recorded.

Results: Overall, 590 manuscripts were included in this analysis-300 in the before cohort and 290 in the after. There were relevant reporting guidelines for 75% of manuscripts in each group; STROBE was the most commonly applicable reporting guideline, relevant for 35% (n = 106) and 37% (n = 106) of manuscripts, respectively. Use of the tool was associated with an 8.4% improvement in the number of authors correctly identifying the relevant reporting guideline for their study (p < 0.0001), a 14% reduction in the number of authors incorrectly stating that there were no relevant reporting guidelines (p < 0.0001), and a 1.7% reduction in authors choosing a guideline (p = 0.10). However, the 'after' cohort also saw a significant increase in the number of authors stating that there were relevant reporting guidelines for their study, but not specifying which (34 vs 29%; p = 0.04).

Conclusion: This study suggests that use of a decision-tree tool during submission of a manuscript is associated with improved author identification of the relevant reporting guidelines for their study type; however, the majority of authors still failed to correctly identify the relevant guidelines.
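The abstract does not reproduce the decision tree itself, so the snippet below is only an illustrative sketch of the kind of design-to-guideline lookup such a tool might encode. The pairings follow standard EQUATOR Network recommendations (CONSORT for randomized trials, STROBE for observational studies, PRISMA for systematic reviews, and so on); the study-design labels and function name are hypothetical, not taken from the published tool.

```python
# Hypothetical sketch only: the published tool's actual decision logic is not
# given in the abstract. The pairings below follow the standard EQUATOR
# Network recommendations for each study design.
REPORTING_GUIDELINES = {
    "randomized trial": "CONSORT",
    "observational study": "STROBE",        # cohort, case-control, cross-sectional
    "systematic review": "PRISMA",
    "diagnostic accuracy study": "STARD",
    "prediction model": "TRIPOD",
    "qualitative study": "SRQR or COREQ",
    "case report": "CARE",
    "animal study": "ARRIVE",
    "economic evaluation": "CHEERS",
}

def suggest_guideline(study_design: str) -> str:
    """Return the reporting guideline usually recommended for a study design."""
    return REPORTING_GUIDELINES.get(
        study_design.strip().lower(),
        "no specific guideline identified - see equator-network.org",
    )

print(suggest_guideline("Randomized trial"))     # CONSORT
print(suggest_guideline("Observational study"))  # STROBE
```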

Citations: 14
'Are you siding with a personality or the grant proposal?': observations on how peer review panels function.
Q1 ETHICS Pub Date : 2017-12-04 eCollection Date: 2017-01-01 DOI: 10.1186/s41073-017-0043-x
John Coveney, Danielle L Herbert, Kathy Hill, Karen E Mow, Nicholas Graves, Adrian Barnett

Background: In Australia, the peer review process for competitive funding is usually conducted by a peer review group in conjunction with prior assessment from external assessors. This process is quite mysterious to those outside it. The purpose of this research was to throw light on grant review panels (sometimes called the 'black box') through an examination of the impact of panel procedures, panel composition and panel dynamics on the decision-making in the grant review process. A further purpose was to compare experience of a simplified review process with more conventional processes used in assessing grant proposals in Australia.

Methods: This project was one aspect of a larger study into the costs and benefits of a simplified peer review process. The Queensland University of Technology (QUT)-simplified process was compared with the National Health and Medical Research Council's (NHMRC) more complex process. Grant review panellists involved in both processes were interviewed about their experience of the decision-making process that assesses the excellence of an application. All interviews were recorded and transcribed. Each transcription was de-identified and returned to the respondent for review. Final transcripts were read repeatedly and coded, and similar codes were amalgamated into categories that were used to build themes. Final themes were shared with the research team for feedback.

Results: Two major themes arose from the research: (1) assessing grant proposals and (2) factors influencing the fairness, integrity and objectivity of review. Issues such as the quality of writing in a grant proposal, comparison of the two review methods, the purpose and use of the rebuttal, assessing the financial value of funded projects, the importance of the experience of the panel membership and the role of track record and the impact of group dynamics on the review process were all discussed. The research also examined the influence of research culture on decision-making in grant review panels. One of the aims of this study was to compare a simplified review process with more conventional processes. Generally, participants were supportive of the simplified process.

Conclusions: Transparency in the grant review process will result in better appreciation of the outcome. Despite the provision of clear guidelines for peer review, reviewing processes are likely to be subjective to the extent that different reviewers apply different rules. The peer review process will come under more scrutiny as funding for research becomes even more competitive. There is justification for further research on the process, especially of a kind that taps more deeply into the 'black box' of peer review.

Citations: 0
Percentage-based Author Contribution Index: a universal measure of author contribution to scientific articles.
Q1 ETHICS Pub Date : 2017-11-03 eCollection Date: 2017-01-01 DOI: 10.1186/s41073-017-0042-y
Stéphane Boyer, Takayoshi Ikeda, Marie-Caroline Lefort, Jagoba Malumbres-Olarte, Jason M Schmidt

Background: Deciphering the amount of work provided by different co-authors of a scientific paper has been a recurrent problem in science. Despite the myriad of metrics available, the scientific community still largely relies on the position in the list of authors to evaluate contributions, a metric that attributes subjective and unfounded credit to co-authors. We propose an easy to apply, universally comparable and fair metric to measure and report co-authors' contributions in the scientific literature.

Methods: The proposed Author Contribution Index (ACI) is based on contribution percentages provided by the authors, preferably at the time of submission. Researchers can use ACI to compare the contributions of different authors, describe the contribution profile of a particular researcher or analyse how contribution changes through time. We provide such an analysis based on contribution percentages provided by 97 scientists from the field of ecology who voluntarily responded to an online anonymous survey.

Results: ACI is simple to understand and to implement because it is based solely on percentage contributions and the number of co-authors. It provides a continuous score that reflects the contribution of one author as compared to the average contribution of all other authors. For example, ACI(i) = 3 means that author i contributed three times more than what the other authors contributed on average. Our analysis comprised 836 papers published in 2014-2016 and revealed patterns of ACI values that relate to career advancement.

Conclusion: There are many examples of author contribution indices that have been proposed but none has really been adopted by scientific journals. Many of the proposed solutions are either too complicated, not accurate enough or not comparable across articles, authors and disciplines. The author contribution index presented here addresses these three major issues and has the potential to contribute to more transparency in the science literature. If adopted by scientific journals, it could provide job seekers, recruiters and evaluating bodies with a tool to gather information that is essential to them and cannot be easily and accurately obtained otherwise. We also suggest that scientists use the index regardless of whether it is implemented by journals or not.
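Read literally, the Results above define the index as an author's percentage contribution divided by the mean percentage of all other co-authors, so that ACI(i) = 3 corresponds to three times the average co-author share. A minimal sketch under that reading; the function name and example split are mine, not from the paper.

```python
def author_contribution_index(percent_contributions):
    """ACI for each co-author, assuming ACI(i) = C_i / mean(C_j for j != i),
    where the C values are the percentage contributions reported by the
    authors (they should sum to 100). Requires at least two co-authors."""
    total = sum(percent_contributions)
    n = len(percent_contributions)
    return [c / ((total - c) / (n - 1)) for c in percent_contributions]

# A three-author paper with a 60/20/20 split: the first author's ACI is
# 60 / ((20 + 20) / 2) = 3, i.e. three times the average contribution of the
# other co-authors, matching the example given in the abstract.
print(author_contribution_index([60, 20, 20]))  # [3.0, 0.5, 0.5]
```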

Citations: 0
Selective citation in the literature on swimming in chlorinated water and childhood asthma: a network analysis.
Q1 ETHICS Pub Date : 2017-10-02 eCollection Date: 2017-01-01 DOI: 10.1186/s41073-017-0041-z
Bram Duyx, Miriam J E Urlings, Gerard M H Swaen, Lex M Bouter, Maurice P Zeegers

Background: Knowledge development depends on an unbiased representation of the available evidence. Selective citation may distort this representation. Recently, some controversy emerged regarding the possible impact of swimming on childhood asthma, raising the question about the role of selective citation in this field. Our objective was to assess the occurrence and determinants of selective citation in scientific publications on the relationship between swimming in chlorinated pools and childhood asthma.

Methods: We identified scientific journal articles on this relationship via a systematic literature search. The following factors were taken into account: study outcome (authors' conclusion, data-based conclusion), other content-related article characteristics (article type, sample size, research quality, specificity), content-unrelated article characteristics (language, publication title, funding source, number of authors, number of affiliations, number of references, journal impact factor), author characteristics (gender, country, affiliation), and citation characteristics (time to citation, authority, self-citation). To assess the impact of these factors on citation, we performed a series of univariate and adjusted random-effects logistic regressions, with potential citation path as unit of analysis.

Results: Thirty-six articles were identified in this network, consisting of 570 potential citation paths of which 191 (34%) were realized. There was strong evidence that articles with at least one author in common cited each other more often than articles that had no common authors (odds ratio (OR) 5.2, 95% confidence interval (CI) 3.1-8.8). Similarly, the chance of being cited was higher for articles that were empirical rather than narrative (OR 4.2, CI 2.6-6.7), that reported a large sample size (OR 5.8, CI 2.9-11.6), and that were written by authors with a high authority within the network (OR 4.1, CI 2.1-8.0). Further, there was some evidence for citation bias: articles that confirmed the relation between swimming and asthma were cited more often (OR 1.8, CI 1.1-2.9), but this finding was not robust.

Conclusions: There is clear evidence of selective citation in this research field, but the evidence for citation bias is not very strong.
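The unit of analysis here, the "potential citation path", is worth unpacking: every ordered pair in which one article could in principle have cited an earlier one counts as a single observation, scored as realized or not. Below is a toy sketch of that bookkeeping with invented data; the random-effects logistic regressions the authors fitted on these paths are not reproduced here.

```python
# Toy illustration of "potential citation paths" (all data invented): every
# ordered pair (citing article, earlier article) is one observation, scored 1
# if the citation was actually made.
from itertools import permutations

import pandas as pd

articles = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "year": [2005, 2008, 2012, 2015],
    "confirms_link": [1, 0, 1, 0],              # hypothetical study outcome
})
actual_citations = {(3, 1), (4, 1), (4, 3)}      # (citing id, cited id)

paths = pd.DataFrame([
    {
        "citing": a.id,
        "cited": b.id,
        "cited_confirms_link": b.confirms_link,
        "realized": int((a.id, b.id) in actual_citations),
    }
    for a, b in permutations(articles.itertuples(index=False), 2)
    if a.year > b.year                           # only earlier work can be cited
])

print(paths)
print("share of realized paths:", paths["realized"].mean())  # 3 of 6 = 0.5
```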

Citations: 5
Using democracy to award research funding: an observational study.
Q1 ETHICS Pub Date : 2017-09-15 eCollection Date: 2017-01-01 DOI: 10.1186/s41073-017-0040-0
Adrian G Barnett, Philip Clarke, Cedryck Vaquette, Nicholas Graves

Background: Winning funding for health and medical research usually involves a lengthy application process. With success rates under 20%, much of the time spent by 80% of applicants could have been better used on actual research. An alternative funding system that could save time is using democracy to award the most deserving researchers based on votes from the research community. We aimed to pilot how such a system could work and examine some potential biases.

Methods: We used an online survey with a convenience sample of Australian researchers. Researchers were asked to name the 10 scientists currently working in Australia that they thought most deserved funding for future research. For comparison, we used recent winners from large national fellowship schemes that used traditional peer review.

Results: Voting took a median of 5 min (inter-quartile range 3 to 10 min). Extrapolating to a national voting scheme, we estimate 599 working days of voting time (95% CI 490 to 728), compared with 827 working days for the current peer review system for fellowships. The gender ratio in the votes was a more equal 45:55 (female to male) compared with 34:66 in recent fellowship winners, although this could be explained by Simpson's paradox. Voters were biased towards their own institution, with an additional 1.6 votes per ballot (inter-quartile range 0.8 to 2.2) above the expected number. Respondents raised many concerns about the idea of using democracy to fund research, including vote rigging, lobbying and it becoming a popularity contest.

Conclusions: This is a preliminary study of using voting that does not investigate many of the concerns about how a voting system would work. We were able to show that voting would take less time than traditional peer review and would spread the workload over many more reviewers. Further studies of alternative funding systems are needed as well as a wide discussion with the research community about potential changes.
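The Results above note that the more balanced 45:55 vote ratio "could be explained by Simpson's paradox", i.e. by aggregation over fields with different gender mixes rather than by any within-field difference. A toy illustration of that aggregation effect with entirely invented figures:

```python
# Invented numbers, only to illustrate the Simpson's paradox caveat above:
# within each field women receive the same share of votes and of fellowships,
# yet the pooled ratios differ because the two samples weight the fields
# differently.
fields = {
    # field: (share of women, total votes cast, total fellowships awarded)
    "public health": (0.60, 80, 20),
    "physics": (0.20, 20, 80),
}

def pooled_female_share(column):
    """Aggregate female share across fields; column 0 = votes, 1 = fellowships."""
    women = sum(share * counts[column] for share, *counts in fields.values())
    total = sum(counts[column] for _, *counts in fields.values())
    return women / total

print(f"votes:       {pooled_female_share(0):.0%} women")  # 52%
print(f"fellowships: {pooled_female_share(1):.0%} women")  # 28%
```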

Citations: 0