Research integrity and peer review: latest publications

Systematic review and meta-analysis of quotation inaccuracy in medicine.
IF 10.7 Q1 ETHICS Pub Date: 2025-07-23 DOI: 10.1186/s41073-025-00173-z
Christopher Baethge, Hannah Jergas

Background: Quotations are crucial to science but have been shown to be often inaccurate. Quotation errors, that is, a reference not supporting the authors' claim, may still be a significant issue in scientific medical writing. This study aimed to examine the quotation error rate and trends over time in the medical literature.

Methods: A systematic search of PubMed, Web of Science, and reference lists for quotation error studies in medicine and without date or language restrictions identified 46 studies analyzing 32,000 quotations/references. Literature search, data extraction, and risk of bias assessments were performed independently by two raters. Random-effects meta-analyses and meta-regression were used to analyze error rates and trends (protocol pre-registered on OSF).

Results: 16.9% (95% CI: 14.1%-20.0%) of quotations were incorrect, with approximately half classified as major errors (8.0% [95% CI: 6.4%-10.0%]). Heterogeneity was high, and Egger's test for small study effects remained negative throughout. Meta-regression showed no significant improvement in quotation accuracy over recent years (slope: -0.002 [95% CI: -0.03 to 0.02], p = 0.85). Neither risk of bias nor the number of references was statistically significantly associated with the total error rate, but journal impact factor was: Spearman's ρ = -0.253 (p = 0.043, binomial test, N = 25).

Conclusions: Quotation errors remain a problem in the medical literature, with no improvement over time. Addressing this issue requires concerted efforts to improve scholarly practices and editorial processes.
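The pooled estimate above comes from random-effects meta-analysis of proportions. A minimal sketch of the standard DerSimonian-Laird approach on logit-transformed proportions; the function name and all study counts below are illustrative, not the authors' data:

```python
import math

def dersimonian_laird(events, totals):
    """Pool study-level proportions on the logit scale with a
    DerSimonian-Laird random-effects model (illustrative sketch)."""
    # Logit-transformed proportions and their approximate variances
    yi = [math.log(e / (n - e)) for e, n in zip(events, totals)]
    vi = [1 / e + 1 / (n - e) for e, n in zip(events, totals)]

    # Fixed-effect weights and Cochran's Q statistic
    wi = [1 / v for v in vi]
    y_fixed = sum(w * y for w, y in zip(wi, yi)) / sum(wi)
    q = sum(w * (y - y_fixed) ** 2 for w, y in zip(wi, yi))

    # DL moment estimate of the between-study variance tau^2
    df = len(yi) - 1
    c = sum(wi) - sum(w ** 2 for w in wi) / sum(wi)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights, pooled logit, and 95% CI back on the
    # proportion scale
    wre = [1 / (v + tau2) for v in vi]
    mu = sum(w * y for w, y in zip(wre, yi)) / sum(wre)
    se = math.sqrt(1 / sum(wre))
    expit = lambda x: 1 / (1 + math.exp(-x))
    return expit(mu), expit(mu - 1.96 * se), expit(mu + 1.96 * se)

# Hypothetical studies: (quotation errors found, quotations checked)
events = [30, 45, 12, 60]
totals = [200, 250, 90, 400]
pooled, lo, hi = dersimonian_laird(events, totals)
print(f"pooled error rate {pooled:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

The logit transform keeps the pooled rate and its confidence limits inside (0, 1); the paper's meta-regression would add study year as a covariate on the same model.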

Research integrity and peer review 10(1): 13. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12285159/pdf/
Citations: 0
Evaluating psychiatry journals' adherence to informed consent guidelines for case reports.
IF 10.7 Q1 ETHICS Pub Date: 2025-07-18 DOI: 10.1186/s41073-025-00171-1
Ashley J Tsang, John Z Sadler, E Sherwood Brown, Elizabeth Heitman

Background: Case reports are valuable tools that illustrate and analyze practical scenarios, novel problems, and the effectiveness of interventions. In psychiatry they often explore unique and potentially stigmatizing aspects of mental health, underscoring the importance of confidentiality and informed consent. However, journals' guidance on consent and confidentiality for case reports varies. In 2013, an international expert group developed the CAse REports (CARE) Guidelines for best practices in case reports, which include guidelines for informed consent and de-identification. In 2016, the Committee on Publication Ethics (COPE) issued ethical standards for publishing case reports, calling for written informed consent from featured patients.

Methods: Using a cross-sectional approach, we assessed the instructions for authors of 253 indexed psychiatry journals, of which 129 had published English-language case reports in the prior five years. Our research identified and evaluated journals' use of COPE and CARE guidelines on informed consent and de-identification in case reports.

Results: Among these 129 journals, 84 (65%) referred to COPE guidelines, and 59 (46%) referenced CARE guidelines. Furthermore, 46 (36%) required informed consent without de-identification, 7 (5%) required only de-identification, and 21 (16%) required both, specifying consent for identifying information. Notably, 40 (31%) lacked informed consent instructions. Of the 82 journals that required informed consent, 69 (85%) required documentation of consent.

Conclusion: A decade after the publication of expert guidance, psychiatry journals remain inconsistent in their adherence to ethical guidelines for informed consent in case reports. More attention to clear instructions from journals on informed consent, a notable topic across different fields, would provide an important educational message about both publication ethics and fundamental respect for patients' confidentiality.

Research integrity and peer review 10(1): 15. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12273215/pdf/
Citations: 0
Misidentified cell lines: failures of peer review, varying journal responses to misidentification inquiries, and strategies for safeguarding biomedical research.
IF 7.2 Q1 ETHICS Pub Date: 2025-07-11 DOI: 10.1186/s41073-025-00170-2
Ralf Weiskirchen

Background: Continuous cell lines are indispensable in basic and preclinical research. However, cross-contamination, misidentification, and over-passaging affect the validity and reproducibility of biomedical results. Although there have been efforts to highlight this problem for decades, definitive prevention remains a challenge. The International Cell Line Authentication Committee (ICLAC) registry (version 13, 26 April 2024) lists nearly 600 misidentified or contaminated cell lines. The inappropriate use of such cells has led to countless publications containing invalid data, creating a ripple effect of wasted resources, misleading follow-up studies, and compromised evidence-based conclusions.

Methods: The ICLAC registry was consulted to identify commonly misidentified cell lines. A literature search of PubMed was performed to identify recent papers using these lines in liver-related experiments. Four publications with questionable conclusions were highlighted, and the editors of the respective journals were informed with short comments or letters to the editor.

Results: Reactions from journal editors varied widely. In two cases, the editors quickly published the comments, resulting in transparent corrections. In the third example, the editor conducted an internal investigation without immediately publishing a correction. In the fourth example, the journal declined to address concerns publicly.

Conclusions: Misidentified cell lines pose an ongoing threat to scientific rigor. Despite some responsible editorial interventions, the lack of universal standards fosters the dissemination of erroneous data. However, authors, reviewers, and editors have some important tools to prevent publications with misidentified cells by consulting available resources (e.g., ICLAC, Cellosaurus, Research Resource Identification Portal, SciScore™), and adopting consistent procedures to maintain research integrity.
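The pre-submission screening the conclusions call for can be approximated with a simple lookup against a registry of known misidentified lines. A hedged sketch: the local excerpt, normalization rule, and function names below are illustrative only; real checks should consult the current ICLAC register or Cellosaurus directly.

```python
# Hypothetical local excerpt of misidentified lines (HEp-2, INT 407,
# and Chang Liver are well-known HeLa contaminants); keys are
# normalized names, values name the actual contaminating line.
MISIDENTIFIED = {
    "hep2": "HeLa",
    "int407": "HeLa",
    "changliver": "HeLa",
}

def normalize(name):
    """Collapse case, spaces, and punctuation so 'HEp-2' matches 'Hep 2'."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def check_cell_line(name):
    """Return the known contaminant for a misidentified line, else None."""
    return MISIDENTIFIED.get(normalize(name))

print(check_cell_line("HEp-2"))   # flagged as a contaminated line
print(check_cell_line("HepG2"))   # not in this excerpt
```

Name normalization matters because the same line appears in papers as "HEp-2", "Hep2", or "HEp 2"; a naive exact-string lookup would miss most occurrences.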

Research integrity and peer review 10(1): 12. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12247328/pdf/
Citations: 0
Institutional animal care and use committees and the challenges of evaluating animal research proposals.
IF 7.2 Q1 ETHICS Pub Date: 2025-07-04 DOI: 10.1186/s41073-025-00169-9
John J Pippin, Jarrod Bailey, Mark Kennedy, Deborah Dubow Press, Janine McCarthy, Ron Baron, Stephen Farghali, Elizabeth Baker, Neal D Barnard

Background: In the U.S. and many other countries, animal use in research, testing, and education is under the purview of Institutional Animal Care and Use Committees or similar bodies. Their responsibility for reviewing proposed experiments, particularly with regard to adherence to legal and ethical mandates, can be a challenging task.

Objective: To understand factors that may limit the effectiveness of Institutional Animal Care and Use Committees and identify possible solutions.

Methods: This editorial review summarizes scientific literature describing the challenges faced by U.S. Institutional Animal Care and Use Committees and those who rely on them and describes actions that may improve their functioning.

Results: Apart from what may be a sizable workload and the need to satisfy applicable regulations, committees have fundamental structural challenges and limitations. Under U.S. law, there is no requirement that committee members have expertise in the research areas under review or in methods that could replace animal use, nor could expertise in such vast technical areas be expected, in contrast with the review process of many scientific journals in which experts in the conditions being studied critique the choice of subjects and methods used. Although investigators are expected to consider alternatives to procedures that may cause more than momentary or slight pain or distress, they are not required to use them. While investigators must assure committee members that studies do not duplicate other research, committee members are not required to verify this. Consideration of alternatives to painful procedures is not required at all for experiments on animals not covered by the Animal Welfare Act. The majority of U.S. research institutions now allow research proposals to be approved by a single committee member, using a system called Designated Member Review, without full committee consideration. In other countries, requirements differ considerably. In the European Union, for example, investigators must complete a harm-benefit analysis and must use alternatives, not simply consider them.

Conclusions: The review process may be improved by requiring searches for nonanimal methods regardless of species, favoring alternatives based on human biology, improving the education of committee members and investigators, using reviewers with subject matter expertise, and minimizing conflicts of interest. Because of the limitations of the review process, funding institutions and scientific journals should not use Institutional Animal Care and Use Committee approval of submissions as evidence of adherence to ethical guidelines beyond those legally required.

Research integrity and peer review 10(1): 11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12231287/pdf/
Citations: 0
Gaps in the Ottawa Statement on the Ethical Design and Conduct of Cluster Randomized Trials: a citation analysis reveals a need for updated ethics guidelines.
IF 7.2 Q1 ETHICS Pub Date: 2025-06-18 DOI: 10.1186/s41073-025-00166-y
Cory E Goldstein, Jessica du Toit, Nicholas B Murphy, Stuart G Nicholls, Julia F Shaw, Fernando Althabe, Ariella Binik, Jamie Brehaut, Sandra Eldridge, Rashida A Ferrand, Katie Gillies, Bruno Giraudeau, Rieke van der Graaf, Lars G Hemkens, Karla Hemming, Mira Johri, Scott Y H Kim, Emily Largent, Alex John London, Lawrence Mbuagbaw, Susan L Mitchell, Maureen Smith, Peter Tugwell, Shaun Treweek, Vivian A Welch, Monica Taljaard, Charles Weijer

Background: Although commonly used to evaluate health interventions, cluster randomized trials raise difficult ethical issues. Recognizing this, the Ottawa Statement on the Ethical Design and Conduct of Cluster Randomized Trials, published in 2012, provides 15 recommendations to address ethical issues across seven domains. But due to several developments in the design and implementation of cluster randomized trials, there are new issues requiring guidance. To inform the forthcoming update of the Ottawa Statement, we aimed to identify any gaps in the Ottawa Statement discussed within the literature.

Methods: We searched Google Scholar, Scopus, and Web of Science using the 'cited by' function on 11 November 2022. We included all types of publications, including articles, book chapters, commentaries, editorials, ethics guidelines, theses, and trial-related publications (i.e., primary reports, protocols, and secondary analyses), that cited and engaged with the Ottawa Statement, the Ottawa Statement précis, or one or more of its four background papers. Data were extracted by four reviewers working in rotating pairs. Reviewers captured relevant text verbatim and recorded whether it reflected a gap relating to one or more of the Ottawa Statement domains. Using a thematic analysis approach, semantic coding was used to summarize the content of the data into distinct gaps within the Ottawa Statement domains, which was subsequently expanded in an inductive manner through discussion.

Results: The qualitative analysis of the text from 53 articles resulted in the identification of 24 distinct gaps in the Ottawa Statement: 4 gaps about justifying the cluster randomized design; 2 gaps about research ethics committee review; 3 gaps about identifying research participants; 4 gaps about obtaining informed consent; 3 gaps about gatekeepers; 6 gaps about assessing benefits and harms; 1 gap about protecting vulnerable participants; and 1 gap about equity-related issues in cluster randomized trials.

Conclusion: Identifying 24 gaps reveals a need to update the Ottawa Statement. Alongside additional gaps identified in ongoing empirical work and through engagement with our patient and public partners, the gaps identified through this citation analysis should be considered in the forthcoming Ottawa Statement update.
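The final coding step, summarizing coded extracts into distinct gaps per domain, amounts to a grouped tally. A minimal sketch; every code and domain label below is a hypothetical placeholder, not the study's actual codebook:

```python
from collections import defaultdict

# Hypothetical output of semantic coding: (gap code, Ottawa Statement
# domain) pairs, where each gap code names one distinct gap.
codes = [
    ("design-vulnerable-clusters", "justifying the design"),
    ("design-pilot-phases", "justifying the design"),
    ("consent-waivers", "informed consent"),
    ("consent-opt-out", "informed consent"),
    ("gatekeeper-authority", "gatekeepers"),
]

def gaps_per_domain(codes):
    """Group distinct gap codes under their domain and count them."""
    by_domain = defaultdict(set)
    for gap, domain in codes:
        by_domain[domain].add(gap)   # a set deduplicates repeat codings
    return {d: len(g) for d, g in by_domain.items()}

print(gaps_per_domain(codes))
```

Using a set per domain means a gap coded by both reviewers in a rotating pair is still counted once, which matches how the study reports "distinct" gaps.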

Research integrity and peer review 10(1): 10. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12175472/pdf/
Citations: 0
Predicting retracted research: a dataset and machine learning approaches.
IF 7.2 Q1 ETHICS Pub Date: 2025-06-11 DOI: 10.1186/s41073-025-00168-w
Aaron H A Fletcher, Mark Stevenson

Background: Retractions undermine the scientific record's reliability and can lead to the continued propagation of flawed research. This study aimed to (1) create a dataset aggregating retraction information with bibliographic metadata, (2) train and evaluate various machine learning approaches to predict article retractions, and (3) assess each feature's contribution to feature-based classifier performance using ablation studies.

Methods: An open-access dataset was developed by combining information from the Retraction Watch database and the OpenAlex API. Using a case-controlled design, retracted research articles were paired with non-retracted articles published in the same period. Traditional feature-based classifiers and models leveraging contextual language representations were then trained and evaluated. Model performance was assessed using accuracy, precision, recall, and the F1-score.
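The metrics named above — accuracy, precision, recall, and the F1-score — can be computed directly from paired gold/predicted labels. The sketch below is a minimal, self-contained illustration, not the authors' code; the function name `classification_metrics` and the label encoding (1 = retracted) are assumptions:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 for binary labels.

    `positive` marks the class of interest (here assumed 1 = retracted).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```

Under this reading, the reported precision of 0.683 for the retracted class would mean that 68.3% of the articles the Llama 3.2 base model flags as retracted actually are.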

Results: The Llama 3.2 base model achieved the highest overall accuracy. The Random Forest classifier achieved a precision of 0.687 for identifying non-retracted articles, while the Llama 3.2 base model reached a precision of 0.683 for identifying retracted articles. Traditional feature-based classifiers generally outperformed most contextual language models, except for the Llama 3.2 base model, which showed competitive performance across several metrics.

Conclusions: Although no single model excelled across all metrics, our findings indicate that machine learning techniques can effectively support the identification of retracted research. These results provide a foundation for developing automated tools to assist publishers and reviewers in detecting potentially problematic publications. Further research should focus on refining these models and investigating additional features to improve predictive performance.

Trial registration: Not applicable.

False authorship: an explorative case study around an AI-generated article published under my name.
IF 7.2 Q1 ETHICS Pub Date : 2025-05-27 DOI: 10.1186/s41073-025-00165-z
Diomidis Spinellis

Background: The proliferation of generative artificial intelligence (AI) has facilitated the creation and publication of fraudulent scientific articles, often in predatory journals. This study investigates the extent of AI-generated content in the Global International Journal of Innovative Research (GIJIR), where a fabricated article was falsely attributed to me.

Methods: The entire GIJIR website was crawled to collect article PDFs and metadata. Automated scripts were used to extract the number of probable in-text citations, DOIs, affiliations, and contact emails. A heuristic based on the number of in-text citations was employed to identify the probability of AI-generated content. A subset of articles was manually reviewed for AI indicators such as formulaic writing and missing empirical data. Turnitin's AI detection tool was used as an additional indicator. The extracted data were compiled into a structured dataset, which was analyzed to examine human-authored and AI-generated articles.
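To make the in-text-citation heuristic concrete, a minimal sketch might count numeric markers ("[12]", "[3,4]") and author-year markers ("(Smith, 2020)") and flag articles falling below a cutoff for manual review. The regular expressions and the threshold of 3 below are hypothetical illustrations, not the study's actual rules:

```python
import re

# Rough patterns for two common in-text citation styles (illustrative only):
# numeric brackets like [12] or [3,4], and author-year like (Smith, 2020).
NUMERIC = re.compile(r"\[\d+(?:\s*[,–-]\s*\d+)*\]")
AUTHOR_YEAR = re.compile(r"\([A-Z][A-Za-z'’-]+(?: et al\.)?,\s*(?:19|20)\d{2}\)")

def count_in_text_citations(text):
    """Count probable in-text citation markers in an article's body text."""
    return len(NUMERIC.findall(text)) + len(AUTHOR_YEAR.findall(text))

def probably_generated(text, threshold=3):
    """Flag articles with very few in-text citations for manual review."""
    return count_in_text_citations(text) < threshold
```

In the study itself, flagged articles were then reviewed manually and cross-checked with Turnitin's AI detection tool rather than classified automatically.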

Results: Of the 53 examined articles with the fewest in-text citations, at least 48 appeared to be AI-generated, while five showed signs of human involvement. Turnitin's AI detection scores confirmed high probabilities of AI-generated content in most cases, with scores reaching 100% for multiple papers. The analysis also revealed fraudulent authorship attribution, with AI-generated articles falsely assigned to researchers from prestigious institutions. The journal appears to use AI-generated content both to inflate its standing through misattributed papers and to attract authors aiming to inflate their publication record.

Conclusions: The findings highlight the risks posed by AI-generated and misattributed research articles, which threaten the credibility of academic publishing. Ways to mitigate these issues include strengthening identity verification mechanisms for DOIs and ORCIDs, enhancing AI detection methods, and reforming research assessment practices. Without effective countermeasures, the unchecked growth of AI-generated content in scientific literature could severely undermine trust in scholarly communication.

Research on policy mechanisms to address funding bias and conflicts of interest in biomedical research: a scoping review.
IF 10.7 Q1 ETHICS Pub Date : 2025-05-14 DOI: 10.1186/s41073-025-00164-0
S Scott Graham, Quinn Grundy, Nandini Sharma, Jade Shiva Edward, Joshua B Barbour, Justin F Rousseau, Zoltan P Majdik, Lisa Bero

Background: Industry funding and author conflicts of interest (COI) have consistently been shown to introduce bias into agenda-setting and results-reporting in biomedical research. Accordingly, maintaining public trust, diminishing patient harm, and securing the integrity of the biomedical research enterprise are critical policy priorities. In this context, a coordinated and methodical research effort is required to identify which policy interventions are most likely to mitigate the risks of funding bias. This scoping review therefore aims to identify and synthesize the available research on policy mechanisms designed to address funding bias and COI in biomedical research.

Methods: We searched PubMed for peer-reviewed, empirical analyses of policy mechanisms designed to address industry sponsorship of research studies, author industry affiliation, and author COI at any stage of the biomedical research process, published between January 2009 and 28 August 2023. The review identified five primary types of analysis in the literature: (1) surveys of COI policies, (2) disclosure compliance analyses, (3) disclosure concordance analyses, (4) COI policy effects analyses, and (5) studies of policy perceptions and contexts.

Results: A total of 6,385 articles were screened, and 81 studies were included.

Conclusions: The results of this review indicate that the COI policy landscape could benefit from a significant shift in the research agenda. The available literature predominantly focuses on a single policy intervention: author disclosure requirements. As a result, new lines of research are needed to establish a more robust evidence-based policy landscape. There is a particular need for implementation research, greater attention to the structural conditions that create COI, and evaluation of policy mechanisms other than disclosure.

Correction: Raising concerns on questionable ethics approvals - a case study of 456 trials from the Institut Hospitalo-Universitaire Méditerranée Infection.
IF 7.2 Q1 ETHICS Pub Date : 2025-05-09 DOI: 10.1186/s41073-025-00162-2
Fabrice Frank, Nans Florens, Gideon Meyerowitz-Katz, Jerome Barriere, Eric Billy, Veronique Saada, Alexander Samuel, Jacques Robert, Lonni Besancon
From 2015 to 2023, eight years of empirical research on research integrity: a scoping review.
IF 7.2 Q1 ETHICS Pub Date : 2025-04-30 DOI: 10.1186/s41073-025-00163-1
Baptiste Vendé, Anouk Barberousse, Stéphanie Ruphy

Background: Research on research integrity (RI) has grown exponentially over the past several decades. Although the earliest publications emerged in the 1980s, more than half of the existing literature has been produced within the last five years. Given that the most recent comprehensive literature review is now eight years old, the present study aims to extend and update previous findings.

Method: We conducted a systematic search of the Web of Science and Constellate databases for articles published between 2015 and 2023. To structure our overview and guide our inquiry, we addressed the following seven broad questions about the field: What topics does the empirical literature on RI explore? What are the primary objectives of the empirical literature on RI? What methodologies are prevalent in the empirical literature on RI? What populations or organizations are studied in the empirical literature on RI? Where are the empirical studies on RI conducted? Where is the empirical literature on RI published? To what degree is the general literature on RI grounded in empirical research? Additionally, we used the previous scoping review as a benchmark to identify emerging trends and shifts.

Results: Our search yielded a total of 3,282 studies, of which 660 articles met our inclusion criteria. All research questions were comprehensively addressed. Notably, we observed a significant shift in methodologies: the reliance on interviews and surveys decreased from 51% to 30%, whereas the application of meta-scientific methods increased from 17% to 31%. In terms of theoretical orientation, the previously dominant "Bad Apple" hypothesis declined from 54% to 30%, while the "Wicked System" hypothesis increased from 46% to 52%. Furthermore, there has been a pronounced trend toward testing solutions, rising from 31% to 56%, at the expense of merely describing the problem, which fell from 69% to 44%.
Conclusion: Three gaps highlighted eight years ago by the previous scoping review remain unresolved. Research on decision makers (e.g., scientists in positions of power, policymakers, accounting for 3%), the private research sector and patents (4.7%), and the peer review system (0.3%) continues to be underexplored. Even more concerning, if current trends persist, these gaps are likely to become increasingly problematic.
