Systematic review and meta-analysis of quotation inaccuracy in medicine
Pub Date: 2025-07-23 | DOI: 10.1186/s41073-025-00173-z
Christopher Baethge, Hannah Jergas
Background: Quotations are crucial to science but have repeatedly been shown to be inaccurate. Quotation errors, that is, references that do not support the authors' claims, may still be a significant issue in scientific medical writing. This study aimed to examine the quotation error rate and its trend over time in the medical literature.
Methods: A systematic search of PubMed, Web of Science, and reference lists for studies of quotation errors in medicine, without date or language restrictions, identified 46 studies analyzing 32,000 quotations/references. Literature search, data extraction, and risk of bias assessments were performed independently by two raters. Random-effects meta-analyses and meta-regression were used to analyze error rates and trends (protocol pre-registered on OSF).
Results: 16.9% (95% CI: 14.1%-20.0%) of quotations were incorrect, with approximately half classified as major errors (8.0% [95% CI: 6.4%-10.0%]). Heterogeneity was high, and Egger's test for small-study effects remained negative throughout. Meta-regression showed no significant improvement in quotation accuracy over recent years (slope: -0.002 [95% CI: -0.03 to 0.02], p = 0.85). Neither risk of bias nor the number of references was statistically significantly associated with the total error rate, but journal impact factor was (Spearman's ρ = -0.253; p = 0.043, binomial test; N = 25).
Conclusions: Quotation errors remain a problem in the medical literature, with no improvement over time. Addressing this issue requires concerted efforts to improve scholarly practices and editorial processes.
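For readers unfamiliar with the pooling step, here is a minimal sketch of a DerSimonian-Laird random-effects meta-analysis of proportions on the logit scale; the per-study counts are invented for illustration and are not the review's data.

```python
# A minimal sketch, assuming hypothetical per-study counts (k errors out of n quotations).
import numpy as np

def pooled_proportion(k, n):
    """DerSimonian-Laird random-effects pooling of logit-transformed proportions."""
    k, n = np.asarray(k, float), np.asarray(n, float)
    p = k / n
    y = np.log(p / (1 - p))                  # logit effect sizes
    v = 1 / k + 1 / (n - k)                  # approximate within-study variances
    w = 1 / v                                # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-study variance
    w_re = 1 / (v + tau2)                    # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    expit = lambda x: 1 / (1 + np.exp(-x))   # back-transform to a proportion
    return expit(y_re), expit(y_re - 1.96 * se), expit(y_re + 1.96 * se)

# Made-up studies: 12/80, 30/150, 9/60 erroneous quotations.
est, lo, hi = pooled_proportion([12, 30, 9], [80, 150, 60])
print(f"pooled error rate {est:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```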
Evaluating psychiatry journals' adherence to informed consent guidelines for case reports
Pub Date: 2025-07-18 | DOI: 10.1186/s41073-025-00171-1
Ashley J Tsang, John Z Sadler, E Sherwood Brown, Elizabeth Heitman
Background: Case reports are valuable tools that illustrate and analyze practical scenarios, novel problems, and the effectiveness of interventions. In psychiatry they often explore unique and potentially stigmatizing aspects of mental health, underscoring the importance of confidentiality and informed consent. However, journals' guidance on consent and confidentiality for case reports varies. In 2013, an international expert group developed the CAse REports (CARE) Guidelines for best practices in case reports, which include guidelines for informed consent and de-identification. In 2016, the Committee on Publication Ethics (COPE) issued ethical standards for publishing case reports, calling for written informed consent from featured patients.
Methods: Using a cross-sectional approach, we assessed the instructions for authors of 253 indexed psychiatry journals, of which 129 had published English-language case reports in the prior five years. Our research identified and evaluated journals' use of COPE and CARE guidelines on informed consent and de-identification in case reports.
Results: Among these 129 journals, 84 (65%) referred to COPE guidelines, and 59 (46%) referenced CARE guidelines. Furthermore, 46 (36%) required informed consent without de-identification, 7 (5%) required only de-identification, and 21 (16%) required both, specifying consent for identifying information. Notably, 40 (31%) lacked informed consent instructions. Of the 82 journals that required informed consent, 69 (85%) required documentation of consent.
Conclusion: A decade after the publication of expert guidance, psychiatry journals remain inconsistent in their adherence to ethical guidelines for informed consent in case reports. More attention to clear instructions from journals on informed consent, a notable topic across different fields, would provide an important educational message about both publication ethics and fundamental respect for patients' confidentiality.
Misidentified cell lines: failures of peer review, varying journal responses to misidentification inquiries, and strategies for safeguarding biomedical research
Pub Date: 2025-07-11 | DOI: 10.1186/s41073-025-00170-2
Ralf Weiskirchen
Background: Continuous cell lines are indispensable in basic and preclinical research. However, cross-contamination, misidentification, and over-passaging affect the validity and reproducibility of biomedical results. Although there have been efforts to highlight this problem for decades, definitive prevention remains a challenge. The International Cell Line Authentication Committee (ICLAC) registry (version 13, 26 April 2024) lists nearly 600 misidentified or contaminated cell lines. The inappropriate use of such cells has led to countless publications containing invalid data, creating a ripple effect of wasted resources, misleading follow-up studies, and compromised evidence-based conclusions.
Methods: The ICLAC registry was consulted to identify commonly misidentified cell lines. A PubMed literature search was performed to identify recent papers using these lines in liver-related experiments. Four publications with questionable conclusions were highlighted, and the editors of the respective journals were notified through short comments or letters to the editor.
Results: Reactions from journal editors varied widely. In two cases, the editors quickly published the comments, resulting in transparent corrections. In the third example, the editor conducted an internal investigation without immediately publishing a correction. In the fourth example, the journal declined to address concerns publicly.
Conclusions: Misidentified cell lines pose an ongoing threat to scientific rigor. Despite some responsible editorial interventions, the lack of universal standards fosters the dissemination of erroneous data. However, authors, reviewers, and editors have important tools to prevent publications based on misidentified cells: consulting available resources (e.g., ICLAC, Cellosaurus, Research Resource Identification Portal, SciScore™) and adopting consistent authentication procedures to maintain research integrity.
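As an illustration of the screening workflow, here is a minimal sketch of checking a manuscript's cell-line names against a locally downloaded copy of the ICLAC register; the file name and column name are assumptions, not ICLAC's actual schema.

```python
# A minimal sketch, assuming the ICLAC register was saved as "iclac_register.csv"
# with a column holding the misidentified line's name (column name is an assumption).
import csv

def load_register(path="iclac_register.csv", column="Cell line name"):
    # Normalize case and hyphens so "BEL-7402" and "BEL7402" compare equal.
    with open(path, newline="", encoding="utf-8") as fh:
        return {row[column].strip().upper().replace("-", "")
                for row in csv.DictReader(fh)}

def screen(cell_lines, register):
    # Robust matching should also consult Cellosaurus synonyms; this only
    # flags exact normalized name matches against the register.
    return [name for name in cell_lines
            if name.strip().upper().replace("-", "") in register]

register = load_register()
# Flags any manuscript cell line that appears in the register.
print(screen(["HepG2", "BEL-7402", "L-02"], register))
```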
Institutional animal care and use committees and the challenges of evaluating animal research proposals
Pub Date: 2025-07-04 | DOI: 10.1186/s41073-025-00169-9
John J Pippin, Jarrod Bailey, Mark Kennedy, Deborah Dubow Press, Janine McCarthy, Ron Baron, Stephen Farghali, Elizabeth Baker, Neal D Barnard
Background: In the U.S. and many other countries, animal use in research, testing, and education is under the purview of Institutional Animal Care and Use Committees or similar bodies. Their responsibility for reviewing proposed experiments, particularly with regard to adherence to legal and ethical mandates, can be a challenging task.
Objective: To understand factors that may limit the effectiveness of Institutional Animal Care and Use Committees and identify possible solutions.
Methods: This editorial review summarizes scientific literature describing the challenges faced by U.S. Institutional Animal Care and Use Committees and those who rely on them and describes actions that may improve their functioning.
Results: Apart from what may be a sizable workload and the need to satisfy applicable regulations, committees face fundamental structural challenges and limitations. Under U.S. law, there is no requirement that committee members have expertise in the research areas under review or in methods that could replace animal use, nor could expertise in such vast technical areas be expected. This contrasts with the review process of many scientific journals, in which experts in the conditions being studied critique the choice of subjects and methods. Although investigators are expected to consider alternatives to procedures that may cause more than momentary or slight pain or distress, they are not required to use them. While investigators must assure committee members that studies do not duplicate other research, committee members are not required to verify this. Consideration of alternatives to painful procedures is not required at all for experiments on animals not covered by the Animal Welfare Act. The majority of U.S. research institutions now allow research proposals to be approved by a single committee member, using a system called Designated Member Review, without full committee consideration. Requirements in other countries differ considerably: in the European Union, for example, investigators must complete a harm-benefit analysis and must use alternatives, not simply consider them.
Conclusions: The review process may be improved by requiring searches for nonanimal methods regardless of species, favoring alternatives based on human biology, improving the education of committee members and investigators, using reviewers with subject matter expertise, and minimizing conflicts of interest. Because of the limitations of the review process, funding institutions and scientific journals should not use Institutional Animal Care and Use Committee approval of submissions as evidence of adherence to ethical guidelines beyond those legally required.
Gaps in the Ottawa Statement on the Ethical Design and Conduct of Cluster Randomized Trials: a citation analysis reveals a need for updated ethics guidelines
Pub Date: 2025-06-18 | DOI: 10.1186/s41073-025-00166-y
Cory E Goldstein, Jessica du Toit, Nicholas B Murphy, Stuart G Nicholls, Julia F Shaw, Fernando Althabe, Ariella Binik, Jamie Brehaut, Sandra Eldridge, Rashida A Ferrand, Katie Gillies, Bruno Giraudeau, Rieke van der Graaf, Lars G Hemkens, Karla Hemming, Mira Johri, Scott Y H Kim, Emily Largent, Alex John London, Lawrence Mbuagbaw, Susan L Mitchell, Maureen Smith, Peter Tugwell, Shaun Treweek, Vivian A Welch, Monica Taljaard, Charles Weijer
Background: Although commonly used to evaluate health interventions, cluster randomized trials raise difficult ethical issues. Recognizing this, the Ottawa Statement on the Ethical Design and Conduct of Cluster Randomized Trials, published in 2012, provides 15 recommendations to address ethical issues across seven domains. But due to several developments in the design and implementation of cluster randomized trials, there are new issues requiring guidance. To inform the forthcoming update of the Ottawa Statement, we aimed to identify any gaps in the Ottawa Statement discussed within the literature.
Methods: We searched Google Scholar, Scopus, and Web of Science using the 'cited by' function on 11 November 2022. We included all types of publications, including articles, book chapters, commentaries, editorials, ethics guidelines, theses, and trial-related publications (i.e., primary reports, protocols, and secondary analyses), that cited and engaged with the Ottawa Statement, the Ottawa Statement précis, or one or more of its four background papers. Data were extracted by four reviewers working in rotating pairs. Reviewers captured relevant text verbatim and recorded whether it reflected a gap relating to one or more of the Ottawa Statement domains. Using a thematic analysis approach, semantic coding was used to summarize the data into distinct gaps within the Ottawa Statement domains; these were subsequently expanded inductively through discussion.
Results: The qualitative analysis of the text from 53 articles identified 24 distinct gaps in the Ottawa Statement: 4 gaps about justifying the cluster randomized design; 2 about research ethics committee review; 3 about identifying research participants; 4 about obtaining informed consent; 3 about gatekeepers; 6 about assessing benefits and harms; 1 about protecting vulnerable participants; and 1 about equity-related issues in cluster randomized trials.
Conclusion: Identifying 24 gaps reveals a need to update the Ottawa Statement. Alongside additional gaps identified in ongoing empirical work and through engagement with our patient and public partners, the gaps identified through this citation analysis should be considered in the forthcoming Ottawa Statement update.
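To make the tallying step concrete, here is a minimal sketch of aggregating coded excerpts into distinct gaps per domain, assuming the coding was exported to a CSV; the file and column names are illustrative, not the authors' actual pipeline.

```python
# A minimal sketch, assuming a hypothetical "coded_excerpts.csv" with columns
# article_id, domain, gap_label (names are assumptions for illustration).
import csv
from collections import defaultdict

gaps_by_domain = defaultdict(set)
with open("coded_excerpts.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        # A gap counts once per domain, however many excerpts mention it.
        gaps_by_domain[row["domain"]].add(row["gap_label"])

for domain, gaps in sorted(gaps_by_domain.items()):
    print(f"{domain}: {len(gaps)} distinct gaps")
print("total:", sum(len(g) for g in gaps_by_domain.values()))
```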
Predicting retracted research: a dataset and machine learning approaches
Pub Date: 2025-06-11 | DOI: 10.1186/s41073-025-00168-w
Aaron H A Fletcher, Mark Stevenson
Background: Retractions undermine the scientific record's reliability and can lead to the continued propagation of flawed research. This study aimed to (1) create a dataset aggregating retraction information with bibliographic metadata, (2) train and evaluate various machine learning approaches to predict article retractions, and (3) assess each feature's contribution to feature-based classifier performance using ablation studies.
Methods: An open-access dataset was developed by combining information from the Retraction Watch database and the OpenAlex API. Using a case-control design, retracted research articles were paired with non-retracted articles published in the same period. Traditional feature-based classifiers and models leveraging contextual language representations were then trained and evaluated. Model performance was assessed using accuracy, precision, recall, and the F1-score.
Results: The Llama 3.2 base model achieved the highest overall accuracy. The Random Forest classifier achieved a precision of 0.687 for identifying non-retracted articles, while the Llama 3.2 base model reached a precision of 0.683 for identifying retracted articles. Traditional feature-based classifiers generally outperformed most contextual language models, except for the Llama 3.2 base model, which showed competitive performance across several metrics.
Conclusions: Although no single model excelled across all metrics, our findings indicate that machine learning techniques can effectively support the identification of retracted research. These results provide a foundation for developing automated tools to assist publishers and reviewers in detecting potentially problematic publications. Further research should focus on refining these models and investigating additional features to improve predictive performance.
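As a concrete illustration of the feature-based baseline described above, here is a minimal sketch using scikit-learn's RandomForestClassifier; the features and labels are synthetic stand-ins, not the paper's Retraction Watch/OpenAlex dataset.

```python
# A minimal sketch with synthetic stand-in features (citation count, reference
# count, author count, year); the study's actual features come from its
# aggregated bibliographic metadata.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.poisson(20, n),           # citation count
    rng.poisson(35, n),           # reference count
    rng.integers(1, 12, n),       # author count
    rng.integers(2000, 2024, n),  # publication year
])
y = rng.integers(0, 2, n)  # 1 = retracted, 0 = matched non-retracted control

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
# Reports per-class precision, recall, and F1, as in the paper's evaluation.
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["kept", "retracted"]))
```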
False authorship: an explorative case study around an AI-generated article published under my name
Pub Date: 2025-05-27 | DOI: 10.1186/s41073-025-00165-z
Diomidis Spinellis
Background: The proliferation of generative artificial intelligence (AI) has facilitated the creation and publication of fraudulent scientific articles, often in predatory journals. This study investigates the extent of AI-generated content in the Global International Journal of Innovative Research (GIJIR), where a fabricated article was falsely attributed to me.
Methods: The entire GIJIR website was crawled to collect article PDFs and metadata. Automated scripts were used to extract the number of probable in-text citations, DOIs, affiliations, and contact emails. A heuristic based on the number of in-text citations was employed to estimate the probability of AI-generated content. A subset of articles was manually reviewed for AI indicators such as formulaic writing and missing empirical data. Turnitin's AI detection tool was used as an additional indicator. The extracted data were compiled into a structured dataset, which was analyzed to compare human-authored and AI-generated articles.
Results: Of the 53 examined articles with the fewest in-text citations, at least 48 appeared to be AI-generated, while five showed signs of human involvement. Turnitin's AI detection scores confirmed high probabilities of AI-generated content in most cases, with scores reaching 100% for multiple papers. The analysis also revealed fraudulent authorship attribution, with AI-generated articles falsely assigned to researchers from prestigious institutions. The journal appears to use AI-generated content both to inflate its standing through misattributed papers and to attract authors aiming to inflate their publication record.
Conclusions: The findings highlight the risks posed by AI-generated and misattributed research articles, which threaten the credibility of academic publishing. Ways to mitigate these issues include strengthening identity verification mechanisms for DOIs and ORCIDs, enhancing AI detection methods, and reforming research assessment practices. Without effective countermeasures, the unchecked growth of AI-generated content in scientific literature could severely undermine trust in scholarly communication.
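A minimal sketch of the kind of in-text-citation heuristic described in the Methods follows; the regex patterns and the threshold are assumptions for illustration, not the paper's exact rules.

```python
# A minimal sketch: count probable citation markers in extracted article text
# and flag sparsely cited articles as candidates for AI generation.
import re

# Matches numeric brackets like [1], [1, 2], [3-5] and author-year
# parentheticals like (Smith et al., 2020); patterns are assumptions.
CITATION = re.compile(
    r"\[(\d{1,3})(?:\s*[-,]\s*\d{1,3})*\]"
    r"|\(([A-Z][A-Za-z'-]+(?: et al\.)?,? \d{4})\)")

def probable_citations(text: str) -> int:
    return len(CITATION.findall(text))

def flag_if_suspicious(text: str, threshold: int = 3) -> bool:
    # Genuine research articles rarely cite this sparsely in running text;
    # the threshold here is an illustrative choice.
    return probable_citations(text) < threshold

sample = "Prior work [1, 2] disagrees (Smith et al., 2020), but our model..."
print(probable_citations(sample), flag_if_suspicious(sample))
```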
Research on policy mechanisms to address funding bias and conflicts of interest in biomedical research: a scoping review
Pub Date: 2025-05-14 | DOI: 10.1186/s41073-025-00164-0
S Scott Graham, Quinn Grundy, Nandini Sharma, Jade Shiva Edward, Joshua B Barbour, Justin F Rousseau, Zoltan P Majdik, Lisa Bero
Background: Industry funding and author conflicts of interest (COI) have been consistently shown to introduce bias into agenda-setting and results-reporting in biomedical research. Accordingly, maintaining public trust, diminishing patient harm, and securing the integrity of the biomedical research enterprise are critical policy priorities. In this context, a coordinated and methodical research effort is required to identify which policy interventions are most likely to mitigate the risks of funding bias. This scoping review therefore aims to identify and synthesize the available research on policy mechanisms designed to address funding bias and COI in biomedical research.
Methods: We searched PubMed for peer-reviewed, empirical analyses of policy mechanisms designed to address industry sponsorship of research studies, author industry affiliation, and author COI at any stage of the biomedical research process, published between January 2009 and 28 August 2023. The review identified literature conducting five primary analysis types: (1) surveys of COI policies, (2) disclosure compliance analyses, (3) disclosure concordance analyses, (4) COI policy effects analyses, and (5) studies of policy perceptions and contexts.
Results: In total, 6,385 articles were screened, and 81 studies were included. Studies were conducted in 11 geographic regions, with studies of international scope being the most common. Most available research is devoted to evaluating the prevalence, nature, and effects of author COI disclosure policies. This evidence demonstrates that while disclosure policies are pervasive, they are not consistently designed, implemented, or enforced. The available evidence also indicates that COI disclosure policies are not particularly effective in mitigating risk of bias or subsequent negative externalities.
Conclusions: The results of this review indicate that the COI policy landscape could benefit from a significant shift in the research agenda. The available literature predominantly focuses on a single policy intervention: author disclosure requirements. As a result, new lines of research are needed to establish a more robust evidence-based policy landscape. There is a particular need for implementation research, greater attention to the structural conditions that create COI, and evaluation of policy mechanisms other than disclosure.
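For context on the Methods, here is a minimal sketch of a date-restricted PubMed query using Biopython's Entrez module; the search term is illustrative and not the review's registered search strategy.

```python
# A minimal sketch of a date-restricted PubMed search via NCBI E-utilities;
# the term below is an illustrative stand-in, not the review's actual query.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address

handle = Entrez.esearch(
    db="pubmed",
    term='"conflict of interest"[Title/Abstract] AND policy[Title/Abstract]',
    datetype="pdat",            # restrict by publication date
    mindate="2009/01/01",
    maxdate="2023/08/28",
    retmax=100,
)
record = Entrez.read(handle)
handle.close()
print(record["Count"], record["IdList"][:5])  # hit count and first PMIDs
```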
Correction: Raising concerns on questionable ethics approvals - a case study of 456 trials from the Institut Hospitalo-Universitaire Méditerranée Infection
Pub Date: 2025-05-09 | DOI: 10.1186/s41073-025-00162-2
Fabrice Frank, Nans Florens, Gideon Meyerowitz-Katz, Jerome Barriere, Eric Billy, Veronique Saada, Alexander Samuel, Jacques Robert, Lonni Besancon
From 2015 to 2023, eight years of empirical research on research integrity: a scoping review
Pub Date: 2025-04-30 | DOI: 10.1186/s41073-025-00163-1
Baptiste Vendé, Anouk Barberousse, Stéphanie Ruphy
Background: Research on research integrity (RI) has grown exponentially over the past several decades. Although the earliest publications emerged in the 1980s, more than half of the existing literature has been produced within the last five years. Given that the most recent comprehensive literature review is now eight years old, the present study aims to extend and update previous findings.
Method: We conducted a systematic search of the Web of Science and Constellate databases for articles published between 2015 and 2023. To structure our overview and guide our inquiry, we addressed seven broad questions about the field: What topics does the empirical literature on RI explore? What are its primary objectives? What methodologies are prevalent? What populations or organizations are studied? Where are the empirical studies conducted? Where is the literature published? And to what degree is the general literature on RI grounded in empirical research? Additionally, we used the previous scoping review as a benchmark to identify emerging trends and shifts.
Results: Our search yielded a total of 3,282 studies, of which 660 articles met our inclusion criteria. All research questions were comprehensively addressed. Notably, we observed a significant shift in methodologies: the reliance on interviews and surveys decreased from 51 to 30%, whereas the application of meta-scientific methods increased from 17 to 31%. In terms of theoretical orientation, the previously dominant "Bad Apple" hypothesis declined from 54 to 30%, while the "Wicked System" hypothesis increased from 46 to 52%. Furthermore, there has been a pronounced trend toward testing solutions, rising from 31 to 56% at the expense of merely describing the problem, which fell from 69 to 44%.
Conclusion: Three gaps highlighted eight years ago by the previous scoping review remain unresolved. Research on decision makers (e.g., scientists in positions of power and policymakers; 3% of studies), the private research sector and patents (4.7%), and the peer review system (0.3%) continues to be underexplored. Even more concerning, if current trends persist, these gaps are likely to become increasingly problematic.