Exploring ethical elements in reporting guidelines: results from a research-on-research study.
Pub Date: 2025-09-22 | DOI: 10.1186/s41073-025-00180-0
Clovis Mariano Faggion, Carla Brigitte Susan Kohl
Background: Reporting guidelines are key tools for enhancing the transparency and reproducibility of research. To support responsible reporting, such guidelines should also address ethical considerations. However, the extent to which these elements are integrated into reporting checklists remains unclear. This study aimed to evaluate how ethical elements are incorporated in these guidelines.
Methods: We identified reporting guidelines indexed on the "Enhancing the Quality and Transparency of Health Research (EQUATOR) Network" website. On 30 January 2025, a random sample of 128 reporting guidelines and extensions was drawn from a total of 657. For each, we retrieved the associated development publication and extracted data into a standardised table. The assessed ethical elements included conflict of interest (COI) disclosure, sponsorship, authorship criteria, data sharing guidance, protocol development, and study registration. Data extraction for the first 13 guidelines was conducted independently and in duplicate. After achieving 100% agreement, the remaining data were extracted by one author, following the "A MeaSurement Tool to Assess Systematic Reviews" (AMSTAR 2) recommendations.
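A minimal sketch of drawing such a reproducible random sample is shown below; the guideline identifiers and seed are placeholders and do not reproduce the authors' sampling procedure.

```python
import random

# Hypothetical identifiers standing in for the 657 EQUATOR-indexed guidelines.
guideline_ids = [f"guideline_{i:03d}" for i in range(1, 658)]

rng = random.Random(20250130)          # fixed seed for reproducibility
sample = rng.sample(guideline_ids, 128)  # simple random sample without replacement

print(len(sample))   # 128
print(sample[:5])    # first few sampled identifiers
```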
Results: The dataset comprised 101 original guidelines and 27 extensions of existing guidelines. Half of the included guidelines were published from 2015 onward, with 32.0% published between 2020 and 2024. The median year of publication was 2016. Approximately 90 of the 128 assessed guidelines focused on clinical studies. Over 70% of the guidelines did not include items related to COI or sponsorship. Only 8.6% addressed COI and sponsorship jointly in a single item, while fewer than 9% covered them as two separate items. Notably, only two guidelines (1.6%) provided instructions for using the International Committee of Medical Journal Editors (ICMJE) disclosure form to report potential COIs. Nearly 20% of the guidelines offered guidance on study registration. Fewer than 30% recommended the development of a research protocol, and only 18.8% provided guidance on protocol sharing. Additionally, fewer than 10% of the checklists included guidance on authorship criteria or data sharing.
Conclusion: Ethical considerations are insufficiently addressed in current reporting guidelines. The absence of standardised items on COIs, funding, authorship, and data sharing represents a missed opportunity to promote transparency and research integrity. Future updates to reporting guidelines should systematically incorporate these elements.
{"title":"Exploring ethical elements in reporting guidelines: results from a research-on-research study.","authors":"Clovis Mariano Faggion, Carla Brigitte Susan Kohl","doi":"10.1186/s41073-025-00180-0","DOIUrl":"10.1186/s41073-025-00180-0","url":null,"abstract":"<p><strong>Background: </strong>Reporting guidelines are key tools for enhancing the transparency and reproducibility of research. To support responsible reporting, such guidelines should also address ethical considerations. However, the extent to which these elements are integrated into reporting checklists remains unclear. This study aimed to evaluate how ethical elements are incorporated in these guidelines.</p><p><strong>Methods: </strong>We identified reporting guidelines indexed on the \"Enhancing the Quality and Transparency of Health Research (EQUATOR) Network\" website. On 30 January 2025, a random sample of 128 reporting guidelines and extensions was drawn from a total of 657. For each, we retrieved the associated development publication and extracted data into a standardised table. The assessed ethical elements included COI disclosure, sponsorship, authorship criteria, data sharing guidance, and protocol development and study registration. Data extraction for the first 13 guidelines was conducted independently and in duplicate. After achieving 100% agreement, the remaining data were extracted by one author, following \"A MeaSurement Tool to Assess Systematic Reviews\" (AMSTAR)-2 recommendations.</p><p><strong>Results: </strong>The dataset comprised 101 original guidelines and 27 extensions of existing guidelines. Half of the included guidelines were published from 2015 onward, with 32.0% published between 2020 and 2024. The median year of publication was 2016. Approximately 90 of the 128 assessed guidelines focused on clinical studies. Over 70% of the guidelines did not include items related to conflicts of interest (COI) or sponsorship. Only 8.6% addressed COI and sponsorship jointly in a single item, while fewer than 9% covered them as two separate items. Notably, only two guidelines (1.6%) provided instructions for using the ICMJE disclosure form to report potential conflicts of interest. Nearly 20% of the guidelines offered guidance on study registration. Fewer than 30% recommended the development of a research protocol, and only 18.8% provided guidance on protocol sharing. Additionally, fewer than 10% of the checklists included guidance on authorship criteria or data sharing.</p><p><strong>Conclusion: </strong>Ethical considerations are insufficiently addressed in current reporting guidelines. The absence of standardised items on COIs, funding, authorship, and data sharing represents a missed opportunity to promote transparency and research integrity. Future updates to reporting guidelines should systematically incorporate these elements.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"20"},"PeriodicalIF":10.7,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12452000/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145115404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attitudes and perceptions of biomedical journal editors in chief towards the use of artificial intelligence chatbots in the scholarly publishing process: a cross-sectional survey.
Pub Date: 2025-09-08 | DOI: 10.1186/s41073-025-00178-8
Jeremy Y Ng, Malvika Krishnamurthy, Gursimran Deol, Wid Al-Zahraa Al-Khafaji, Vetrivel Balaji, Magdalene Abebe, Jyot Adhvaryu, Tejas Karrthik, Pranavee Mohanakanthan, Adharva Vellaparambil, Lex M Bouter, R Brian Haynes, Alfonso Iorio, Cynthia Lokker, Hervé Maisonneuve, Ana Marušić, David Moher
Background: Artificial intelligence chatbots (AICs) are designed to mimic human conversations through text or speech, offering both opportunities and challenges in scholarly publishing. While journal policies on AICs are becoming more defined, there is still limited understanding of how editors in chief (EiCs) of biomedical journals view these tools. This survey examined EiCs' attitudes and perceptions towards the use of AICs in the scholarly publishing process, highlighting positive aspects, such as language and grammar support, and concerns regarding setup time, training requirements, and ethical considerations.
Methods: A cross-sectional survey was conducted, targeting EiCs of biomedical journals across multiple publishers. Of 3725 journals screened, 3381 eligible emails were identified through web scraping and manual verification. Survey invitations were sent to all identified EiCs. The survey remained open for five weeks, with three follow-up email reminders.
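The kind of pattern matching such web scraping can rely on is sketched below; the HTML snippet and regular expression are illustrative assumptions, and the study additionally used manual verification.

```python
import re

# Simple email pattern; real scraping pipelines typically combine this with
# manual checks, as the study describes.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

page_html = """
<p>Editor-in-Chief: Dr. Jane Doe (jane.doe@examplejournal.org)</p>
<p>Editorial office: office@examplepublisher.com</p>
"""

emails = sorted(set(EMAIL_RE.findall(page_html)))
print(emails)  # ['jane.doe@examplejournal.org', 'office@examplepublisher.com']
```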
Results: The survey had a response rate of 16.5% (510 total responses) and a completion rate of 87.0%. Most respondents were familiar with AICs (66.7%); however, most had not used AICs in their editorial work (83.7%), and many expressed interest in further training (64.4%). EiCs acknowledged benefits such as language and grammar support (70.8%) but expressed mixed attitudes on the role of AICs in accelerating peer review. Commonly perceived concerns included the initial time and resources required for setup (83.7%), training needs (83.9%), and ethical considerations (80.6%).
Conclusions: This study found that EiCs have mixed attitudes toward AICs: some acknowledged their potential to enhance editorial efficiency, particularly in tasks such as language editing, while others expressed concerns about the ethical implications, the time and resources required for implementation, and the need for additional training.
{"title":"Attitudes and perceptions of biomedical journal editors in chief towards the use of artificial intelligence chatbots in the scholarly publishing process: a cross-sectional survey.","authors":"Jeremy Y Ng, Malvika Krishnamurthy, Gursimran Deol, Wid Al-Zahraa Al-Khafaji, Vetrivel Balaji, Magdalene Abebe, Jyot Adhvaryu, Tejas Karrthik, Pranavee Mohanakanthan, Adharva Vellaparambil, Lex M Bouter, R Brian Haynes, Alfonso Iorio, Cynthia Lokker, Hervé Maisonneuve, Ana Marušić, David Moher","doi":"10.1186/s41073-025-00178-8","DOIUrl":"10.1186/s41073-025-00178-8","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence chatbots (AICs) are designed to mimic human conversations through text or speech, offering both opportunities and challenges in scholarly publishing. While journal policies of AICs are becoming more defined, there is still a limited understanding of how Editors in chief (EiCs) of biomedical journals' view these tools. This survey examined EiCs' attitudes and perceptions, highlighting positive aspects, such as language and grammar support, and concerns regarding setup time, training requirements, and ethical considerations towards the use of AICs in the scholarly publishing process.</p><p><strong>Methods: </strong>A cross-sectional survey was conducted, targeting EiCs of biomedical journals across multiple publishers. Of 3725 journals screened, 3381 eligible emails were identified through web scraping and manual verification. Survey invitations were sent to all identified EiCs. The survey remained open for five weeks, with three follow-up email reminders.</p><p><strong>Results: </strong>The survey had a response rate of 16.5% (510 total responses) and a completion rate of 87.0%. Most respondents were familiar with AIs (66.7%), however, most had not utilized AICs in their editorial work (83.7%) and many expressed interest in further training (64.4%). EiCs acknowledged benefits such as language and grammar support (70.8%) but expressed mixed attitudes on AIC roles in accelerating peer review. Perceptions included the initial time and resources required for setup (83.7%), training needs (83.9%), and ethical considerations (80.6%).</p><p><strong>Conclusions: </strong>This study found that EiCs have mixed attitudes toward AICs, with some EICs acknowledging their potential to enhance editorial efficiency, particularly in tasks like language editing, while others expressed concerns about the ethical implications, the time and resources required for implementation, and the need for additional training.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"19"},"PeriodicalIF":10.7,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12416066/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145016838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I have been scammed in my qualitative research.
Pub Date: 2025-08-30 | DOI: 10.1186/s41073-025-00179-7
Carole Bandiera, Kate Lowrie, Donna Thomas, Sabuj Kanti Mistry, Elizabeth Harris, Mark F Harris, Parisa Aslani
We have been scammed in our online qualitative study by fraudulent participants who falsely claimed to be pharmacists or community health workers. These participants were interviewed before we discovered that they were not who they claimed to be.

In this commentary, we describe key indicators of potential imposters, such as a large number of emails received in a short period of time, emails with similar content and address structure, participants with a keen interest in the reimbursement, cameras switched off during the interview, and inconsistencies in participants' responses.

We provide recommendations on how to prevent future fraud, such as promoting the study to a closed network or groups on social media, encouraging participants to provide sources that verify their identity, ensuring that the camera is switched on during the entire interview, discouraging the use of artificial intelligence (AI) to answer questions or generate content unless AI-based language tools are used to facilitate translation, understanding, or communication, providing reimbursements with local rather than international vouchers, and, where participants are healthcare professionals, checking their registration number prior to the interview.

It is important for Human Research Ethics Committee members to consider genuine measures to assess participant authenticity and reduce the risk of fraudulent participation. Additionally, universities and research institutions should develop guidance to educate researchers in this area. Published protocols, guidelines, and checklists for online qualitative studies, as well as participant information statements and consent forms, should be adapted to prevent and address potential fraud. For example, the COREQ checklist should be updated so that researchers report the actions undertaken to prevent and detect fraud, and their experiences and actions if fraud occurred.

Fraud in online research undermines its integrity and quality. Urgent action is needed to raise awareness of this issue within the research community and to prevent further scams.
{"title":"I have been scammed in my qualitative research.","authors":"Carole Bandiera, Kate Lowrie, Donna Thomas, Sabuj Kanti Mistry, Elizabeth Harris, Mark F Harris, Parisa Aslani","doi":"10.1186/s41073-025-00179-7","DOIUrl":"10.1186/s41073-025-00179-7","url":null,"abstract":"<p><p>We have been scammed in our online qualitative study by some fraudulent participants who falsely claimed to be pharmacists or community health workers. These participants were interviewed before we discovered that they were not who they claimed to be.In this commentary, we describe key indicators of potential imposters, such as the number of emails received in a short period of time, emails with similar content and address structure, participants having a keen interest in the reimbursement, camera switched off during the interview, and inconsistency in the participants' responses.We provide recommendations on how to prevent future fraud, such as promoting the study to a closed network or groups on social media, encouraging participants to provide sources that verify their identity, ensuring that the camera is switched on during the entire interview, discouraging the use of artificial intelligence (AI) to answer questions or generate content, unless when AI-based language tools are used to facilitate translation, understanding or communication, providing reimbursements with local vouchers rather than international ones, and where the participants are healthcare professionals, checking their registration number prior to the interview.It is important for Human Research Ethics Committee members to consider genuine measures to assess participant authenticity and reduce the risk of fraudulent participation. Additionally, universities and research institutions should develop guidance to educate researchers in this area. Published protocols, guidelines and checklists for online qualitative studies, and participant information statements and consent forms should be adapted to prevent and address potential fraud. For example, the COREQ checklist should be updated so that researchers report the actions undertaken to prevent and detect fraud and their experiences and actions if there was fraud.Fraud in online research impacts the integrity and quality of online research. Urgent actions are needed to raise awareness of this issue within the research community and prevent further occurrences of scams.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"18"},"PeriodicalIF":10.7,"publicationDate":"2025-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12398116/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144981940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reporting of measures against bias in nonclinical published research studies: a journal-based comparison.
Pub Date: 2025-08-29 | DOI: 10.1186/s41073-025-00176-w
Sara Steele, Tom Lavrijssen, Thomas Steckler
Background: Historically, systematic reviews of nonclinical published research articles in the life sciences have shown that overall reporting of information on measures against bias is low. Measures such as randomization, blinding, and sample size estimation are mentioned in only a minority of studies. The present study aims to provide an overview of recent reporting standards in a large sample of nonclinical articles, with a focus on statistical information.
Methods: Journals were randomly selected from Journal Citation Reports (Clarivate). Biomedical research articles published in 2020 from 10 journals were analyzed for their reporting standards using a checklist.
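A minimal sketch of how per-journal reporting rates for one checklist item can be tallied is shown below; the journal names and records are invented for illustration and are not the study data.

```python
from collections import defaultdict

# Invented example records: one per article, flagging whether randomization
# was reported.
records = [
    {"journal": "Journal A", "randomization_reported": True},
    {"journal": "Journal A", "randomization_reported": False},
    {"journal": "Journal B", "randomization_reported": False},
    {"journal": "Journal B", "randomization_reported": False},
]

counts = defaultdict(lambda: [0, 0])        # journal -> [reported, total]
for rec in records:
    counts[rec["journal"]][1] += 1
    if rec["randomization_reported"]:
        counts[rec["journal"]][0] += 1

for journal, (reported, total) in counts.items():
    print(f"{journal}: {100 * reported / total:.0f}% reported randomization")
```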
Results: In total, 860 articles were included in the study: 320 describing in vivo methods, 187 describing in vitro methods, and 353 including both in vivo and in vitro methods. The reporting rate of "randomization" ranged from 0% to 63% across journals for in vivo articles and from 0% to 4% for in vitro articles. The reporting rate of "blinded conduct of the experiments" ranged from 11% to 71% across journals for in vivo articles and from 0% to 86% for in vitro articles.
Conclusion: The analysis showed that reporting standards remain low, including with regard to other statistical information. Additionally, our results suggest that reporting in articles on in vivo experiments is better than in articles on in vitro experiments. Furthermore, important differences in reporting standards between journals appear to exist.
{"title":"Reporting of measures against bias in nonclinical published research studies: a journal-based comparison.","authors":"Sara Steele, Tom Lavrijssen, Thomas Steckler","doi":"10.1186/s41073-025-00176-w","DOIUrl":"10.1186/s41073-025-00176-w","url":null,"abstract":"<p><strong>Background: </strong>Historically, systematic review studies of nonclinical published research articles around the life sciences have shown that the overall reporting of information on measures against bias is low. Measures such as randomization, blinding and sample size estimation are mentioned in the minority of the studies. The present study aims to provide an overview of the recent reporting standards in a large sample of nonclinical articles with focus on statistical information.</p><p><strong>Methods: </strong>Journals were randomly selected from Journal Citation Reports (Clarivate). Biomedical research articles published in 2020 from 10 journals were analyzed for their reporting standards using a checklist.</p><p><strong>Results: </strong>In total 860 articles; 320 articles describing in vivo methods, 187 articles describing in vitro methods and 353 articles including both in vivo and in vitro methods, were included in the study. The reporting rate of \"randomization\" ranged from 0%-63% between journals for in vivo articles and 0%-4% for in vitro articles. The reporting rate of \"blinded conduct of the experiments\" ranged from 11%-71% between journals for in vivo articles and 0%-86% for in vitro articles.</p><p><strong>Conclusion: </strong>The analysis showed that the reporting standards remained low, also when other statistical information is concerned. Additionally, our results suggest that the reporting in articles on in vivo experiments is better compared to articles on in vitro experiments. Furthermore, important differences in reporting standards between journals seem to exist.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"17"},"PeriodicalIF":10.7,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12398162/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144981881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How good are medical students and researchers in detecting duplications in digital images from research articles: a cross-sectional survey.
Pub Date: 2025-08-08 | DOI: 10.1186/s41073-025-00172-0
Antonija Mijatović, Marija Franka Žuljević, Luka Ursić, Nensi Bralić, Miro Vuković, Marija Roguljić, Ana Marušić
Background: Inappropriate manipulations of digital images pose significant risks to research integrity. Here we assessed the capability of students and researchers to detect image duplications in biomedical images.
Methods: We conducted a pen-and-paper survey involving medical students who had been exposed to research paper images during their studies, as well as active researchers. We asked them to identify duplications in images of Western blots, cell cultures, and histological sections and evaluated their performance based on the number of correctly and incorrectly detected duplications.
Results: A total of 831 students and 26 researchers completed the survey during the 2023/2024 academic year. Out of 34 duplications of 21 unique image parts, the students correctly identified a median of 10 duplications (interquartile range [IQR] = 8-13) and made 2 mistakes (IQR = 1-4), whereas the researchers identified a median of 11 duplications (IQR = 8-14) and made 1 mistake (IQR = 1-3). There were no significant differences between the two groups in either the number of correctly detected duplications (p = .271, Cliff's δ = 0.126) or the number of mistakes (p = .731, Cliff's δ = 0.039). Both students and researchers identified a higher percentage of duplications in the Western blot images than in the cell or tissue images (p < .005 and Cohen's d = 0.72; p < .005 and Cohen's d = 1.01, respectively). For students, gender was a weak predictor of performance, with female participants finding slightly more duplications (p < .005, Cliff's δ = 0.158) but making more mistakes (p < .005, Cliff's δ = 0.239). The study year had no significant impact on student performance (p = .209; Cliff's δ = 0.085).
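For readers unfamiliar with the Cliff's delta effect size reported above, a minimal sketch of its computation follows; the score vectors are invented and do not reproduce the study data.

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: P(x > y) - P(x < y), estimated over all pairs."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

# Invented example: numbers of correctly detected duplications per participant.
researchers_correct = [11, 8, 14, 12, 10, 13]
students_correct = [10, 8, 13, 9, 11, 12, 7]

print(round(cliffs_delta(researchers_correct, students_correct), 3))
```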
Conclusions: Despite differences in expertise, both students and researchers demonstrated limited proficiency in detecting duplications in digital images. Digital image manipulation may be better detected by automated screening tools, and researchers should have clear guidance on how to prepare digital images in scientific publications.
{"title":"How good are medical students and researchers in detecting duplications in digital images from research articles: a cross-sectional survey.","authors":"Antonija Mijatović, Marija Franka Žuljević, Luka Ursić, Nensi Bralić, Miro Vuković, Marija Roguljić, Ana Marušić","doi":"10.1186/s41073-025-00172-0","DOIUrl":"10.1186/s41073-025-00172-0","url":null,"abstract":"<p><strong>Background: </strong>Inappropriate manipulations of digital images pose significant risks to research integrity. Here we assessed the capability of students and researchers to detect image duplications in biomedical images.</p><p><strong>Methods: </strong>We conducted a pen-and-paper survey involving medical students who had been exposed to research paper images during their studies, as well as active researchers. We asked them to identify duplications in images of Western blots, cell cultures, and histological sections and evaluated their performance based on the number of correctly and incorrectly detected duplications.</p><p><strong>Results: </strong>A total of 831 students and 26 researchers completed the survey during 2023/2024 academic year. Out of 34 duplications of 21 unique image parts, the students correctly identified a median of 10 duplications (interquartile range [IQR] = 8-13), and made 2 mistakes (IQR = 1-4), whereas the researchers identified a median of 11 duplications (IQR = 8-14) and made 1 mistake (IQR = 1-3). There were no significant differences between the two groups in either the number of correctly detected duplications (p = .271, Cliff's δ = 0.126) or the number of mistakes (p = .731, Cliff's δ = 0.039). Both students and researchers identified higer percentage of duplications in the Western blot images than cell or tissue images (p < .005 and Cohen's d = 0.72; p < .005 and Cohen's d = 1.01, respectively). For students, gender was a weak predictor of performance, with female participants finding slightly more duplications (p < .005, Cliff's δ = 0.158), but making more mistakes (p < .005, Cliff's δ = 0.239). The study year had no significant impact on student performance (p = .209; Cliff's δ = 0.085).</p><p><strong>Conclusions: </strong>Despite differences in expertise, both students and researchers demonstrated limited proficiency in detecting duplications in digital images. Digital image manipulation may be better detected by automated screening tools, and researchers should have clear guidance on how to prepare digital images in scientific publications.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"14"},"PeriodicalIF":10.7,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12333226/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144801184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correction: Evaluating psychiatry journals' adherence to informed consent guidelines for case reports.
Pub Date: 2025-07-30 | DOI: 10.1186/s41073-025-00175-x
Ashley J Tsang, John Z Sadler, E Sherwood Brown, Elizabeth Heitman
{"title":"Correction: Evaluating psychiatry journals' adherence to informed consent guidelines for case reports.","authors":"Ashley J Tsang, John Z Sadler, E Sherwood Brown, Elizabeth Heitman","doi":"10.1186/s41073-025-00175-x","DOIUrl":"10.1186/s41073-025-00175-x","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"16"},"PeriodicalIF":10.7,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12309193/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144755332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Systematic review and meta-analysis of quotation inaccuracy in medicine.
Pub Date: 2025-07-23 | DOI: 10.1186/s41073-025-00173-z
Christopher Baethge, Hannah Jergas
Background: Quotations are crucial to science but have often been shown to be inaccurate. Quotation errors, that is, references that do not support the authors' claims, may still be a significant issue in scientific medical writing. This study aimed to examine the quotation error rate and trends over time in the medical literature.
Methods: A systematic search of PubMed, Web of Science, and reference lists for quotation error studies in medicine, without date or language restrictions, identified 46 studies analyzing 32,000 quotations/references. Literature search, data extraction, and risk of bias assessments were performed independently by two raters. Random-effects meta-analyses and meta-regression were used to analyze error rates and trends (protocol pre-registered on OSF).
Results: 16.9% (95% CI: 14.1%-20.0%) of quotations were incorrect, with approximately half classified as major errors (8.0% [95% CI: 6.4%-10.0%]). Heterogeneity was high, and Egger's test for small-study effects remained negative throughout. Meta-regression showed no significant improvement in quotation accuracy over recent years (slope: -0.002 [95% CI: -0.03 to 0.02], p = 0.85). Neither risk of bias nor the number of references was statistically significantly associated with the total error rate, but journal impact factor was: Spearman's ρ = -0.253 (p = 0.043, binomial test, N = 25).
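As a generic illustration of random-effects pooling of proportions (not the authors' pre-registered analysis), the sketch below applies a DerSimonian-Laird estimator on the logit scale to invented study-level counts.

```python
import math

# Invented (errors, total quotations) pairs for illustration only.
studies = [(30, 200), (55, 400), (12, 150), (80, 350)]

ys, vs = [], []
for errors, total in studies:
    p = errors / total
    ys.append(math.log(p / (1 - p)))              # logit-transformed proportion
    vs.append(1 / errors + 1 / (total - errors))  # approximate variance of the logit

w = [1 / v for v in vs]                           # fixed-effect weights
ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, ys))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)     # DerSimonian-Laird between-study variance

w_re = [1 / (v + tau2) for v in vs]               # random-effects weights
pooled_logit = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))

pooled = 1 / (1 + math.exp(-pooled_logit))        # back-transform to a proportion
lo = 1 / (1 + math.exp(-(pooled_logit - 1.96 * se)))
hi = 1 / (1 + math.exp(-(pooled_logit + 1.96 * se)))
print(f"pooled error rate {pooled:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```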
Conclusions: Quotation errors remain a problem in the medical literature, with no improvement over time. Addressing this issue requires concerted efforts to improve scholarly practices and editorial processes.
{"title":"Systematic review and meta-analysis of quotation inaccuracy in medicine.","authors":"Christopher Baethge, Hannah Jergas","doi":"10.1186/s41073-025-00173-z","DOIUrl":"10.1186/s41073-025-00173-z","url":null,"abstract":"<p><strong>Background: </strong>Quotations are crucial to science but have been shown to be often inaccurate. Quotation errors, that is, a reference not supporting the authors' claim, may still be a significant issue in scientific medical writing. This study aimed to examine the quotation error rate and trends over time in the medical literature.</p><p><strong>Methods: </strong>A systematic search of PubMed, Web of Science, and reference lists for quotation error studies in medicine and without date or language restrictions identified 46 studies analyzing 32,000 quotations/references. Literature search, data extraction, and risk of bias assessments were performed independently by two raters. Random-effects meta-analyses and meta-regression were used to analyze error rates and trends (protocol pre-registered on OSF).</p><p><strong>Results: </strong>16.9% (95% CI: 14.1%-20.0%) of quotations were incorrect, with approximately half classified as major errors (8.0% [95% CI: 6.4%-10.0%]). Heterogeneity was high, and Egger's test for small study effects remained negative throughout. Meta-regression showed no significant improvement in quotation accuracy over recent years (slope: -0.002 [95% CI: -0.03 to 0.02], p = 0.85). Neither risk of bias, nor the number of references were statistically significantly associated with total error rate, but journal impact factor was: Spearman's ρ = -0.253 (p = 0.043, binomial test, N = 25).</p><p><strong>Conclusions: </strong>Quotation errors remain a problem in the medical literature, with no improvement over time. Addressing this issue requires concerted efforts to improve scholarly practices and editorial processes.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"13"},"PeriodicalIF":10.7,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12285159/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144692730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating psychiatry journals' adherence to informed consent guidelines for case reports.
Pub Date: 2025-07-18 | DOI: 10.1186/s41073-025-00171-1
Ashley J Tsang, John Z Sadler, E Sherwood Brown, Elizabeth Heitman
Background: Case reports are valuable tools that illustrate and analyze practical scenarios, novel problems, and the effectiveness of interventions. In psychiatry they often explore unique and potentially stigmatizing aspects of mental health, underscoring the importance of confidentiality and informed consent. However, journals' guidance on consent and confidentiality for case reports varies. In 2013, an international expert group developed the CAse REports (CARE) Guidelines for best practices in case reports, which include guidelines for informed consent and de-identification. In 2016, the Committee on Publication Ethics (COPE) issued ethical standards for publishing case reports, calling for written informed consent from featured patients.
Methods: Using a cross-sectional approach, we assessed the instructions for authors of 253 indexed psychiatry journals, of which 129 had published English-language case reports in the prior five years. Our research identified and evaluated journals' use of COPE and CARE guidelines on informed consent and de-identification in case reports.
Results: Among these 129 journals, 84 (65%) referred to COPE guidelines, and 59 (46%) referenced CARE guidelines. Furthermore, 46 (36%) required informed consent without de-identification, 7 (5%) required only de-identification, and 21 (16%) required both, specifying consent for identifying information. Notably, 40 (31%) lacked informed consent instructions. Of the 82 journals that required informed consent, 69 (85%) required documentation of consent.
Conclusion: A decade after the publication of expert guidance, psychiatry journals remain inconsistent in their adherence to ethical guidelines for informed consent in case reports. More attention to clear instructions from journals on informed consent, a notable topic across different fields, would provide an important educational message about both publication ethics and fundamental respect for patients' confidentiality.
{"title":"Evaluating psychiatry journals' adherence to informed consent guidelines for case reports.","authors":"Ashley J Tsang, John Z Sadler, E Sherwood Brown, Elizabeth Heitman","doi":"10.1186/s41073-025-00171-1","DOIUrl":"10.1186/s41073-025-00171-1","url":null,"abstract":"<p><strong>Background: </strong>Case reports are valuable tools that illustrate and analyze practical scenarios, novel problems, and the effectiveness of interventions. In psychiatry they often explore unique and potentially stigmatizing aspects of mental health, underscoring the importance of confidentiality and informed consent. However, journals' guidance on consent and confidentiality for case reports varies. In 2013, an international expert group developed the CAse REports (CARE) Guidelines for best practices in case reports, which include guidelines for informed consent and de-identification. In 2016, the Committee on Publication Ethics (COPE) issued ethical standards for publishing case reports, calling for written informed consent from featured patients.</p><p><strong>Methods: </strong>Using a cross-sectional approach, we assessed the instructions for authors of 253 indexed psychiatry journals, of which 129 had published English-language case reports in the prior five years. Our research identified and evaluated journals' use of COPE and CARE guidelines on informed consent and de-identification in case reports.</p><p><strong>Results: </strong>Among these 129 journals, 84 (65%) referred to COPE guidelines, and 59 (46%) referenced CARE guidelines. Furthermore, 46 (36%) required informed consent without de-identification, 7 (5%) required only de-identification, and 21 (16%) required both, specifying consent for identifying information. Notably, 40 (31%) lacked informed consent instructions. Of the 82 journals that required informed consent, 69 (85%) required documentation of consent.</p><p><strong>Conclusion: </strong>A decade after the publication of expert guidance, psychiatry journals remain inconsistent in their adherence to ethical guidelines for informed consent in case reports. More attention to clear instructions from journals on informed consent-a notable topic across different fields-would provide an important educational message about both publication ethics and fundamental respect for patients' confidentiality.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"15"},"PeriodicalIF":10.7,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12273215/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144661239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Misidentified cell lines: failures of peer review, varying journal responses to misidentification inquiries, and strategies for safeguarding biomedical research.
Pub Date: 2025-07-11 | DOI: 10.1186/s41073-025-00170-2
Ralf Weiskirchen
Background: Continuous cell lines are indispensable in basic and preclinical research. However, cross-contamination, misidentification, and over-passaging affect the validity and reproducibility of biomedical results. Although there have been efforts to highlight this problem for decades, definitive prevention remains a challenge. The International Cell Line Authentication Committee (ICLAC) registry (version 13, 26 April 2024) lists nearly 600 misidentified or contaminated cell lines. The inappropriate use of such cells has led to countless publications containing invalid data, creating a ripple effect of wasted resources, misleading follow-up studies, and compromised evidence-based conclusions.
Methods: The ICLAC registry was consulted to identify commonly misidentified cell lines. A literature search of PubMed was performed to identify recent papers using these lines in liver-related experiments. Four publications with questionable conclusions were highlighted, and the editors of the respective journals were informed with short comments or letters to the editor.
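A minimal sketch of the kind of PubMed query that can support such a search, using the NCBI E-utilities, is shown below; the query term is a placeholder and does not reproduce the authors' search strategy.

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Placeholder: substitute a line listed in the ICLAC register of misidentified cell lines.
cell_line = "EXAMPLE-CELL-LINE"

params = {
    "db": "pubmed",
    "term": f'"{cell_line}"[Title/Abstract] AND liver',
    "retmax": 20,
    "retmode": "json",
}
resp = requests.get(ESEARCH, params=params, timeout=30)
resp.raise_for_status()

result = resp.json()["esearchresult"]
print(result["count"])    # number of matching records
print(result["idlist"])   # PubMed IDs to screen manually
```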
Results: Reactions from journal editors varied widely. In two cases, the editors quickly published the comments, resulting in transparent corrections. In the third example, the editor conducted an internal investigation without immediately publishing a correction. In the fourth example, the journal declined to address concerns publicly.
Conclusions: Misidentified cell lines pose an ongoing threat to scientific rigor. Despite some responsible editorial interventions, the lack of universal standards fosters the dissemination of erroneous data. However, authors, reviewers, and editors have some important tools to prevent publications with misidentified cells by consulting available resources (e.g., ICLAC, Cellosaurus, Research Resource Identification Portal, SciScore™), and adopting consistent procedures to maintain research integrity.
{"title":"Misidentified cell lines: failures of peer review, varying journal responses to misidentification inquiries, and strategies for safeguarding biomedical research.","authors":"Ralf Weiskirchen","doi":"10.1186/s41073-025-00170-2","DOIUrl":"10.1186/s41073-025-00170-2","url":null,"abstract":"<p><strong>Background: </strong>Continuous cell lines are indispensable in basic and preclinical research. However, cross-contamination, misidentification, and over-passaging affect the validity and reproducibility of biomedical results. Although there have been efforts to highlight this problem for decades, definitive prevention remains a challenge. The International Cell Line Authentication Committee (ICLAC) registry (version 13, 26 April 2024) lists nearly 600 misidentified or contaminated cell lines. The inappropriate use of such cells has led to countless publications containing invalid data, creating a ripple effect of wasted resources, misleading follow-up studies, and compromised evidence-based conclusions.</p><p><strong>Methods: </strong>The ICLAC registry was consulted to identify commonly misidentified cell lines. A literature search of PubMed was performed to identify recent papers using these lines in liver-related experiments. Four publications with questionable conclusions were highlighted, and the editors of the respective journals were informed with short comments or letters to the editor.</p><p><strong>Results: </strong>Reactions from journal editors varied widely. In two cases, the editors quickly published the comments, resulting in transparent corrections. In the third example, the editor conducted an internal investigation without immediately publishing a correction. In the fourth example, the journal declined to address concerns publicly.</p><p><strong>Conclusions: </strong>Misidentified cell lines pose an ongoing threat to scientific rigor. Despite some responsible editorial interventions, the lack of universal standards fosters the dissemination of erroneous data. However, authors, reviewers, and editors have some important tools to prevent publications with misidentified cells by consulting available resources (e.g., ICLAC, Cellosaurus, Research Resource Identification Portal, SciScore™), and adopting consistent procedures to maintain research integrity.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"12"},"PeriodicalIF":7.2,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12247328/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144610565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Institutional animal care and use committees and the challenges of evaluating animal research proposals.
Pub Date: 2025-07-04 | DOI: 10.1186/s41073-025-00169-9
John J Pippin, Jarrod Bailey, Mark Kennedy, Deborah Dubow Press, Janine McCarthy, Ron Baron, Stephen Farghali, Elizabeth Baker, Neal D Barnard
Background: In the U.S. and many other countries, animal use in research, testing, and education is under the purview of Institutional Animal Care and Use Committees or similar bodies. Their responsibility for reviewing proposed experiments, particularly with regard to adherence to legal and ethical mandates, can be a challenging task.
Objective: To understand factors that may limit the effectiveness of Institutional Animal Care and Use Committees and identify possible solutions.
Methods: This editorial review summarizes scientific literature describing the challenges faced by U.S. Institutional Animal Care and Use Committees and those who rely on them and describes actions that may improve their functioning.
Results: Apart from what may be a sizable workload and the need to satisfy applicable regulations, committees have fundamental structural challenges and limitations. Under U.S. law, there is no requirement that committee members have expertise in the research areas under review or in methods that could replace animal use, nor could expertise in such vast technical areas be expected, in contrast with the review process of many scientific journals in which experts in the conditions being studied critique the choice of subjects and methods used. Although investigators are expected to consider alternatives to procedures that may cause more than momentary or slight pain or distress, they are not required to use them. While investigators must assure committee members that studies do not duplicate other research, committee members are not required to verify this. Consideration of alternatives to painful procedures is not required at all for experiments on animals not covered by the Animal Welfare Act. The majority of U.S. research institutions now allow research proposals to be approved by a single committee member, using a system called Designated Member Review, without full committee consideration. In other countries, requirements differ considerably. In the European Union, for example, investigators must complete a harm-benefit analysis and must use alternatives, not simply consider them.
Conclusions: The review process may be improved by requiring searches for nonanimal methods regardless of species, favoring alternatives based on human biology, improving the education of committee members and investigators, using reviewers with subject matter expertise, and minimizing conflicts of interest. Because of the limitations of the review process, funding institutions and scientific journals should not use Institutional Animal Care and Use Committee approval of submissions as evidence of adherence to ethical guidelines beyond those legally required.
{"title":"Institutional animal care and use committees and the challenges of evaluating animal research proposals.","authors":"John J Pippin, Jarrod Bailey, Mark Kennedy, Deborah Dubow Press, Janine McCarthy, Ron Baron, Stephen Farghali, Elizabeth Baker, Neal D Barnard","doi":"10.1186/s41073-025-00169-9","DOIUrl":"10.1186/s41073-025-00169-9","url":null,"abstract":"<p><strong>Background: </strong>In the U.S. and many other countries, animal use in research, testing, and education is under the purview of Institutional Animal Care and Use Committees or similar bodies. Their responsibility for reviewing proposed experiments, particularly with regard to adherence to legal and ethical mandates, can be a challenging task.</p><p><strong>Objective: </strong>To understand factors that may limit the effectiveness of Institutional Animal Care and Use Committees and identify possible solutions.</p><p><strong>Methods: </strong>This editorial review summarizes scientific literature describing the challenges faced by U.S. Institutional Animal Care and Use Committees and those who rely on them and describes actions that may improve their functioning.</p><p><strong>Results: </strong>Apart from what may be a sizable workload and the need to satisfy applicable regulations, committees have fundamental structural challenges and limitations. Under U.S. law, there is no requirement that committee members have expertise in the research areas under review or in methods that could replace animal use, nor could expertise in such vast technical areas be expected, in contrast with the review process of many scientific journals in which experts in the conditions being studied critique the choice of subjects and methods used. Although investigators are expected to consider alternatives to procedures that may cause more than momentary or slight pain or distress, they are not required to use them. While investigators must assure committee members that studies do not duplicate other research, committee members are not required to verify this. Consideration of alternatives to painful procedures is not required at all for experiments on animals not covered by the Animal Welfare Act. The majority of U.S. research institutions now allow research proposals to be approved by a single committee member, using a system called Designated Member Review, without full committee consideration. In other countries, requirements differ considerably. In the European Union, for example, investigators must complete a harm-benefit analysis and must use alternatives, not simply consider them.</p><p><strong>Conclusions: </strong>The review process may be improved by requiring searches for nonanimal methods regardless of species, favoring alternatives based on human biology, improving the education of committee members and investigators, using reviewers with subject matter expertise, and minimizing conflicts of interest. 
Because of the limitations of the review process, funding institutions and scientific journals should not use Institutional Animal Care and Use Committee approval of submissions as evidence of adherence to ethical guidelines beyond those legally required.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"11"},"PeriodicalIF":7.2,"publicationDate":"2025-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12231287/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144562314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}