Pub Date: 2025-11-21 | DOI: 10.1186/s41073-025-00181-z
Clovis Mariano Faggion
Background and aim: The International Committee of Medical Journal Editors (ICMJE) defines a potential conflict of interest (COI) as a situation where professional judgment could be influenced by secondary interests. Competing interests can introduce bias into the peer-review process, making it essential for all participants to declare any potential COIs. While authors are currently required to disclose their COIs, editors and editorial board members are not held to the same standard. This study aimed to evaluate the extent to which editors and editorial board members of ethics journals report their potential competing interests.
Methods: From October 23 to November 1, 2024, 82 ethics journals selected based on their impact factors were assessed, focusing on the disclosure of potential COIs by editors and editorial board members. Journal websites were examined to determine how editors and board members disclose potential COIs. Additionally, publisher websites were assessed for policies guiding these individuals in reporting COIs during peer review.
Results: Only 2% of the journals disclosed potential COIs for their editors, and 13% provided biographical information about editorial members. None of the journals employed a structured reporting approach, such as the ICMJE disclosure form, despite most claiming adherence to ICMJE and COPE guidelines. There was considerable variability in how journals and publishers guided their editors and board members in reporting their own COIs.
Conclusion: The findings indicate that disclosures of potential COIs by editors and editorial board members in leading ethics journals are often inconsistent and insufficient. Increasing transparency in this area could lead to a fairer and more trustworthy peer-review process.
{"title":"The disclosure of potential conflicts of interest among editors and members of editorial boards in leading ethics journals.","authors":"Clovis Mariano Faggion","doi":"10.1186/s41073-025-00181-z","DOIUrl":"10.1186/s41073-025-00181-z","url":null,"abstract":"<p><strong>Background and aim: </strong>The International Committee of Medical Journal Editors (ICMJE) defines a potential conflict of interest (COI) as a situation where professional judgment could be influenced by secondary interests. Competing interests can introduce bias into the peer-review process, making it essential for all participants to declare any potential COIs. While authors are currently required to disclose their COIs, editors and editorial board members are not held to the same standard. This study aimed to evaluate the extent to which editors and editorial board members of ethics journals report their potential competing interests.</p><p><strong>Methods: </strong>From October 23 to November 1, 2024, 82 ethics journals selected based on their impact factors were assessed, focusing on the disclosure of potential COIs by editors and editorial board members. Journal websites were examined to determine how editors and board members disclose potential COIs. Additionally, publisher websites were assessed for policies guiding these individuals in reporting COIs during peer review.</p><p><strong>Results: </strong>Only 2% of the journals disclosed potential COIs for their editors, and 13% provided biographical information about editorial members. None of the journals employed a structured reporting approach, such as the ICMJE disclosure form, despite most claiming adherence to ICMJE and COPE guidelines. 
There was considerable variability in how journals and publishers guided their editors and board members in reporting their own COIs.</p><p><strong>Conclusion: </strong>The findings indicate that disclosures of potential COIs by editors and editorial board members in leading ethics journals are often inconsistent and insufficient. Increasing transparency in this area could lead to a fairer and more trustworthy peer-review process.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"25"},"PeriodicalIF":10.7,"publicationDate":"2025-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12636210/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145566699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-17 | DOI: 10.1186/s41073-025-00183-x
Silke Kniffert, Ivan Buljan, Flavio Azevedo, Peter Babinčák, Lucija Batinović, Thomas Rhys Evans, Sara Garofalo, Christopher Graham, Lucianne Groenink, Malika Ihle, Miloslav Klugar, Lucia Kočišová, Michal Kohút, Nikolaos Kostomitsopoulos, Seán Lacey, Anita Lunić, Ana Marušić, Thomas Nordström, Charlotte R Pennington, Daniel Pizzolato, Ulf Toelch, Marta Topor, Miro Vuković, Michiel R de Boer
Background: Research methodology education aims to equip students with the foundational knowledge of robust scientific practices, emphasizing deep understanding of scientific inquiry, integrity, and critical thinking in research practice. A literature review reveals that the observed diversity in research methods course design and instruction stems from a lack of consensus about the essential foundations required to critically engage with, design, and execute research in education. This is further compounded by limited pedagogical innovation. However, no study has yet investigated how research methodology is taught and perceived across European universities. The objective of this study is to examine practices and attitudes regarding teaching research methodology in different European countries, across different disciplines and different training stages, to identify commonalities and discrepancies.
Methods: A cross-sectional survey was designed based on the Structure of Observed Learning Outcome (SOLO) taxonomy and further developed in several rounds of expert input and feedback, ensuring comprehensive inclusion of diverse teaching formats and assessment types. The survey was distributed to research methodology and non-research methodology higher education teachers across Europe through stratified and snowball sampling methods.
Results: The survey was completed by 559 respondents across 24 countries and seven disciplinary categories. The findings identified a predominant reliance on traditional passive teaching formats, such as face-to-face or online lectures. Active methods such as flipped classrooms (8.4% Bachelor, 4.8% Master, 2.3% PhD) and protocol writing (8.2% Bachelor, 6.6% Master, 3.9% PhD) were less frequently used. Written exams dominated assessment strategies at all levels. Across our stratification levels, all topics were rated very important, with hypothesis formulation, research integrity, and study design as the most necessary topics, while pre-registration, peer review, and data management plans were prioritized slightly less.
Conclusions: These findings reveal relative homogeneity in research methodology teaching across academic levels and disciplines in Europe. The persistence of passive teaching formats and the limited adoption of active methodologies reflect an untapped opportunity to improve the effectiveness of research methodology education in fostering critical thinking and ethical practices. Higher education institutions need to reevaluate research methodology curricula to better align with contemporary research demands.
{"title":"Research methodology education in Europe: a multi-country, cross-disciplinary survey of current practices and perspectives.","authors":"Silke Kniffert, Ivan Buljan, Flavio Azevedo, Peter Babinčák, Lucija Batinović, Thomas Rhys Evans, Sara Garofalo, Christopher Graham, Lucianne Groenink, Malika Ihle, Miloslav Klugar, Lucia Kočišová, Michal Kohút, Nikolaos Kostomitsopoulos, Seán Lacey, Anita Lunić, Ana Marušić, Thomas Nordström, Charlotte R Pennington, Daniel Pizzolato, Ulf Toelch, Marta Topor, Miro Vuković, Michiel R de Boer","doi":"10.1186/s41073-025-00183-x","DOIUrl":"10.1186/s41073-025-00183-x","url":null,"abstract":"<p><strong>Background: </strong>Research methodology education aims to equip students with the foundational knowledge of robust scientific practices, emphasizing deep understanding of scientific inquiry, integrity, and critical thinking in research practice. A literature review reveals that the observed diversity in research methods course design and instruction stems from a lack of consensus about the essential foundations required to critically engage with, design, and execute research in education. This is further compounded by a limited pedagogical innovation. However, no study has yet investigated how research methodology is taught and perceived across European universities. The objective of this study is to examine practices and attitudes regarding teaching research methodology in different European countries, across different disciplines and different training stages to identify commonalities and discrepancies.</p><p><strong>Methods: </strong>A cross-sectional survey was designed based on the Structure of Observed Learning Outcome (SOLO) taxonomy and further developed in several rounds of expert input and feedback, ensuring comprehensive inclusion of diverse teaching formats and assessment types. 
The survey was distributed to research methodology and non-research methodology higher education teachers across Europe through stratified and snowball sampling methods.</p><p><strong>Results: </strong>The survey was completed by 559 respondents across 24 countries and seven disciplinary categories. The findings identified a predominant reliance on traditional passive teaching formats, such as face-to-face or online lectures. Active methods such as flipped classroom (8.4% Bachelor, 4.8% Master, 2.3% PhD) and protocol writing (8.2% Bachelor, 6.6% Master, 3.9% PhD) were less frequently used. Written exams dominated assessment strategies at all levels. Across our stratification levels, all topics were rated very important, with hypothesis formulation, research integrity, and study design as the most necessary topics, while pre-registration, peer review, and data management plan were prioritized slightly less.</p><p><strong>Conclusions: </strong>These findings reveal relative homogeneity in research methodology teaching across academic levels and disciplines in Europe. The persistence of passive teaching formats and the limited adoption of active methodologies reflects an untapped opportunity to improve the effectiveness of research methodology education in fostering critical thinking and ethical practices. 
Higher education institutions need to reevaluate research methodology curricula to better align with contemporary research demands.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"24"},"PeriodicalIF":10.7,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12621402/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145535098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-27 | DOI: 10.1186/s41073-025-00182-y
André L Teixeira
Background: Gender and geographical disparities have been widely reported in the peer-review process of biomedical journals. Artificial Intelligence (AI) is increasingly transforming the publishing system; however, its potential to identify suitable reviewers, and whether it might reduce, replicate or reinforce existing biases in peer review has never been comprehensively investigated. This study sought to determine the usefulness of AI in identifying expert scientists in medicine taking into consideration gender and geographical diversity, equity and inclusion (DEI).
Methods: The title and abstract of 50 research articles published in high-impact biomedical journals between November 2023 and September 2024 were fed into a large language model (GPT-4o), which was prompted to identify 20 distinguished scientists in each study's field. Two trials were performed in random order, with and without a gender and geographical DEI prompt. Scientists were classified by gender, geographical location, and country-of-affiliation income level. In addition, the number of peer-reviewed publications, Google Scholar-derived total citations, and h-index were computed.
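The two-arm prompting design described above can be sketched as follows. This is an illustration only, not the author's code: the prompt wording, function name, and the omission of the actual GPT-4o API call are all assumptions.

```python
# Illustrative sketch of the study's two prompt variants (wording assumed).
# The actual GPT-4o invocation is omitted; only prompt construction is shown.

def build_prompt(title: str, abstract: str, with_dei: bool) -> str:
    """Compose a reviewer-identification prompt, optionally adding a DEI clause."""
    prompt = (
        "Based on the following title and abstract, identify 20 distinguished "
        f"scientists in this study's field.\n\nTitle: {title}\n\nAbstract: {abstract}"
    )
    if with_dei:
        # The DEI arm appends an explicit instruction; exact phrasing is hypothetical.
        prompt += (
            "\n\nEnsure the list is gender-balanced and geographically diverse, "
            "including scientists from low- and middle-income countries "
            "(diversity, equity and inclusion)."
        )
    return prompt

baseline = build_prompt("Example title", "Example abstract", with_dei=False)
dei = build_prompt("Example title", "Example abstract", with_dei=True)
```

The design choice worth noting is that the two arms differ by a single appended instruction, so any difference in the returned reviewer lists is attributable to the DEI clause alone.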
Results: Without a DEI prompt, GPT-4o primarily identified male scientists (68%) and scientists affiliated with high-income countries (95.3%). Conversely, when DEI was explicitly prompted, GPT-4o generated a gender-balanced (51% female) and geographically diverse list of scientists. Specifically, the proportion of scientists from high-income countries decreased to 42.3%, while representation from upper-middle-income (3.2% to 26.2%), lower-middle-income (1.2% to 26.1%), and low-income (0.2% to 5.4%) countries significantly increased. The number of publications (without vs. with DEI: 284 ± 237 vs. 281 ± 245, P = 0.77), citations (48,445 ± 60,270 vs. 53,792 ± 71,903, P = 0.13), and h-index (79 ± 43 vs. 76 ± 43, P = 0.15) did not differ between groups.
Conclusions: When not prompted to consider DEI, GPT-4o successfully identified expert scientists, but primarily males and those from high-income countries. When DEI was explicitly prompted, however, GPT-4o generated a gender-balanced and geographically diverse list of scientists. Academic productivity was considerably high and comparable between groups, suggesting that GPT-4o identified skilled scientists who could reasonably serve as reviewers for scientific journals. These findings provide evidence that AI can be an ally in combating gender and geographical gaps in peer review, though DEI must be explicitly prompted. Conversely, AI could perpetuate existing biases if not carefully managed.
{"title":"AI in peer review: can artificial intelligence be an ally in reducing gender and geographical gaps in peer review? A randomized trial.","authors":"André L Teixeira","doi":"10.1186/s41073-025-00182-y","DOIUrl":"10.1186/s41073-025-00182-y","url":null,"abstract":"<p><strong>Background: </strong>Gender and geographical disparities have been widely reported in the peer-review process of biomedical journals. Artificial Intelligence (AI) is increasingly transforming the publishing system; however, its potential to identify suitable reviewers, and whether it might reduce, replicate or reinforce existing biases in peer review has never been comprehensively investigated. This study sought to determine the usefulness of AI in identifying expert scientists in medicine taking into consideration gender and geographical diversity, equity and inclusion (DEI).</p><p><strong>Methods: </strong>The title and abstract of 50 research articles published in high-impact biomedical journals between November 2023 and September 2024 were fed into a large language model software (GPT-4o), which was prompted to identify 20 distinguished scientists in the study's field. Two trials were randomly performed with and without a gender and geographical DEI prompt. Scientists were classified based on gender, geographical location, and country of affiliation income level. Furthermore, the number of peer-reviewed publications, Google Scholar-derived total citations and h-index were computed.</p><p><strong>Results: </strong>Without a DEI prompt, GPT-4o primarily identified male scientists (68%) and those affiliated to high-income countries (95.3%). Conversely, when DEI was explicitly prompted, GPT-4o generated a gender-balanced (51% females) and geographically diverse list of scientists. 
Specifically, the proportion of scientists from high-income countries decreased to 42.3%, while representation from upper-middle (3.2% to 26.2%), lower-middle (1.2% to 26.1%), and low-income (0.2% to 5.4%) countries significantly increased. The number of publications (without vs. with DEI: 284 ± 237 vs. 281 ± 245, P = 0.77), citations (48,445 ± 60,270 vs. 53,792 ± 71,903, P = 0.13), and h-index (79 ± 43 vs. 76 ± 43, P = 0.15) did not differ between groups.</p><p><strong>Conclusions: </strong>When not prompted to consider DEI, GPT-4o successfully identified expert scientists, but primarily males and those from high-income countries. However, when DEI was explicitly prompted, GPT-4o generated a gender-balanced and geographically diverse list of scientists. The academic productivity was considerably high and comparable between groups, suggesting that GPT-4o identified potentially skilled scientists who could reasonably serve as reviewers for scientific journals. These findings provide evidence that AI can be an ally in combating gender and geographical gaps in peer review, though DEI should be explicitly prompted. Conversely, AI could perpetuate existing biases if not carefully managed.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"23"},"PeriodicalIF":10.7,"publicationDate":"2025-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12557967/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145373412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | DOI: 10.1186/s41073-025-00177-9
Noa Mascato Fontaíña, Cristina Candal-Pedreira, Guadalupe García, Joseph S Ross, Alberto Ruano-Ravina, Lucía Martin-Gisbert
Objectives: To characterize journals that published, and later retracted, articles originating from paper mills, and to examine associations between paper mill retraction frequency and journal characteristics.
Methods: The Retraction Watch database was used to identify papers retracted between January 2020 and December 2022 for originating from paper mills, along with the journals that retracted them. Data on the total number of articles and on journal characteristics were obtained from Web of Science and Journal Citation Reports. Journals were classified by the number of retracted paper mill papers (1, 2-9, ≥ 10 retractions). Logistic regressions were conducted to explore associations between retraction frequency and journal characteristics.
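The three-band classification used above is a simple thresholding step; a minimal sketch (assumed, not the authors' code — the journal names are placeholders) might look like:

```python
# Group journals by their number of paper mill retractions into the
# study's bands: 1, 2-9, and >=10 (with "0" as a catch-all for none).

def retraction_band(n_retractions: int) -> str:
    if n_retractions >= 10:
        return ">=10"
    if n_retractions >= 2:
        return "2-9"
    if n_retractions == 1:
        return "1"
    return "0"

# Hypothetical counts for illustration only.
counts = {"Journal A": 1, "Journal B": 7, "Journal C": 134}
bands = {name: retraction_band(n) for name, n in counts.items()}
```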
Results: One hundred forty-two journals were identified that retracted 2,051 articles from paper mills. Among these, 71 (50%) journals had 1 retraction, 36 (25.4%) had 2-9 retractions, and 35 (24.6%) had ≥ 10 retractions; 4 (2.8%) journals had > 100 retractions. These journals, regardless of paper mill retraction number, were mainly in the second (35.2%) and third (29.6%) quartiles by impact factor. Medicine and health emerged as the predominant subject area, comprising 61.2% of all indexed journal categories. Comparing journals with one retraction to those with ten or more, the proportion of open access articles (72.6% vs. 19.2%) and median editorial times (86 vs. 116 days) differed across groups, although these differences were not statistically significant. An inverse correlation was observed between the proportion of paper mill papers and original articles (Spearman's Rho = -0.1891, 95%CI -0.370 to -0.008). Logistic regressions found no significant association between paper mill retraction number and other variables.
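The inverse correlation reported above uses Spearman's rank correlation. As a hedged illustration of the statistic (computed from scratch on toy data, not the study's journal-level proportions):

```python
# Spearman's rho: Pearson correlation computed on the average ranks of
# each variable (ties receive the mean of the ranks they span).

def _ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# A strictly decreasing relationship yields rho = -1.
rho = spearman_rho([0.1, 0.2, 0.3, 0.4], [40, 30, 20, 10])
```

In practice one would use `scipy.stats.spearmanr`, which also returns a p-value and the confidence machinery behind the interval reported in the abstract.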
Conclusion: This study suggests that paper mill retractions are concentrated in a small number of journals with common characteristics: high open access rates, intermediate impact factor quartiles, a high volume of citable items, and classification in medicine and health categories. Short editorial times may indicate a higher presence of paper mill publications, but more research is needed to examine this factor in depth, as well as the possible influence of acceptance rates.
{"title":"Identifying common patterns in journals that retracted papers from paper mills: a cross-sectional study.","authors":"Noa Mascato Fontaíña, Cristina Candal-Pedreira, Guadalupe García, Joseph S Ross, Alberto Ruano-Ravina, Lucía Martin-Gisbert","doi":"10.1186/s41073-025-00177-9","DOIUrl":"10.1186/s41073-025-00177-9","url":null,"abstract":"<p><strong>Objectives: </strong>To characterize journals that published and retracted articles retracted for having originated from paper mills and examine associations between paper mill retraction frequency and journal characteristics.</p><p><strong>Methods: </strong>Retraction Watch database was used to identify papers retracted due to originating from paper mills and journals, between January 2020 and December 2022. Data on the total number of articles and journal characteristics were obtained from Web of Science and Journal Citation Reports. Journals were classified based on the frequency of retracted paper mill papers (1, 2-9, ≥ 10 retractions). Logistic regressions were conducted to explore associations between retraction frequency and journal characteristics.</p><p><strong>Results: </strong>One hundred forty-two journals were identified that retracted 2,051 articles from paper mills. Among these, 71 (50%) journals had 1 retraction, 36 (25.4%) had 2-9 retractions, and 35 (24.6%) had ≥ 10 retractions; 4 (2.8%) journals had > 100 retractions. These journals, regardless of paper mill retraction number, were mainly in the second (35.2%) and third (29.6%) quartiles by impact factor. Medicine and health emerged as the predominant subject area, comprising 61.2% of all indexed journal categories. Comparing journals with one retraction to those with ten or more, the proportion of open access articles (72.6% vs. 19.2%) and median editorial times (86 vs. 116 days) differed across groups, although these differences were not statistically significant. 
An inverse correlation was observed between the proportion of paper mill papers and original articles (Spearman's Rho = -0.1891, 95%CI -0.370 to -0.008). Logistic regressions found no significant association between paper mill retraction number and other variables.</p><p><strong>Conclusion: </strong>This study suggests that paper mill retractions are concentrated in a small number of journals with common characteristics: high open access rates, intermediate impact factor quartiles, a high volume of citable items, and classification in medicine and health categories. Short editorial times may indicate a higher presence of paper mill publications, but more research is needed to examine this factor in depth, as well as the possible influence of acceptance rates.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"21"},"PeriodicalIF":10.7,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12487316/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145202329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-22 | DOI: 10.1186/s41073-025-00180-0
Clovis Mariano Faggion, Carla Brigitte Susan Kohl
Background: Reporting guidelines are key tools for enhancing the transparency and reproducibility of research. To support responsible reporting, such guidelines should also address ethical considerations. However, the extent to which these elements are integrated into reporting checklists remains unclear. This study aimed to evaluate how ethical elements are incorporated in these guidelines.
Methods: We identified reporting guidelines indexed on the "Enhancing the Quality and Transparency of Health Research (EQUATOR) Network" website. On 30 January 2025, a random sample of 128 reporting guidelines and extensions was drawn from a total of 657. For each, we retrieved the associated development publication and extracted data into a standardised table. The assessed ethical elements included COI disclosure, sponsorship, authorship criteria, data sharing guidance, and protocol development and study registration. Data extraction for the first 13 guidelines was conducted independently and in duplicate. After achieving 100% agreement, the remaining data were extracted by one author, following "A MeaSurement Tool to Assess Systematic Reviews" (AMSTAR)-2 recommendations.
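The sampling step described above (128 of 657 indexed guidelines) amounts to simple random sampling without replacement; a sketch under assumed details (placeholder guideline IDs and an arbitrary fixed seed):

```python
# Draw a reproducible random sample of 128 guidelines from 657 entries.
import random

ALL_GUIDELINES = [f"guideline_{i}" for i in range(657)]  # placeholder IDs

rng = random.Random(2025)  # fixed seed for reproducibility (an assumption)
sample = rng.sample(ALL_GUIDELINES, 128)  # without replacement
```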
Results: The dataset comprised 101 original guidelines and 27 extensions of existing guidelines. Half of the included guidelines were published from 2015 onward, with 32.0% published between 2020 and 2024. The median year of publication was 2016. Approximately 90 of the 128 assessed guidelines focused on clinical studies. Over 70% of the guidelines did not include items related to conflicts of interest (COI) or sponsorship. Only 8.6% addressed COI and sponsorship jointly in a single item, while fewer than 9% covered them as two separate items. Notably, only two guidelines (1.6%) provided instructions for using the ICMJE disclosure form to report potential conflicts of interest. Nearly 20% of the guidelines offered guidance on study registration. Fewer than 30% recommended the development of a research protocol, and only 18.8% provided guidance on protocol sharing. Additionally, fewer than 10% of the checklists included guidance on authorship criteria or data sharing.
Conclusion: Ethical considerations are insufficiently addressed in current reporting guidelines. The absence of standardised items on COIs, funding, authorship, and data sharing represents a missed opportunity to promote transparency and research integrity. Future updates to reporting guidelines should systematically incorporate these elements.
{"title":"Exploring ethical elements in reporting guidelines: results from a research-on-research study.","authors":"Clovis Mariano Faggion, Carla Brigitte Susan Kohl","doi":"10.1186/s41073-025-00180-0","DOIUrl":"10.1186/s41073-025-00180-0","url":null,"abstract":"<p><strong>Background: </strong>Reporting guidelines are key tools for enhancing the transparency and reproducibility of research. To support responsible reporting, such guidelines should also address ethical considerations. However, the extent to which these elements are integrated into reporting checklists remains unclear. This study aimed to evaluate how ethical elements are incorporated in these guidelines.</p><p><strong>Methods: </strong>We identified reporting guidelines indexed on the \"Enhancing the Quality and Transparency of Health Research (EQUATOR) Network\" website. On 30 January 2025, a random sample of 128 reporting guidelines and extensions was drawn from a total of 657. For each, we retrieved the associated development publication and extracted data into a standardised table. The assessed ethical elements included COI disclosure, sponsorship, authorship criteria, data sharing guidance, and protocol development and study registration. Data extraction for the first 13 guidelines was conducted independently and in duplicate. After achieving 100% agreement, the remaining data were extracted by one author, following \"A MeaSurement Tool to Assess Systematic Reviews\" (AMSTAR)-2 recommendations.</p><p><strong>Results: </strong>The dataset comprised 101 original guidelines and 27 extensions of existing guidelines. Half of the included guidelines were published from 2015 onward, with 32.0% published between 2020 and 2024. The median year of publication was 2016. Approximately 90 of the 128 assessed guidelines focused on clinical studies. Over 70% of the guidelines did not include items related to conflicts of interest (COI) or sponsorship. 
Only 8.6% addressed COI and sponsorship jointly in a single item, while fewer than 9% covered them as two separate items. Notably, only two guidelines (1.6%) provided instructions for using the ICMJE disclosure form to report potential conflicts of interest. Nearly 20% of the guidelines offered guidance on study registration. Fewer than 30% recommended the development of a research protocol, and only 18.8% provided guidance on protocol sharing. Additionally, fewer than 10% of the checklists included guidance on authorship criteria or data sharing.</p><p><strong>Conclusion: </strong>Ethical considerations are insufficiently addressed in current reporting guidelines. The absence of standardised items on COIs, funding, authorship, and data sharing represents a missed opportunity to promote transparency and research integrity. Future updates to reporting guidelines should systematically incorporate these elements.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"20"},"PeriodicalIF":10.7,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12452000/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145115404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-08 | DOI: 10.1186/s41073-025-00178-8
Jeremy Y Ng, Malvika Krishnamurthy, Gursimran Deol, Wid Al-Zahraa Al-Khafaji, Vetrivel Balaji, Magdalene Abebe, Jyot Adhvaryu, Tejas Karrthik, Pranavee Mohanakanthan, Adharva Vellaparambil, Lex M Bouter, R Brian Haynes, Alfonso Iorio, Cynthia Lokker, Hervé Maisonneuve, Ana Marušić, David Moher
Background: Artificial intelligence chatbots (AICs) are designed to mimic human conversation through text or speech, offering both opportunities and challenges in scholarly publishing. While journal policies on AICs are becoming more defined, there is still a limited understanding of how editors-in-chief (EiCs) of biomedical journals view these tools. This survey examined EiCs' attitudes and perceptions toward the use of AICs in the scholarly publishing process, highlighting positive aspects, such as language and grammar support, and concerns regarding setup time, training requirements, and ethical considerations.
Methods: A cross-sectional survey was conducted, targeting EiCs of biomedical journals across multiple publishers. Of 3725 journals screened, 3381 eligible emails were identified through web scraping and manual verification. Survey invitations were sent to all identified EiCs. The survey remained open for five weeks, with three follow-up email reminders.
Results: The survey had a response rate of 16.5% (510 total responses) and a completion rate of 87.0%. Most respondents were familiar with AICs (66.7%); however, most had not used AICs in their editorial work (83.7%), and many expressed interest in further training (64.4%). EiCs acknowledged benefits such as language and grammar support (70.8%) but expressed mixed attitudes about the role of AICs in accelerating peer review. Commonly cited concerns included the initial time and resources required for setup (83.7%), training needs (83.9%), and ethical considerations (80.6%).
Conclusions: This study found that EiCs have mixed attitudes toward AICs: some acknowledged their potential to enhance editorial efficiency, particularly in tasks such as language editing, while others expressed concerns about the ethical implications, the time and resources required for implementation, and the need for additional training.
"Attitudes and perceptions of biomedical journal editors in chief towards the use of artificial intelligence chatbots in the scholarly publishing process: a cross-sectional survey." Research integrity and peer review, 10(1):19. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12416066/pdf/
Pub Date: 2025-08-30
DOI: 10.1186/s41073-025-00179-7
Carole Bandiera, Kate Lowrie, Donna Thomas, Sabuj Kanti Mistry, Elizabeth Harris, Mark F Harris, Parisa Aslani
We have been scammed in our online qualitative study by fraudulent participants who falsely claimed to be pharmacists or community health workers. These participants were interviewed before we discovered that they were not who they claimed to be.

In this commentary, we describe key indicators of potential imposters, such as a large number of emails received in a short period of time, emails with similar content and address structure, participants with a keen interest in the reimbursement, cameras switched off during interviews, and inconsistencies in participants' responses.

We provide recommendations on how to prevent future fraud, such as promoting the study to a closed network or groups on social media, encouraging participants to provide sources that verify their identity, ensuring that the camera is switched on during the entire interview, discouraging the use of artificial intelligence (AI) to answer questions or generate content (except where AI-based language tools are used to facilitate translation, understanding, or communication), providing reimbursements with local vouchers rather than international ones, and, where the participants are healthcare professionals, checking their registration numbers prior to the interview.

It is important for Human Research Ethics Committee members to consider measures to assess participant authenticity and reduce the risk of fraudulent participation. Additionally, universities and research institutions should develop guidance to educate researchers in this area. Published protocols, guidelines, and checklists for online qualitative studies, as well as participant information statements and consent forms, should be adapted to prevent and address potential fraud. For example, the COREQ checklist should be updated so that researchers report the actions undertaken to prevent and detect fraud, along with their experiences and responses if fraud occurred. Fraud in online research undermines the integrity and quality of online research.
Urgent actions are needed to raise awareness of this issue within the research community and prevent further occurrences of scams.
"I have been scammed in my qualitative research." Research integrity and peer review, 10(1):18. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12398116/pdf/
Pub Date: 2025-08-29
DOI: 10.1186/s41073-025-00176-w
Sara Steele, Tom Lavrijssen, Thomas Steckler
Background: Historically, systematic reviews of nonclinical published research articles in the life sciences have shown that the overall reporting of information on measures against bias is low: measures such as randomization, blinding, and sample size estimation are mentioned in only a minority of studies. The present study aims to provide an overview of recent reporting standards in a large sample of nonclinical articles, with a focus on statistical information.
Methods: Journals were randomly selected from Journal Citation Reports (Clarivate). Biomedical research articles published in 2020 from 10 journals were analyzed for their reporting standards using a checklist.
Results: In total, 860 articles were included in the study: 320 describing in vivo methods, 187 describing in vitro methods, and 353 including both in vivo and in vitro methods. The reporting rate of "randomization" ranged from 0% to 63% between journals for in vivo articles and from 0% to 4% for in vitro articles. The reporting rate of "blinded conduct of the experiments" ranged from 11% to 71% between journals for in vivo articles and from 0% to 86% for in vitro articles.
Conclusion: The analysis showed that reporting standards remained low, including for other statistical information. Additionally, our results suggest that reporting in articles on in vivo experiments is better than in articles on in vitro experiments. Furthermore, important differences in reporting standards appear to exist between journals.
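As an aside, per-journal reporting rates of the kind summarised above boil down to a simple tabulation over article-level checklist records. A minimal sketch, using hypothetical field names and data rather than the study's actual dataset or pipeline:

```python
from collections import defaultdict

def reporting_rates(records, item):
    """Per-journal fraction of articles reporting a given checklist item.

    records: iterable of dicts such as {"journal": "A", "randomization": True}.
    The "journal" and "randomization" field names are illustrative assumptions.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for rec in records:
        totals[rec["journal"]] += 1
        hits[rec["journal"]] += bool(rec[item])
    return {journal: hits[journal] / totals[journal] for journal in totals}
```

Run once per checklist item (randomization, blinding, sample size estimation, and so on), this produces the per-journal rates whose spread gives ranges like the 0% to 63% reported above.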
"Reporting of measures against bias in nonclinical published research studies: a journal-based comparison." Research integrity and peer review, 10(1):17. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12398162/pdf/
Pub Date: 2025-08-08
DOI: 10.1186/s41073-025-00172-0
Antonija Mijatović, Marija Franka Žuljević, Luka Ursić, Nensi Bralić, Miro Vuković, Marija Roguljić, Ana Marušić
Background: Inappropriate manipulations of digital images pose significant risks to research integrity. Here we assessed the capability of students and researchers to detect image duplications in biomedical images.
Methods: We conducted a pen-and-paper survey involving medical students who had been exposed to research paper images during their studies, as well as active researchers. We asked them to identify duplications in images of Western blots, cell cultures, and histological sections and evaluated their performance based on the number of correctly and incorrectly detected duplications.
Results: A total of 831 students and 26 researchers completed the survey during the 2023/2024 academic year. Out of 34 duplications of 21 unique image parts, the students correctly identified a median of 10 duplications (interquartile range [IQR] = 8-13) and made 2 mistakes (IQR = 1-4), whereas the researchers identified a median of 11 duplications (IQR = 8-14) and made 1 mistake (IQR = 1-3). There were no significant differences between the two groups in either the number of correctly detected duplications (p = .271, Cliff's δ = 0.126) or the number of mistakes (p = .731, Cliff's δ = 0.039). Both students and researchers identified a higher percentage of duplications in the Western blot images than in the cell or tissue images (p < .005 and Cohen's d = 0.72; p < .005 and Cohen's d = 1.01, respectively). For students, gender was a weak predictor of performance, with female participants finding slightly more duplications (p < .005, Cliff's δ = 0.158) but making more mistakes (p < .005, Cliff's δ = 0.239). The study year had no significant impact on student performance (p = .209; Cliff's δ = 0.085).
Conclusions: Despite differences in expertise, both students and researchers demonstrated limited proficiency in detecting duplications in digital images. Digital image manipulation may be better detected by automated screening tools, and researchers should have clear guidance on how to prepare digital images in scientific publications.
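For readers unfamiliar with the effect size used in this abstract, Cliff's δ compares two groups by the probability that a value drawn from one exceeds a value drawn from the other; values near 0 indicate heavy overlap between groups. A minimal O(n·m) sketch of the statistic itself (an illustration, not the authors' analysis code):

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: P(x > y) - P(x < y) over all cross-group pairs.

    Ranges from -1 to 1; 0 means the two groups overlap completely.
    """
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))
```

Identical groups yield δ = 0; by common rules of thumb, |δ| below roughly 0.15, as in the student-versus-researcher comparisons above, is considered a negligible difference.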
"How good are medical students and researchers in detecting duplications in digital images from research articles: a cross-sectional survey." Research integrity and peer review, 10(1):14. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12333226/pdf/
Pub Date: 2025-07-30
DOI: 10.1186/s41073-025-00175-x
Ashley J Tsang, John Z Sadler, E Sherwood Brown, Elizabeth Heitman
"Correction: Evaluating psychiatry journals' adherence to informed consent guidelines for case reports." Research integrity and peer review, 10(1):16. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12309193/pdf/