Pub Date: 2025-10-01 | Epub Date: 2024-08-13 | DOI: 10.1080/08989621.2024.2383349
Maarten Derksen, Stephanie Meirmans, Jonna Brenninkmeijer, Jeannette Pols, Annemarijn de Boer, Hans van Eyghen, Surya Gayet, Rolf Groenwold, Dennis Hernaus, Pim Huijnen, Nienke Jonker, Renske de Kleijn, Charlotte F Kroll, Angelos-Miltiadis Krypotos, Nynke van der Laan, Kim Luijken, Ewout Meijer, Rachel S A Pear, Rik Peels, Robin Peeters, Charlotte C S Rulkens, Christin Scholz, Nienke Smit, Rombert Stapel, Joost de Winter
Drawing on our experiences conducting replications, we describe the lessons we learned about replication studies and formulate recommendations for researchers, policy makers, and funders about the role of replication in science and how it should be supported and funded. We first identify a variety of benefits of doing replication studies. Second, we argue that it is often necessary to improve aspects of the original study, even if that means deviating from the original protocol. Third, we argue that replication studies highlight the importance of and need for more transparency in the research process, but also make clear how difficult that is. Fourth, we underline that it is worth trying out replication in the humanities. We finish by formulating recommendations regarding reproduction and replication research, aimed specifically at funders, editors and publishers, and universities and other research institutes.
Title: "Replication studies in the Netherlands: Lessons learned and recommendations for funders, publishers and editors, and universities." (Accountability in Research - Policies and Quality Assurance, pp. 1285-1303)
Pub Date: 2025-10-01 | Epub Date: 2024-05-02 | DOI: 10.1080/08989621.2024.2345714
Bor Luen Tang
Scientific research requires objectivity, impartiality and stringency. However, scholarly literature is littered with preliminary and explorative findings that lack reproducibility or validity. Some low-quality papers with perceived high impact have become publicly notable. The collective effort of fellow researchers who follow these false leads down blind alleys and impasses is a waste of time and resources, and this is particularly damaging for early career researchers. Furthermore, the lay public might also be affected by socioeconomic repercussions associated with the findings. It is arguable that the nature of scientific research is such that its frontiers are moved and shaped by cycles of published claims inducing in turn rounds of validation by others. Using recent example cases of room-temperature superconducting materials research, I argue instead that publication of perceptibly important or spectacular claims that lack reproducibility or validity is epistemically and socially irresponsible. This is even more so if authors refuse to share research materials and raw data for verification by others. Such acts do not advance, but would instead corrupt science, and should be prohibited by consensual governing rules on material and data sharing within the research community, with malpractices appropriately sanctioned.
Title: "Publishing important work that lacks validity or reproducibility - pushing frontiers or corrupting science?" (Accountability in Research - Policies and Quality Assurance, pp. 1159-1179)
Pub Date: 2025-10-01 | Epub Date: 2024-07-28 | DOI: 10.1080/08989621.2024.2382736
Jonathan R Kasstan, Geoff Pearson
Background: Qualitative Humanities research is perturbed by ethical review processes that routinely invoke epistemological assumptions skewed towards positivistic or deductive research, giving rise to several concerns, including increased risk aversion by University Research Ethics Committees (URECs) and the evaluation of qualitative research designs according to STEM standards.
Methods/materials: This paper presents findings from an AHRC-funded research network built to better understand how research ethics frameworks and processes might be reformed to more appropriately fit ethically challenging qualitative methodologies.
Results: There remains dissatisfaction with the current processes for awarding ethical approval and the subsequent management of ethical dimensions of projects. In spite of recent developments, UREC frameworks remain seriously flawed, with a wide divergence in the quality of expertise, procedures, and practices, leading to inconsistency in ethical approval awards.
Conclusions: These factors downgrade UK Higher Education research power in the Humanities and undermine our commitments to the researched. We propose a series of recommendations for reform.
Title: "Ethical committee frameworks and processes used to evaluate humanities research require reform: Findings from a UK-wide network consultation." (Accountability in Research - Policies and Quality Assurance, pp. 1265-1284)
Pub Date: 2025-10-01 | Epub Date: 2024-03-17 | DOI: 10.1080/08989621.2024.2329265
Greg Samsa
The three steps of a typical forensic statistical analysis are (1) verify that the raw data file is correct; (2) verify that the statistical analysis file derived from the raw data file is correct; and (3) verify that the statistical analyses are appropriate. We illustrate applying these three steps to a manuscript that was subsequently retracted, focusing on step 1. In the absence of an external source for comparison, the criteria for assessing the raw data file were internal consistency and plausibility. A forensic statistical analysis isn't like a murder mystery, and in many circumstances discovery of a mechanism for falsification or fabrication might not be realistic.
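Step 1 can be made concrete with simple programmatic checks. The sketch below is purely illustrative: the column names (`score`, `part_a`, `part_b`, `total`) and the plausibility bounds are hypothetical assumptions, not drawn from the retracted manuscript, but they show the kind of internal-consistency and plausibility tests one might run on a raw data file.

```python
# Hypothetical step-1 forensic checks on a raw data file:
# plausibility (values in a sensible range) and internal consistency
# (derived totals equal the sum of their parts). Field names and the
# 0-100 bound are illustrative assumptions, not from the actual case.

def check_raw_data(rows):
    """Return a list of human-readable problems found in the raw data."""
    problems = []
    for i, r in enumerate(rows):
        # Plausibility: a reported score should fall in the instrument's range.
        if not (0 <= r["score"] <= 100):
            problems.append(f"row {i}: score {r['score']} out of plausible range 0-100")
        # Internal consistency: the recorded total should equal the sum of its parts.
        if r["part_a"] + r["part_b"] != r["total"]:
            problems.append(f"row {i}: part_a + part_b != total")
    return problems

rows = [
    {"score": 87,  "part_a": 40, "part_b": 47, "total": 87},  # consistent
    {"score": 142, "part_a": 70, "part_b": 72, "total": 142}, # implausible score
    {"score": 55,  "part_a": 30, "part_b": 20, "total": 55},  # inconsistent sum
]
print(check_raw_data(rows))
```

On this toy input the second and third rows are flagged; in a real forensic analysis such flags are starting points for inquiry, not proof of misconduct.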
Title: "Fabrication in a study about honesty: A lost episode of Columbo illustrating how forensic statistics is performed." (Accountability in Research - Policies and Quality Assurance, pp. 1055-1071)
Pub Date: 2025-10-01 | Epub Date: 2024-05-01 | DOI: 10.1080/08989621.2024.2345713
Serhii Nazarovets, Jaime A Teixeira da Silva
Background: Following the 2023 surge in popularity of large language models like ChatGPT, significant ethical discussions emerged regarding their role in academic authorship. Notable ethics organizations, including the ICMJE and COPE, alongside leading publishers, have instituted ethics clauses explicitly stating that such models do not meet the criteria for authorship due to accountability issues.
Objective: This study aims to assess the prevalence and ethical implications of listing ChatGPT as an author on academic papers, in violation of existing ethical guidelines set by the ICMJE and COPE.
Methods: We conducted a comprehensive review using databases such as Web of Science and Scopus to identify instances where ChatGPT was credited as an author, co-author, or group author.
Results: Our search identified 14 papers featuring ChatGPT in such roles. In four of those papers, ChatGPT was listed as an "author" alongside the journal's editor or editor-in-chief. Several of the ChatGPT-authored papers have accrued dozens, even hundreds of citations according to Scopus, Web of Science, and Google Scholar.
Discussion: The inclusion of ChatGPT as an author on these papers raises critical questions about the definition of authorship and the accountability mechanisms in place for content produced by artificial intelligence. Despite the ethical guidelines, the widespread citation of these papers suggests a disconnect between ethical policy and academic practice.
Conclusion: The findings suggest a need for corrective measures to address these discrepancies. Immediate review and amendment of the listed papers is advised, highlighting a significant oversight in the enforcement of ethical standards in academic publishing.
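The screening step in the Methods can be sketched as a simple filter over bibliographic records. The record structure below is a hypothetical simplification, not the actual Web of Science or Scopus export format, but it illustrates how records crediting ChatGPT in an author role could be flagged.

```python
# Illustrative screening step: flag bibliographic records that credit
# ChatGPT as an author. The dict structure is a hypothetical stand-in
# for a real database export, not the Web of Science/Scopus schema.

def flag_chatgpt_authored(records):
    """Return records in which any listed author name mentions ChatGPT."""
    return [r for r in records
            if any("chatgpt" in author.lower() for author in r["authors"])]

records = [
    {"title": "Paper A", "authors": ["J. Smith", "ChatGPT"]},
    {"title": "Paper B", "authors": ["L. Chen"]},
    {"title": "Paper C", "authors": ["ChatGPT Generative Pre-trained Transformer", "R. Jones"]},
]

flagged = flag_chatgpt_authored(records)
print([r["title"] for r in flagged])  # prints ['Paper A', 'Paper C']
```

In practice the authors' screening also distinguished author, co-author, and group-author roles, which a fuller version of this filter would need to capture.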
Title: "ChatGPT as an "author": Bibliometric analysis to assess the validity of authorship." (Accountability in Research - Policies and Quality Assurance, pp. 1148-1158)
Pub Date: 2025-10-01 | Epub Date: 2024-04-11 | DOI: 10.1080/08989621.2024.2334722
Aditya K Panda
This letter addresses the significance of conducting and reporting systematic reviews and meta-analyses using the appropriate methods. It also highlights the importance of implementing the latest guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-2020, which ensures the maintenance of ethics, integrity, and accountability while reporting systematic reviews and meta-analyses.
Title: "Maintaining ethics, integrity, and accountability: Best practices for reporting a meta-analysis." (Accountability in Research - Policies and Quality Assurance, pp. 1310-1312)
Pub Date: 2025-09-25 | DOI: 10.1080/08989621.2025.2560886
Robert Klitzman
Background: Responsible Conduct of Research (RCR) courses seek to heighten awareness of the importance of mentor/mentee interactions and other topics, but questions remain - e.g., how best to train mentors/mentees to establish such relationships.
Description of exercise: This paper proposes an approach as a model to strengthen RCR education by engaging trainees more fully and actively, rather than passively. A classroom activity was developed that can enhance instructors' abilities to improve mentor/mentee interactions. The instructor divided classes into groups of roughly four trainees and had them think of a good mentor they had observed and list the traits/behaviors they liked. Groups then summarized their discussions for the class. The instructor recorded and integrated the responses. Each group then considered bad mentors, answering the same questions, and repeated the process for bad mentees and good mentees. The class then compared the four discussions. Trainees have commonly had both formal and informal mentors, seen both good and bad mentors and mentees, and often themselves served as mentors. Mentees thus connect abstract principles concerning mentorship to personal experiences, and reflect on their own interactions/roles, preferences, and rights/responsibilities.
Conclusion: This exercise suggests some benefits of recognizing personal/emotional, not just intellectual components in RCR, and has important implications for education, practice, and research.
Title: "A classroom exercise for improving mentor/mentee relationships." (Accountability in Research - Policies and Quality Assurance, pp. 1-5)
Pub Date: 2025-09-06 | DOI: 10.1080/08989621.2025.2551166
Justin N Nguyen, Christopher K Tuohino, Charlotte S Horowitz, Robert T Rubin
Objective: To compare self-publication rates by editors-in-chief (EICs) of psychiatry vs. medicine journals before, during, and after their editorships.
Methods: Frequency of self-publication by 25 psychiatry EICs and 22 medicine EICs across seven journals in each specialty was determined for 5 years before, the years during, and 5 years after their tenures. PubMed was used to identify original research and review articles subject to peer review. Two-way ANOVA with repeated measures was used to assess differences in articles published per year by specialty and time period.
Results: Mean self-publication rates before, during, and after editorship were 0.64, 1.46, and 0.66 articles/year for psychiatry EICs and 0.25, 0.31, and 0.13 articles/year for medicine EICs. ANOVA revealed significant main effects of journal type (psychiatry vs. medicine) (p = 0.003) and time period (before, during, after) (p = 0.003), and a significant interaction (p = 0.024).
Conclusion: Psychiatry EICs self-published discretionary articles significantly more frequently (4.7 times overall) than did their medicine counterparts. These findings do not necessarily imply abuse, but they highlight the need to further enhance editorial safeguards, increase transparency, and continue surveillance of adherence to publication guidelines, in order to further mitigate potential conflicts of interest in academic publishing.
Title: "Psychiatry vs. medicine editor-in-chiefs' research publications in their own journals before, during, and after their tenures - An exploratory study." (Accountability in Research - Policies and Quality Assurance, pp. 1-9)
Pub Date: 2025-09-05 | DOI: 10.1080/08989621.2025.2554696
Xiaoting Peng, Yufeng Cai, Dehua Hu, Yi Guo, Haixia Liu, Xusheng Wu, Qingyuan Hu
Background: Generative Artificial Intelligence (GenAI) significantly enhances medical research efficiency but raises ethical concerns regarding research integrity. The lack of systematic guidelines for its ethical use underscores the need to investigate GenAI's impact on researchers' awareness and behavior concerning integrity.
Methods/materials: A cross-sectional survey of 718 valid responses from Chinese medical researchers assessed GenAI's impact on research integrity using an extended Unified Theory of Acceptance and Use of Technology (UTAUT) model.
Results: The findings reveal that performance expectancy, effort expectancy, technical environment, trust in technology, and supporting conditions positively influence researchers' awareness of research integrity. Conversely, GenAI anxiety and perceived risks exert a significant negative impact. Furthermore, both supporting conditions and integrity awareness are positively associated with integrity behavior, while GenAI anxiety negatively affects such behavior.
Conclusion: The stakeholders in the medical research ecosystem should develop comprehensive guidelines for the responsible use of GenAI. Emphasis should be placed on optimizing the technical environment, enhancing trust and support structures, and embedding integrity safeguards, thereby promoting the synergistic development of technological innovation and ethical research practices.
Title: "Assessing the influence of generative artificial intelligence (GenAI) on awareness and behavior in medical research integrity: An online survey study." (Accountability in Research - Policies and Quality Assurance, pp. 1-32)
Pub Date : 2025-08-28 DOI: 10.1080/08989621.2025.2551890
Daniel Crean, Michał Wieczorek, Bert Gordijn, Alan J Kearns
Guided by Brey's Anticipatory Technology Ethics, we examined AI-based research mentors (AIRMs) through technology foresight as well as identification and evaluation of ethical issues. Scenario planning was employed to inform foresight, yielding four plausible future scenarios: 1) AIRMs are used solely for guidance, 2) AIRMs are used for guidance and monitoring, 3) AIRMs are banned, and 4) AIRMs are used solely for monitoring. Resnik's twelve principles informed the identification of ethical issues within these scenarios. Our analysis revealed that certain principles - openness, education, legality, and mutual respect - were violated in all scenarios. Others were contravened to varying degrees across the scenarios; for example, freedom was only violated in scenarios where AIRMs were used for monitoring. Furthermore, the guidance scenario showed that AIRM's responses could be manipulated to justify poor practice ("AIRMing"). In our evaluation, we weighed ethical issues against the benefits and found that the guidance-only scenario was the least problematic. While this scenario has benefits, such as providing expert guidance on research, ethical issues arise with regard to honesty, openness, credit, education, legality, and mutual respect. Therefore, policy must be developed to ensure that AIRMs are used solely for guidance while mitigating these issues.
{"title":"AI-based research mentors: Plausible scenarios and ethical issues.","authors":"Daniel Crean, Michał Wieczorek, Bert Gordijn, Alan J Kearns","doi":"10.1080/08989621.2025.2551890","DOIUrl":"https://doi.org/10.1080/08989621.2025.2551890","url":null,"abstract":"<p><p>Guided by Brey's Anticipatory Technology Ethics, we examined AI-based research mentors (AIRMs) through technology foresight as well as identification and evaluation of ethical issues. Scenario planning was employed to inform foresight, yielding four plausible future scenarios: 1) AIRMs are used solely for guidance, 2) AIRMs are used for guidance and monitoring, 3) AIRMs are banned, and 4) AIRMs are used solely for monitoring. Resnik's twelve principles informed the identification of ethical issues within these scenarios. Our analysis revealed that certain principles - openness, education, legality, and mutual respect - were violated in all scenarios. Others were contravened to varying degrees across the scenarios; for example, freedom was only violated in scenarios where AIRMs were used for monitoring. Furthermore, the guidance scenario showed that AIRM's responses could be manipulated to justify poor practice (\"AIRMing\"). In our evaluation, we weighed ethical issues against the benefits and found that the guidance-only scenario was the least problematic. While this scenario has benefits, such as providing expert guidance on research, ethical issues arise with regard to honesty, openness, credit, education, legality, and mutual respect. Therefore, policy must be developed to ensure that AIRMs are used solely for guidance while mitigating these issues.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-34"},"PeriodicalIF":4.0,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144977417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}