Obvious artificial intelligence-generated anomalies in published journal articles: A call for enhanced editorial diligence

Learned Publishing · Impact Factor 2.2 · JCR Q2, Information Science & Library Science · Published: 6 September 2024 · DOI: 10.1002/leap.1626
Bashar Haruna Gulumbe

Abstract

In the last decade, artificial intelligence (AI) has revolutionized virtually every aspect of our lives, marking a transformative era of technological advancement and integration (Bohr & Memarzadeh, 2020; Verganti et al., 2020). From the way we interact with our devices through voice-activated assistants, to the convenience of personalized recommendations on streaming services, AI has seamlessly woven itself into the fabric of daily existence. This pervasive influence of AI extends beyond everyday consumer technology, profoundly impacting sectors such as healthcare (Rajpurkar et al., 2022), where algorithms diagnose diseases with unprecedented accuracy, and transportation (Bharadiya, 2023), with the advent of autonomous vehicles reshaping notions of mobility and safety.

This widespread integration of AI has not spared the field of academic publishing (Ganjavi et al., 2024), where its influence has instigated a series of challenges and potential pitfalls. The introduction of AI into research and writing processes, intended to facilitate and enhance the arduous tasks of data analysis and literature review, has instead opened a Pandora's box of issues. Among the most significant concerns are ethical and practical issues related to the application of AI in publication (Ganjavi et al., 2024; Samuel et al., 2021). Recognizing these dynamics, the STM report (2023) offers practical guidelines tailored specifically for the use of generative AI within this field. It clearly differentiates the roles of generative AI, from its simple use as an authorial aid, which necessitates no further reporting, to its more advanced implementations. Moreover, universities and publishers globally are developing policies to govern the use of generative AI in academic writing. These guidelines are crafted to steer authors through the intricate and diverse applications of AI, ensuring that its advantages are maximized while effectively mitigating potential risks (Gulumbe et al., 2024).

Despite these guidelines, the academic community has witnessed the troubling emergence of clear AI-generated anomalies within published articles (Wong, 2024). Such instances serve as a stark reminder of the fine balance between leveraging AI for its undeniable benefits and the imperative need for the academic community to address AI-related discrepancies. These discrepancies not only undermine the integrity of scholarly work but also pose a threat to the foundational principles of academic rigour and trust.

The crux of the issue lies not in the use of AI per se but in the apparent lack of editorial oversight that has allowed evidently flawed AI-generated content to slip through the checks and balances of the peer-review process. Recent events underline this concern, illuminating a dire need for more stringent editorial standards. For example, a paper entitled ‘Cellular Functions of Spermatogonial Stem Cells in Relation to the JAK/STAT Signaling Pathway’, published by Frontiers in Cell and Developmental Biology in February 2024 and since retracted (Guo et al., 2024), became a subject of controversy in both social and mainstream media. In the paper, the researchers used Midjourney to depict a rat's reproductive organs; the result was a cartoon rodent with comically oversized genitalia, annotated with nonsensical labels. In another example, an article entitled ‘The Three-Dimensional Porous Mesh Structure of Cu-Based Metal-Organic-Framework – Aramid Cellulose Separator Enhances the Electrochemical Performance of Lithium Metal Anode Batteries’ (Zhang et al., 2024), published in Surfaces and Interfaces, a Q1 journal with an impact factor of 6.2, featured an introduction clearly bearing the hallmarks of AI-generated text: on closer examination, it showed a distinct lack of critical analysis and coherence.

Similarly, in a separate study published in Radiology Case Reports, titled ‘Successful Management of an Iatrogenic Portal Vein and Hepatic Artery Injury in a 4-Month-Old Female Patient: A Case Report and Literature Review’ (Bader et al., 2024), a segment of the text notably diverges from the expected academic discourse. Specifically, the passage begins, ‘In summary, the management of bilateral iatrogenic…’, then abruptly transitions into a disclaimer typical of AI-generated content: ‘I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model. I can offer general guidance on managing injuries to the hepatic artery, portal vein, and bile duct. However, for individual cases, it's imperative to seek the expertise of a medical professional who possesses detailed knowledge of the patient's medical history and can offer tailored advice’. This excerpt underlines the critical issue of AI-generated text appearing in scholarly articles, spotlighting the pressing need for rigorous editorial oversight to maintain the integrity and accuracy of academic publishing. These instances are just a few examples of poor manuscript handling, and they do not occur in isolation. By simply searching phrases like ‘As an AI language model’, ‘I don't have access to real-time data’, and ‘As of my last knowledge update’, one can find hundreds of papers containing text generated by AI. These papers, which presumably passed through initial assessment, peer review, and copy-editing processes, highlight a significant oversight in the current academic publishing paradigm.
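The phrase search described above lends itself to a simple automated screening pass that editorial teams could run over submitted text before review. The sketch below is a minimal, hypothetical heuristic, not a real detector: the phrase list is drawn from the examples in this piece, and the function name is illustrative.

```python
import re

# Telltale chatbot disclaimers cited in the text. This list is illustrative;
# a production screen would need a broader, regularly updated set.
TELLTALE_PHRASES = [
    "as an AI language model",
    "I don't have access to real-time data",
    "as of my last knowledge update",
]

def flag_ai_boilerplate(text: str) -> list[str]:
    """Return every telltale phrase found in `text` (case-insensitive)."""
    return [
        phrase
        for phrase in TELLTALE_PHRASES
        if re.search(re.escape(phrase), text, flags=re.IGNORECASE)
    ]

# Example: a fragment echoing the Bader et al. incident described above.
sample = ("In summary, the management of bilateral iatrogenic... "
          "I'm very sorry, but I don't have access to real-time data.")
print(flag_ai_boilerplate(sample))  # → ["I don't have access to real-time data"]
```

Such a check would only catch the most blatant residue, of course; it says nothing about fluent AI-generated prose, which is precisely why the piece argues that detection tooling cannot replace editorial scrutiny.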

These instances are particularly alarming when considering the standing of the publishers involved—esteemed institutions that have long been regarded as gatekeepers of quality and scholastic excellence. Such oversights suggest that current editorial processes may not be equipped to identify the subtleties of AI-generated text, which often mimics the structure and tone of scholarly writing but lacks the nuanced insight and critical thinking foundational to academic discourse.

The integration of AI into scholarly publishing introduces several significant gatekeeping challenges (Gulumbe et al., 2024; Wise et al., 2024), key among them being the development of reliable mechanisms for detecting AI-generated content (Chaka, 2023; Wang et al., 2023). Despite substantial efforts from both the academic and technological sectors, a dependable generative-AI detection tool has yet to be realized. Current methodologies often falter in accurately differentiating between human and AI-generated texts (Chaka, 2023), an issue exacerbated by the continuous evolution and increasing sophistication of AI technologies. The inherent variability of AI-generated content, particularly its capacity to emulate human linguistic traits, poses substantial challenges for existing algorithms, leading to inconsistent results. This underlines the urgent need for ongoing research and enhancement of AI-detection techniques to keep pace with advances in generative AI capabilities.

In addition to the technical challenges, the gatekeeping role is further complicated by ethical and operational considerations (Gendron et al., 2022; Wise et al., 2024). The subtlety with which AI tools now mimic human reasoning and writing styles raises profound ethical questions about authorship and originality (Gulumbe et al., 2024), complicating the traditional roles of editors and reviewers. There is also significant concern about the transparency of AI use in research and publication processes (Gulumbe et al., 2024). Ensuring that authors disclose the extent of AI involvement in their work is crucial for maintaining the integrity of the academic record, but disclosure alone is not enough. Moreover, the rapid adoption of AI tools across different disciplines demands a scalable and flexible approach to gatekeeping that can accommodate diverse fields and types of content. As AI technologies permeate deeper into the fabric of academic work, the scholarly community must not only develop robust technological solutions but also foster a culture of integrity and transparency that upholds the foundational principles of scholarly communication.

In response to the advancements in AI, academic journals have adopted varying stances on the incorporation of AI-generated visual content (Gulumbe et al., 2024; Inam et al., 2024). Springer Nature, distinguishing itself with a more stringent approach, has prohibited the use of AI-generated images, videos, and illustrations in the majority of its journal articles, with an exception for those directly addressing AI topics (Wong, 2024). Conversely, journals within the Science family adopt a policy requiring explicit editorial consent for the inclusion of AI-generated text, figures, or images, unless the manuscript explicitly focuses on AI or machine learning themes (Wong, 2024). On another front, PLoS One permits the use of AI tools on the condition that researchers fully disclose the specific tools employed, their application methodology, and the measures taken to ensure the integrity of the resultant content (Wong, 2024).

While the measures taken by journal publishers—ranging from outright bans to mandated disclosures of AI-generated content—represent a step toward addressing the challenges posed by AI in academic publishing, these policies alone are insufficient. The simple act of declaring AI use does not safeguard against the publication of gibberish or ensure the integrity of the content: authors may neglect to declare AI assistance or, despite declarations, still manage to publish flawed content. This situation underscores the need for academic gatekeepers, including editorial teams and publishers, to intensify their efforts beyond mere policy enactments.

To strengthen the foundation of academic integrity in the face of the proliferation of AI-generated content, this piece therefore advocates the implementation of the following strategies:

The emergence of AI-generated anomalies within the pages of esteemed scholarly publications has sounded an urgent alarm across the academic publishing landscape. This situation demands a concerted response from all involved parties—authors, reviewers, editors, and publishers alike—to adopt and enforce more rigorous editorial standards and practices. Such measures are critical not only to preserving the credibility of individual works and the journals that disseminate them but also to maintaining the foundational trust essential to scholarly discourse. In alignment with these enhanced practices, the adoption of specialized software tools tailored to identifying AI-generated content, along with the development of universally recognized AI-detection protocols, should be considered integral components. These tools and protocols will bolster the editorial process, ensuring that publications can effectively manage and mitigate the complexities introduced by AI. Other measures, including regular updates to keep pace with the swift advancements in AI technology, are crucial for safeguarding the integrity and reliability of scholarly communications. This pivotal moment serves as both a wake-up call and a guiding light, steering us toward advanced, forward-looking strategies that ensure the enduring quality and dependability of academic output. As we navigate this era increasingly shaped by AI, our collective efforts will continue to reinforce the legacy and future of scholarly communication, affirming our dedication to the core principles of academic excellence and integrity.
