{"title":"Is AI my co-author? The ethics of using artificial intelligence in scientific publishing.","authors":"Barton Moffatt, Alicia Hall","doi":"10.1080/08989621.2024.2386285","DOIUrl":null,"url":null,"abstract":"<p><p>The recent emergence of Large Language Models (LLMs) and other forms of Artificial Intelligence (AI) has led people to wonder whether they could act as an author on a scientific paper. This paper argues that AI systems should not be included on the author by-line. We agree with current commentators that LLMs are incapable of taking responsibility for their work and thus do not meet current authorship guidelines. We identify other problems with responsibility and authorship. In addition, the problems go deeper as AI tools also do not write in a meaningful sense nor do they have persistent identities. From a broader publication ethics perspective, adopting AI authorship would have detrimental effects on an already overly competitive and stressed publishing ecosystem. Deterrence is possible as backward-looking tools will likely be able to identify past AI usage. Finally, we question the value of using AI to produce more research simply for publication's sake.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-17"},"PeriodicalIF":2.8000,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Accountability in Research-Policies and Quality Assurance","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1080/08989621.2024.2386285","RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MEDICAL ETHICS","Score":null,"Total":0}
Abstract
The recent emergence of Large Language Models (LLMs) and other forms of Artificial Intelligence (AI) has led people to wonder whether such systems could act as authors on scientific papers. This paper argues that AI systems should not be included in the author byline. We agree with current commentators that LLMs are incapable of taking responsibility for their work and thus do not meet current authorship guidelines. We also identify further problems concerning responsibility and authorship. The problems go deeper still, as AI tools do not write in a meaningful sense, nor do they have persistent identities. From a broader publication ethics perspective, adopting AI authorship would have detrimental effects on an already overly competitive and stressed publishing ecosystem. Deterrence is possible, as backward-looking tools will likely be able to identify past AI usage. Finally, we question the value of using AI to produce more research simply for publication's sake.
Journal overview:
Accountability in Research: Policies and Quality Assurance is devoted to the examination and critical analysis of systems for maximizing integrity in the conduct of research. It provides an interdisciplinary, international forum for the development of ethics, procedures, standards, policies, and concepts to encourage the ethical conduct of research and to enhance the validity of research results.
The journal welcomes views on advancing the integrity of research in the fields of general and multidisciplinary sciences, medicine, law, economics, statistics, management studies, public policy, politics, sociology, history, psychology, philosophy, ethics, and information science.
All submitted manuscripts are subject to initial appraisal by the Editor, and if found suitable for further consideration, to peer review by independent, anonymous expert referees.