ChatGPT, et al … Artificial Intelligence, Authorship, and Medical Publishing.

Daniel H Solomon, Kelli D Allen, Patricia Katz, Amr H Sawalha, Ed Yelin
ACR Open Rheumatology, vol. 5, no. 6 (June 1, 2023): 288-289. DOI: 10.1002/acr2.11538. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10267801/pdf/
Citations: 8

Abstract

If you have not yet heard of ChatGPT, you will! This artificial intelligence (AI)-based chatbot is making waves in medicine, education, academic publishing, and more widely. If you are a clinician and dismissed the idea that AI-powered care is part of our future, think again. AI-powered chatbots like ChatGPT are being put to the test on clinical scenarios and board examinations and fare pretty well (1). GPT, generative pretrained transformer, describes the next generation in AI-powered chatbots that not only construct full sentences on topic but now synthesize information from many fields, from many sources, and with tremendous nuance. We tried ChatGPT recently, asking it to create a patient-facing educational brochure on medications for gout. Almost instantaneously, ChatGPT spit out a brochure that was accurate, written at the correct reading level, and appropriate in its supportive tone.

It is useful to explain a bit more about this type of AI. ChatGPT is just one of several large language model (LLM) interfaces for AI; many vendors are working on other interfaces that will have very similar capabilities. You might already be familiar with narrower forms of AI, which focus on one task, although you may not think of these applications as AI. These tasks might be as narrow as correcting grammar, detecting plagiarism, proofreading insurance forms, interpreting radiology imaging, or telling us the weather. However, with the advent of LLM interfaces, AI has become a co-author on scientific papers (2). Can an LLM AI tool really co-author a scientific paper? At this stage, no one doubts that these tools can generate useful text that might accurately synthesize previously collected or original data. But authorship raises other questions about accountability. If the methods that LLM AI tools use to generate text are not transparent (they probably never will be), then who is accountable?

One pillar of authorship according to the International Committee of Medical Journal Editors requires that authors agree "to be accountable for all aspects of the work..." (3). At this stage, it is not clear that LLM AI tools can be held accountable, so the American College of Rheumatology (ACR) journal editors and the ACR Committee on Journal Publications have agreed that co-authorship is not appropriate for these tools (see new Author Instructions; https://onlinelibrary.wiley.com/page/journal/25785745/homepage/guide-to-authors). Another potential issue is that LLM AI tools are trained on existing literature that may be inaccurate and biased. Thus, we also have concerns that unintended biases may be magnified through these tools, often in ways that are not apparent. We acknowledge that there will likely be many instances when such tools will be used to perform analyses or to contribute to a scientific project. Narrow AI tools are currently widely used in imaging analyses (4). Such contributions should be reported by referring to the specific versions of the tools used by authors and ensuring that the tools are publicly available, even if a fee is required. But, as the Journal of the American Medical Association (JAMA) has appropriately stated, "Authors must take responsibility for the integrity of the content generated by these models and tools" (5).

Some editors have wondered whether LLM AI tools could be used in the peer review process. The ACR journals use AI tools to check for plagiarism and image authenticity. Furthermore, our search tools use narrow AI to find appropriate peer reviewers. However, we have not put AI tools to use as actual "peer reviewers." Although we do not anticipate substituting human…
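To make the editorial's contrast concrete: the "narrow AI" tasks it mentions, such as plagiarism detection, are single-purpose tools rather than general text generators. Below is a toy sketch of one crude ingredient of such a tool, scoring text overlap between two passages with Jaccard similarity over word trigrams. This is an illustrative simplification, not how any production plagiarism checker (or the ACR journals' tooling) actually works; all names here are hypothetical.

```python
# Toy "narrow AI" example: flag possible text overlap between two
# passages using Jaccard similarity of word trigrams. Real plagiarism
# detectors are far more sophisticated (fuzzy matching, huge corpora).

def trigrams(text: str) -> set:
    """Return the set of lowercase word trigrams in a passage."""
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def overlap_score(a: str, b: str) -> float:
    """Jaccard similarity of the two passages' trigram sets (0.0-1.0)."""
    ga, gb = trigrams(a), trigrams(b)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

submitted = "AI powered chatbots are being put to the test on clinical scenarios"
published = "AI powered chatbots are being put to the test on board examinations"
print(f"overlap score: {overlap_score(submitted, published):.2f}")
```

A reviewer-facing tool would compare a submission against millions of indexed documents and report the highest-scoring matches above some threshold; the point here is only that the task is narrow and mechanical, unlike the open-ended generation that raises the authorship questions discussed above.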