Large language model usage guidelines in Korean medical journals: a survey using human-artificial intelligence collaboration.

IF 1 | Q3 MEDICINE, GENERAL & INTERNAL | Journal of Yeungnam medical science | Pub Date: 2025-01-01 | Epub Date: 2024-12-11 | DOI: 10.12701/jyms.2024.00794
Sangzin Ahn
{"title":"韩国医学期刊中的大型语言模型使用指南:使用人类-人工智能协作的调查。","authors":"Sangzin Ahn","doi":"10.12701/jyms.2024.00794","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Large language models (LLMs), the most recent advancements in artificial intelligence (AI), have profoundly affected academic publishing and raised important ethical and practical concerns. This study examined the prevalence and content of AI guidelines in Korean medical journals to assess the current landscape and inform future policy implementation.</p><p><strong>Methods: </strong>The top 100 Korean medical journals determined by Hirsh index were surveyed. Author guidelines were collected and screened by a human researcher and AI chatbot to identify AI-related content. The key components of LLM policies were extracted and compared across journals. The journal characteristics associated with the adoption of AI guidelines were also analyzed.</p><p><strong>Results: </strong>Only 18% of the surveyed journals had LLM guidelines, which is much lower than previously reported in international journals. However, the adoption rates increased over time, reaching 57.1% in the first quarter of 2024. High-impact journals were more likely to have AI guidelines. All journals with LLM guidelines required authors to declare LLM tool use and 94.4% prohibited AI authorship. The key policy components included emphasizing human responsibility (72.2%), discouraging AI-generated content (44.4%), and exempting basic AI tools (38.9%).</p><p><strong>Conclusion: </strong>While the adoption of LLM guidelines among Korean medical journals is lower than the global trend, there has been a clear increase in implementation over time. The key components of these guidelines align with international standards, but greater standardization and collaboration are needed to ensure the responsible and ethical use of LLMs in medical research and writing.</p>","PeriodicalId":74020,"journal":{"name":"Journal of Yeungnam medical science","volume":" ","pages":"14"},"PeriodicalIF":1.0000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11812075/pdf/","citationCount":"0","resultStr":"{\"title\":\"Large language model usage guidelines in Korean medical journals: a survey using human-artificial intelligence collaboration.\",\"authors\":\"Sangzin Ahn\",\"doi\":\"10.12701/jyms.2024.00794\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Large language models (LLMs), the most recent advancements in artificial intelligence (AI), have profoundly affected academic publishing and raised important ethical and practical concerns. This study examined the prevalence and content of AI guidelines in Korean medical journals to assess the current landscape and inform future policy implementation.</p><p><strong>Methods: </strong>The top 100 Korean medical journals determined by Hirsh index were surveyed. Author guidelines were collected and screened by a human researcher and AI chatbot to identify AI-related content. The key components of LLM policies were extracted and compared across journals. The journal characteristics associated with the adoption of AI guidelines were also analyzed.</p><p><strong>Results: </strong>Only 18% of the surveyed journals had LLM guidelines, which is much lower than previously reported in international journals. However, the adoption rates increased over time, reaching 57.1% in the first quarter of 2024. 
High-impact journals were more likely to have AI guidelines. All journals with LLM guidelines required authors to declare LLM tool use and 94.4% prohibited AI authorship. The key policy components included emphasizing human responsibility (72.2%), discouraging AI-generated content (44.4%), and exempting basic AI tools (38.9%).</p><p><strong>Conclusion: </strong>While the adoption of LLM guidelines among Korean medical journals is lower than the global trend, there has been a clear increase in implementation over time. The key components of these guidelines align with international standards, but greater standardization and collaboration are needed to ensure the responsible and ethical use of LLMs in medical research and writing.</p>\",\"PeriodicalId\":74020,\"journal\":{\"name\":\"Journal of Yeungnam medical science\",\"volume\":\" \",\"pages\":\"14\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2025-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11812075/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Yeungnam medical science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.12701/jyms.2024.00794\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/12/11 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q3\",\"JCRName\":\"MEDICINE, GENERAL & INTERNAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Yeungnam medical science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.12701/jyms.2024.00794","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/12/11 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
Citations: 0

Abstract

Background: Large language models (LLMs), the most recent advancements in artificial intelligence (AI), have profoundly affected academic publishing and raised important ethical and practical concerns. This study examined the prevalence and content of AI guidelines in Korean medical journals to assess the current landscape and inform future policy implementation.

Methods: The top 100 Korean medical journals, determined by the Hirsch index, were surveyed. Author guidelines were collected and screened by a human researcher and an AI chatbot to identify AI-related content. The key components of LLM policies were extracted and compared across journals. The journal characteristics associated with the adoption of AI guidelines were also analyzed.
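
To make the screening step concrete, the sketch below illustrates one way a human-AI workflow of this kind could be set up: a keyword pre-screen followed by a chatbot query, with every journal still routed to a human reviewer. The abstract does not name the chatbot or any tooling, so the ask_chatbot callable, the prompt wording, and the keyword list are illustrative assumptions rather than the study's actual pipeline.

import re
from typing import Callable

# Illustrative keyword list; the study's actual screening criteria are not given in the abstract.
AI_KEYWORDS = re.compile(
    r"artificial intelligence|large language model|LLM|ChatGPT|chatbot|generative AI",
    re.IGNORECASE,
)

def keyword_prescreen(guideline_text: str) -> bool:
    # First pass: does the author guideline mention AI-related terms at all?
    return bool(AI_KEYWORDS.search(guideline_text))

def screen_guideline(guideline_text: str, ask_chatbot: Callable[[str], str]) -> dict:
    # Second pass: ask a chatbot whether the guideline contains an LLM/AI policy.
    prompt = (
        "Does the following author guideline contain a policy on the use of "
        "large language models or other AI tools? Answer YES or NO.\n\n" + guideline_text
    )
    return {
        "keyword_hit": keyword_prescreen(guideline_text),
        "chatbot_answer": ask_chatbot(prompt),  # hypothetical chatbot call; any LLM API could sit here
        "needs_human_review": True,             # the survey kept a human researcher in the loop for every journal
    }

if __name__ == "__main__":
    def fake_chatbot(p: str) -> str:
        # Stand-in for a real LLM call, used only to make this sketch runnable.
        return "YES" if "chatgpt" in p.lower() else "NO"

    sample = "Authors must disclose any use of large language models such as ChatGPT."
    print(screen_guideline(sample, fake_chatbot))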

Results: Only 18% of the surveyed journals had LLM guidelines, which is much lower than previously reported in international journals. However, the adoption rates increased over time, reaching 57.1% in the first quarter of 2024. High-impact journals were more likely to have AI guidelines. All journals with LLM guidelines required authors to declare LLM tool use and 94.4% prohibited AI authorship. The key policy components included emphasizing human responsibility (72.2%), discouraging AI-generated content (44.4%), and exempting basic AI tools (38.9%).
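
For readers who prefer absolute numbers, the percentages above can be converted back into approximate journal counts. This assumes the denominators implied by the abstract: 100 surveyed journals overall, and the 18 journals with LLM guidelines for the policy-component figures; the counts below are back-of-the-envelope estimates, not numbers reported in the paper.

# Rough conversion of the reported percentages into journal counts.
# Assumption: policy-component percentages are relative to the 18 journals
# (18% of the 100 surveyed) that had LLM guidelines.
surveyed = 100
with_guidelines = round(0.18 * surveyed)  # 18 journals

policy_shares = {
    "require declaring LLM tool use": 1.000,
    "prohibit AI authorship": 0.944,
    "emphasize human responsibility": 0.722,
    "discourage AI-generated content": 0.444,
    "exempt basic AI tools": 0.389,
}

for policy, share in policy_shares.items():
    print(f"{policy}: ~{round(share * with_guidelines)} of {with_guidelines} journals")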

Conclusion: While the adoption of LLM guidelines among Korean medical journals is lower than the global trend, there has been a clear increase in implementation over time. The key components of these guidelines align with international standards, but greater standardization and collaboration are needed to ensure the responsible and ethical use of LLMs in medical research and writing.
