A future role for health applications of large language models depends on regulators enforcing safety standards

IF 23.8 · CAS Tier 1 (Medicine) · Q1 MEDICAL INFORMATICS · Lancet Digital Health · Pub Date: 2024-08-21 · DOI: 10.1016/S2589-7500(24)00124-9
Oscar Freyer, Isabella Catharina Wiest Dr med, Prof Jakob Nikolas Kather Dr med, Stephen Gilbert PhD
{"title":"大型语言模型未来在健康领域的应用取决于监管机构是否执行安全标准","authors":"Oscar Freyer ,&nbsp;Isabella Catharina Wiest Dr med ,&nbsp;Prof Jakob Nikolas Kather Dr med ,&nbsp;Stephen Gilbert PhD","doi":"10.1016/S2589-7500(24)00124-9","DOIUrl":null,"url":null,"abstract":"<div><p>Among the rapid integration of artificial intelligence in clinical settings, large language models (LLMs), such as Generative Pre-trained Transformer-4, have emerged as multifaceted tools that have potential for health-care delivery, diagnosis, and patient care. However, deployment of LLMs raises substantial regulatory and safety concerns. Due to their high output variability, poor inherent explainability, and the risk of so-called AI hallucinations, LLM-based health-care applications that serve a medical purpose face regulatory challenges for approval as medical devices under US and EU laws, including the recently passed EU Artificial Intelligence Act. Despite unaddressed risks for patients, including misdiagnosis and unverified medical advice, such applications are available on the market. The regulatory ambiguity surrounding these tools creates an urgent need for frameworks that accommodate their unique capabilities and limitations. Alongside the development of these frameworks, existing regulations should be enforced. If regulators fear enforcing the regulations in a market dominated by supply or development by large technology companies, the consequences of layperson harm will force belated action, damaging the potentiality of LLM-based applications for layperson medical advice.</p></div>","PeriodicalId":48534,"journal":{"name":"Lancet Digital Health","volume":"6 9","pages":"Pages e662-e672"},"PeriodicalIF":23.8000,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589750024001249/pdfft?md5=2df13b013a0e89af3fe332b6bcb83ed0&pid=1-s2.0-S2589750024001249-main.pdf","citationCount":"0","resultStr":"{\"title\":\"A future role for health applications of large language models depends on regulators enforcing safety standards\",\"authors\":\"Oscar Freyer ,&nbsp;Isabella Catharina Wiest Dr med ,&nbsp;Prof Jakob Nikolas Kather Dr med ,&nbsp;Stephen Gilbert PhD\",\"doi\":\"10.1016/S2589-7500(24)00124-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Among the rapid integration of artificial intelligence in clinical settings, large language models (LLMs), such as Generative Pre-trained Transformer-4, have emerged as multifaceted tools that have potential for health-care delivery, diagnosis, and patient care. However, deployment of LLMs raises substantial regulatory and safety concerns. Due to their high output variability, poor inherent explainability, and the risk of so-called AI hallucinations, LLM-based health-care applications that serve a medical purpose face regulatory challenges for approval as medical devices under US and EU laws, including the recently passed EU Artificial Intelligence Act. Despite unaddressed risks for patients, including misdiagnosis and unverified medical advice, such applications are available on the market. The regulatory ambiguity surrounding these tools creates an urgent need for frameworks that accommodate their unique capabilities and limitations. Alongside the development of these frameworks, existing regulations should be enforced. 
If regulators fear enforcing the regulations in a market dominated by supply or development by large technology companies, the consequences of layperson harm will force belated action, damaging the potentiality of LLM-based applications for layperson medical advice.</p></div>\",\"PeriodicalId\":48534,\"journal\":{\"name\":\"Lancet Digital Health\",\"volume\":\"6 9\",\"pages\":\"Pages e662-e672\"},\"PeriodicalIF\":23.8000,\"publicationDate\":\"2024-08-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2589750024001249/pdfft?md5=2df13b013a0e89af3fe332b6bcb83ed0&pid=1-s2.0-S2589750024001249-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Lancet Digital Health\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2589750024001249\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MEDICAL INFORMATICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Lancet Digital Health","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2589750024001249","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MEDICAL INFORMATICS","Score":null,"Total":0}
Citations: 0

Abstract


Amid the rapid integration of artificial intelligence into clinical settings, large language models (LLMs), such as Generative Pre-trained Transformer 4 (GPT-4), have emerged as multifaceted tools with potential for health-care delivery, diagnosis, and patient care. However, the deployment of LLMs raises substantial regulatory and safety concerns. Due to their high output variability, poor inherent explainability, and the risk of so-called AI hallucinations, LLM-based health-care applications that serve a medical purpose face regulatory challenges for approval as medical devices under US and EU laws, including the recently passed EU Artificial Intelligence Act. Despite unaddressed risks for patients, including misdiagnosis and unverified medical advice, such applications are available on the market. The regulatory ambiguity surrounding these tools creates an urgent need for frameworks that accommodate their unique capabilities and limitations. Alongside the development of these frameworks, existing regulations should be enforced. If regulators shy away from enforcing these regulations in a market dominated by supply or development from large technology companies, the consequences of harm to laypeople will force belated action, damaging the potential of LLM-based applications for layperson medical advice.
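The abstract's claim of high output variability is the crux of the device-approval problem: the same input need not yield the same output, so conventional input-output validation breaks down. The following minimal Python sketch (not from the article; query_llm is a hypothetical stand-in that merely simulates sampled output so the example runs without a model) shows one way to quantify that variability by posing the same clinical question repeatedly and measuring answer agreement:

    import random
    from collections import Counter

    def query_llm(prompt: str, temperature: float = 0.7) -> str:
        # Hypothetical stand-in for a real chat-model API call; it simulates
        # sampled output so the sketch runs end to end without any model.
        if temperature == 0:
            return "answer A"
        return random.choice(["answer A", "answer A", "answer B"])

    def answer_agreement(prompt: str, n_runs: int = 20) -> float:
        # Fraction of runs that returned the single most common answer.
        # A deterministic device scores 1.0; a sampled LLM usually does not.
        answers = [query_llm(prompt) for _ in range(n_runs)]
        return Counter(answers).most_common(1)[0][1] / n_runs

    if __name__ == "__main__":
        print(answer_agreement("Does this presentation warrant urgent referral?"))

A regulator-facing test suite would run such agreement checks across a benchmark of clinical prompts; scores persistently below 1.0 are precisely what makes LLM-based applications hard to certify under frameworks written for deterministic software.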

Source journal: Lancet Digital Health
CiteScore: 41.20
Self-citation rate: 1.60%
Annual publication volume: 232
Time to review: 13 weeks
Journal introduction: The Lancet Digital Health publishes important, innovative, and practice-changing research on any topic connected with digital technology in clinical medicine, public health, and global health. The journal’s open access content crosses subject boundaries, building bridges between health professionals and researchers. By bringing together the most important advances in this multidisciplinary field, The Lancet Digital Health is the most prominent publishing venue in digital health. We publish a range of content types, including Articles, Reviews, Comments, and Correspondence, contributing to the promotion of digital technologies in health practice worldwide.
Latest articles from this journal:
Attitudes and perceptions of medical researchers towards the use of artificial intelligence chatbots in the scientific process: an international cross-sectional survey
Building robust, proportionate, and timely approaches to regulation and evaluation of digital mental health technologies
Advancing the management of maternal, fetal, and neonatal infection through harnessing digital health innovations
Innovative diagnostic technologies: navigating regulatory frameworks through advances, challenges, and future prospects
Using digital health technologies to optimise antimicrobial use globally