Medical malpractice liability in large language model artificial intelligence: legal review and policy recommendations.

Journal of Osteopathic Medicine | IF 1.4 | Q2 (Medicine, General & Internal) | Pub Date: 2024-01-31 | eCollection Date: 2024-07-01 | DOI: 10.1515/jom-2023-0229
David O Shumway, Hayes J Hartman
Citations: 0

Abstract

The emergence of generative large language model (LLM) artificial intelligence (AI) represents one of the most profound developments in healthcare in decades, with the potential to create revolutionary and seismic changes in the practice of medicine as we know it. However, significant concerns have arisen over questions of liability for bad outcomes associated with LLM AI-influenced medical decision making. Although the authors were not able to identify a case in the United States that has been adjudicated on medical malpractice in the context of LLM AI at this time, sufficient precedent exists to interpret how analogous situations might be applied to these cases when they inevitably come to trial in the future. This commentary will discuss areas of potential legal vulnerability for clinicians utilizing LLM AI through review of past case law pertaining to third-party medical guidance and review the patchwork of current regulations relating to medical malpractice liability in AI. Finally, we will propose proactive policy recommendations including creating an enforcement duty at the US Food and Drug Administration (FDA) to require algorithmic transparency, recommend reliance on peer-reviewed data and rigorous validation testing when LLMs are utilized in clinical settings, and encourage tort reform to share liability between physicians and LLM developers.

Source Journal
Journal of Osteopathic Medicine (Health Professions: Complementary and Manual Therapy)
CiteScore: 2.20
Self-citation rate: 13.30%
Articles published: 118