Ethical and regulatory challenges of large language models in medicine

Lancet Digital Health | IF 23.8 | CAS Region 1 (Medicine) | JCR Q1 (Medical Informatics) | Pub Date: 2024-04-23 | DOI: 10.1016/S2589-7500(24)00061-X
Jasmine Chiat Ling Ong PharmD , Shelley Yin-Hsi Chang MD , Wasswa William PhD , Prof Atul J Butte PhD , Prof Nigam H Shah PhD , Lita Sui Tjien Chew MMedSc , Nan Liu PhD , Prof Finale Doshi-Velez PhD , Wei Lu PhD , Prof Julian Savulescu PhD , Daniel Shu Wei Ting PhD
{"title":"大型医学语言模型在伦理和监管方面的挑战。","authors":"Jasmine Chiat Ling Ong PharmD ,&nbsp;Shelley Yin-Hsi Chang MD ,&nbsp;Wasswa William PhD ,&nbsp;Prof Atul J Butte PhD ,&nbsp;Prof Nigam H Shah PhD ,&nbsp;Lita Sui Tjien Chew MMedSc ,&nbsp;Nan Liu PhD ,&nbsp;Prof Finale Doshi-Velez PhD ,&nbsp;Wei Lu PhD ,&nbsp;Prof Julian Savulescu PhD ,&nbsp;Daniel Shu Wei Ting PhD","doi":"10.1016/S2589-7500(24)00061-X","DOIUrl":null,"url":null,"abstract":"<div><p>With the rapid growth of interest in and use of large language models (LLMs) across various industries, we are facing some crucial and profound ethical concerns, especially in the medical field. The unique technical architecture and purported emergent abilities of LLMs differentiate them substantially from other artificial intelligence (AI) models and natural language processing techniques used, necessitating a nuanced understanding of LLM ethics. In this Viewpoint, we highlight ethical concerns stemming from the perspectives of users, developers, and regulators, notably focusing on data privacy and rights of use, data provenance, intellectual property contamination, and broad applications and plasticity of LLMs. A comprehensive framework and mitigating strategies will be imperative for the responsible integration of LLMs into medical practice, ensuring alignment with ethical principles and safeguarding against potential societal risks.</p></div>","PeriodicalId":48534,"journal":{"name":"Lancet Digital Health","volume":"6 6","pages":"Pages e428-e432"},"PeriodicalIF":23.8000,"publicationDate":"2024-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S258975002400061X/pdfft?md5=39a73cb24f24224e1864903fab51b512&pid=1-s2.0-S258975002400061X-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Ethical and regulatory challenges of large language models in medicine\",\"authors\":\"Jasmine Chiat Ling Ong PharmD ,&nbsp;Shelley Yin-Hsi Chang MD ,&nbsp;Wasswa William PhD ,&nbsp;Prof Atul J Butte PhD ,&nbsp;Prof Nigam H Shah PhD ,&nbsp;Lita Sui Tjien Chew MMedSc ,&nbsp;Nan Liu PhD ,&nbsp;Prof Finale Doshi-Velez PhD ,&nbsp;Wei Lu PhD ,&nbsp;Prof Julian Savulescu PhD ,&nbsp;Daniel Shu Wei Ting PhD\",\"doi\":\"10.1016/S2589-7500(24)00061-X\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>With the rapid growth of interest in and use of large language models (LLMs) across various industries, we are facing some crucial and profound ethical concerns, especially in the medical field. The unique technical architecture and purported emergent abilities of LLMs differentiate them substantially from other artificial intelligence (AI) models and natural language processing techniques used, necessitating a nuanced understanding of LLM ethics. In this Viewpoint, we highlight ethical concerns stemming from the perspectives of users, developers, and regulators, notably focusing on data privacy and rights of use, data provenance, intellectual property contamination, and broad applications and plasticity of LLMs. 
A comprehensive framework and mitigating strategies will be imperative for the responsible integration of LLMs into medical practice, ensuring alignment with ethical principles and safeguarding against potential societal risks.</p></div>\",\"PeriodicalId\":48534,\"journal\":{\"name\":\"Lancet Digital Health\",\"volume\":\"6 6\",\"pages\":\"Pages e428-e432\"},\"PeriodicalIF\":23.8000,\"publicationDate\":\"2024-04-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S258975002400061X/pdfft?md5=39a73cb24f24224e1864903fab51b512&pid=1-s2.0-S258975002400061X-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Lancet Digital Health\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S258975002400061X\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MEDICAL INFORMATICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Lancet Digital Health","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S258975002400061X","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MEDICAL INFORMATICS","Score":null,"Total":0}
Citations: 0

Abstract

With the rapid growth of interest in and use of large language models (LLMs) across various industries, we are facing some crucial and profound ethical concerns, especially in the medical field. The unique technical architecture and purported emergent abilities of LLMs differentiate them substantially from other artificial intelligence (AI) models and natural language processing techniques used, necessitating a nuanced understanding of LLM ethics. In this Viewpoint, we highlight ethical concerns stemming from the perspectives of users, developers, and regulators, notably focusing on data privacy and rights of use, data provenance, intellectual property contamination, and broad applications and plasticity of LLMs. A comprehensive framework and mitigating strategies will be imperative for the responsible integration of LLMs into medical practice, ensuring alignment with ethical principles and safeguarding against potential societal risks.

Source journal metrics
CiteScore: 41.20
Self-citation rate: 1.60%
Articles published: 232
Review time: 13 weeks
About the journal: The Lancet Digital Health publishes important, innovative, and practice-changing research on any topic connected with digital technology in clinical medicine, public health, and global health. The journal's open access content crosses subject boundaries, building bridges between health professionals and researchers. By bringing together the most important advances in this multidisciplinary field, The Lancet Digital Health is the most prominent publishing venue in digital health. The journal publishes a range of content types, including Articles, Reviews, Comments, and Correspondence, contributing to the promotion of digital technologies in health practice worldwide.
Latest articles in this journal
- Attitudes and perceptions of medical researchers towards the use of artificial intelligence chatbots in the scientific process: an international cross-sectional survey.
- Building robust, proportionate, and timely approaches to regulation and evaluation of digital mental health technologies.
- Advancing the management of maternal, fetal, and neonatal infection through harnessing digital health innovations.
- Innovative diagnostic technologies: navigating regulatory frameworks through advances, challenges, and future prospects.
- Using digital health technologies to optimise antimicrobial use globally.