An ethics assessment tool for artificial intelligence implementation in healthcare: CARE-AI

IF 58.7 · Q1 (Medicine, Tier 1; Biochemistry & Molecular Biology) · Nature Medicine · Pub Date: 2024-10-18 · DOI: 10.1038/s41591-024-03310-1
Yilin Ning, Xiaoxuan Liu, Gary S. Collins, Karel G. M. Moons, Melissa McCradden, Daniel Shu Wei Ting, Jasmine Chiat Ling Ong, Benjamin Alan Goldstein, Siegfried K. Wagner, Pearse A. Keane, Eric J. Topol, Nan Liu
{"title":"医疗保健领域实施人工智能的伦理评估工具:CARE-AI","authors":"Yilin Ning, Xiaoxuan Liu, Gary S. Collins, Karel G. M. Moons, Melissa McCradden, Daniel Shu Wei Ting, Jasmine Chiat Ling Ong, Benjamin Alan Goldstein, Siegfried K. Wagner, Pearse A. Keane, Eric J. Topol, Nan Liu","doi":"10.1038/s41591-024-03310-1","DOIUrl":null,"url":null,"abstract":"<p>The deployment of artificial intelligence (AI)-powered prediction models in healthcare can lead to ethical concerns about their implementation and upscaling. For example, AI prediction models can hinder clinical decision-making if they advise different diagnoses or treatments by sex and gender or by race and ethnicity without clear justification. Recent guidance (such as the WHO guidance on ethics and governance of AI for health and the Dutch guideline on AI for healthcare) and legislation (such as the European Union AI Act and the White House Executive Order on Safe, Secure, and Trustworthy Development and Use of AI in United States) have outlined important principles for the implementation of AI, including ethical considerations<sup>1,2</sup>. Health systems have responded by establishing governance committees and processes to ensure the safe and equitable implementation of AI tools<sup>3</sup>. However, there is currently no assessment tool that can identify and mitigate ethical issues during the implementation of AI prediction models in healthcare practice, including for public health.</p><p>The development and validation of AI prediction models has benefited from detailed reporting and risk-of-bias tools, such as TRIPOD+AI<sup>4</sup> and PROBAST (with its forthcoming AI extension) for fairness and bias control and CLAIM<sup>5</sup> for data privacy, security and interpretability of AI imaging studies. However, when planning the implementation of a rigorously developed and well-performing AI prediction model in healthcare practice, existing recommendations and guidance on ethics are sparse and lack operational detail. For example, the DECIDE-AI reporting guideline<sup>6</sup> contains a small number of ethics-related recommendations for early clinical evaluation of AI concerning equity, safety and human-AI interaction, and FUTURE-AI<sup>7</sup> provides recommendations based on six principles (fairness, universality, traceability, usability, robustness and explainability) in model design, development, validation and deployment. A bioethics-centric delivery science toolkit for responsible AI implementation in healthcare is needed<sup>8</sup>.</p>","PeriodicalId":19037,"journal":{"name":"Nature Medicine","volume":null,"pages":null},"PeriodicalIF":58.7000,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An ethics assessment tool for artificial intelligence implementation in healthcare: CARE-AI\",\"authors\":\"Yilin Ning, Xiaoxuan Liu, Gary S. Collins, Karel G. M. Moons, Melissa McCradden, Daniel Shu Wei Ting, Jasmine Chiat Ling Ong, Benjamin Alan Goldstein, Siegfried K. Wagner, Pearse A. Keane, Eric J. Topol, Nan Liu\",\"doi\":\"10.1038/s41591-024-03310-1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The deployment of artificial intelligence (AI)-powered prediction models in healthcare can lead to ethical concerns about their implementation and upscaling. 
For example, AI prediction models can hinder clinical decision-making if they advise different diagnoses or treatments by sex and gender or by race and ethnicity without clear justification. Recent guidance (such as the WHO guidance on ethics and governance of AI for health and the Dutch guideline on AI for healthcare) and legislation (such as the European Union AI Act and the White House Executive Order on Safe, Secure, and Trustworthy Development and Use of AI in United States) have outlined important principles for the implementation of AI, including ethical considerations<sup>1,2</sup>. Health systems have responded by establishing governance committees and processes to ensure the safe and equitable implementation of AI tools<sup>3</sup>. However, there is currently no assessment tool that can identify and mitigate ethical issues during the implementation of AI prediction models in healthcare practice, including for public health.</p><p>The development and validation of AI prediction models has benefited from detailed reporting and risk-of-bias tools, such as TRIPOD+AI<sup>4</sup> and PROBAST (with its forthcoming AI extension) for fairness and bias control and CLAIM<sup>5</sup> for data privacy, security and interpretability of AI imaging studies. However, when planning the implementation of a rigorously developed and well-performing AI prediction model in healthcare practice, existing recommendations and guidance on ethics are sparse and lack operational detail. For example, the DECIDE-AI reporting guideline<sup>6</sup> contains a small number of ethics-related recommendations for early clinical evaluation of AI concerning equity, safety and human-AI interaction, and FUTURE-AI<sup>7</sup> provides recommendations based on six principles (fairness, universality, traceability, usability, robustness and explainability) in model design, development, validation and deployment. A bioethics-centric delivery science toolkit for responsible AI implementation in healthcare is needed<sup>8</sup>.</p>\",\"PeriodicalId\":19037,\"journal\":{\"name\":\"Nature Medicine\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":58.7000,\"publicationDate\":\"2024-10-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Nature Medicine\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1038/s41591-024-03310-1\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"BIOCHEMISTRY & MOLECULAR BIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nature Medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1038/s41591-024-03310-1","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BIOCHEMISTRY & MOLECULAR BIOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

The deployment of artificial intelligence (AI)-powered prediction models in healthcare can lead to ethical concerns about their implementation and upscaling. For example, AI prediction models can hinder clinical decision-making if they advise different diagnoses or treatments by sex and gender or by race and ethnicity without clear justification. Recent guidance (such as the WHO guidance on ethics and governance of AI for health and the Dutch guideline on AI for healthcare) and legislation (such as the European Union AI Act and the White House Executive Order on Safe, Secure, and Trustworthy Development and Use of AI in the United States) have outlined important principles for the implementation of AI, including ethical considerations1,2. Health systems have responded by establishing governance committees and processes to ensure the safe and equitable implementation of AI tools3. However, there is currently no assessment tool that can identify and mitigate ethical issues during the implementation of AI prediction models in healthcare practice, including for public health.
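The subgroup concern raised above (models that advise different outputs by sex or by race and ethnicity without clear justification) is typically surfaced by auditing a model's performance separately for each demographic group. A minimal sketch of such an audit follows; the column names label, pred, sex and ethnicity, and the 0.5 decision threshold are illustrative assumptions, not anything specified in the paper.

```python
# Illustrative sketch of a subgroup performance audit for an AI
# prediction model. Data layout and threshold are hypothetical.
import pandas as pd
from sklearn.metrics import recall_score, roc_auc_score

def audit_by_subgroup(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compare sensitivity (TPR) and AUROC of model predictions across
    the levels of one demographic attribute. Assumes every subgroup
    contains both outcome classes (roc_auc_score requires this)."""
    rows = []
    for level, sub in df.groupby(group_col):
        rows.append({
            group_col: level,
            "n": len(sub),
            # Binarize the risk score at an assumed operating point of 0.5.
            "sensitivity": recall_score(sub["label"], sub["pred"] >= 0.5),
            "auroc": roc_auc_score(sub["label"], sub["pred"]),
        })
    return pd.DataFrame(rows)

# Hypothetical usage: df has columns label (0/1 outcome), pred (risk
# score in [0, 1]), and demographic attributes such as sex, ethnicity.
# for attr in ["sex", "ethnicity"]:
#     print(audit_by_subgroup(df, attr))
```

Large gaps in sensitivity or AUROC between subgroups do not prove a model is unethical, but they flag exactly the kind of unjustified differential behavior that an implementation-stage ethics review would need to examine.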

The development and validation of AI prediction models have benefited from detailed reporting and risk-of-bias tools, such as TRIPOD+AI4 and PROBAST (with its forthcoming AI extension) for fairness and bias control, and CLAIM5 for data privacy, security and interpretability of AI imaging studies. However, when planning the implementation of a rigorously developed and well-performing AI prediction model in healthcare practice, existing recommendations and guidance on ethics are sparse and lack operational detail. For example, the DECIDE-AI reporting guideline6 contains a small number of ethics-related recommendations for early clinical evaluation of AI concerning equity, safety and human-AI interaction, and FUTURE-AI7 provides recommendations based on six principles (fairness, universality, traceability, usability, robustness and explainability) across model design, development, validation and deployment. A bioethics-centric delivery science toolkit for responsible AI implementation in healthcare is needed8.
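As one illustration of the operational detail the authors find missing, the six FUTURE-AI principles named above could be tracked per deployment as a simple assessment record. The sketch below is a hypothetical representation only; neither FUTURE-AI nor CARE-AI prescribes this structure, and the field names and status values are assumptions.

```python
# Hypothetical sketch: recording an assessment against the six FUTURE-AI
# principles named in the text. Not the official FUTURE-AI or CARE-AI
# instrument; structure and status vocabulary are assumptions.
from dataclasses import dataclass, field

PRINCIPLES = ("fairness", "universality", "traceability",
              "usability", "robustness", "explainability")

@dataclass
class PrincipleAssessment:
    status: str = "not_assessed"   # assumed values: pass / concern / not_assessed
    evidence: str = ""             # free-text pointer to the supporting audit

@dataclass
class DeploymentAssessment:
    model_name: str
    items: dict = field(
        default_factory=lambda: {p: PrincipleAssessment() for p in PRINCIPLES}
    )

    def open_concerns(self) -> list:
        """Principles not yet cleared, i.e. anything other than 'pass'."""
        return [p for p, a in self.items.items() if a.status != "pass"]

# Hypothetical usage:
# a = DeploymentAssessment("sepsis-risk-v2")
# a.items["fairness"].status = "concern"
# a.items["fairness"].evidence = "sensitivity gap between sex subgroups"
# print(a.open_concerns())
```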

Source journal: Nature Medicine (Medicine; Biochemistry & Molecular Biology)
CiteScore: 100.90
Self-citation rate: 0.70%
Annual articles: 525
Review time: 1 month
About the journal: Nature Medicine is a monthly journal publishing original peer-reviewed research in all areas of medicine. The publication focuses on originality, timeliness, interdisciplinary interest, and the impact on improving human health. In addition to research articles, Nature Medicine also publishes commissioned content such as News, Reviews, and Perspectives. This content aims to provide context for the latest advances in translational and clinical research, reaching a wide audience of M.D. and Ph.D. readers. All editorial decisions for the journal are made by a team of full-time professional editors. Nature Medicine considers all types of clinical research, including:
- Case reports and small case series
- Clinical trials, whether phase 1, 2, 3 or 4
- Observational studies
- Meta-analyses
- Biomarker studies
- Public and global health studies
Nature Medicine is also committed to facilitating communication between translational and clinical researchers. As such, the journal considers "hybrid" studies with preclinical and translational findings reported alongside data from clinical studies.
Latest articles from this journal:
- An ethics assessment tool for artificial intelligence implementation in healthcare: CARE-AI
- Systems education can train the next generation of scientists and clinicians
- How to prepare for the next inevitable Ebola outbreak: lessons from West Africa
- Helicobacter pylori, gastric cancer and the screening conundrum
- Health AI needs meaningful human involvement: lessons from war