The selective deployment of AI in healthcare: An ethical algorithm for algorithms

Bioethics | Pub Date: 2024-03-30 | DOI: 10.1111/bioe.13281
Robert Vandersluis, Julian Savulescu
{"title":"在医疗保健领域有选择地部署人工智能:算法的道德算法。","authors":"Robert Vandersluis,&nbsp;Julian Savulescu","doi":"10.1111/bioe.13281","DOIUrl":null,"url":null,"abstract":"<p>Machine-learning algorithms have the potential to revolutionise diagnostic and prognostic tasks in health care, yet algorithmic performance levels can be materially worse for subgroups that have been underrepresented in algorithmic training data. Given this epistemic deficit, the inclusion of underrepresented groups in algorithmic processes can result in harm. Yet delaying the deployment of algorithmic systems until more equitable results can be achieved would avoidably and foreseeably lead to a significant number of unnecessary deaths in well-represented populations. Faced with this dilemma between equity and utility, we draw on two case studies involving breast cancer and melanoma to argue for the selective deployment of diagnostic and prognostic tools for some well-represented groups, even if this results in the temporary exclusion of underrepresented patients from algorithmic approaches. We argue that this approach is justifiable when the inclusion of underrepresented patients would cause them to be harmed. While the context of historic injustice poses a considerable challenge for the ethical acceptability of selective algorithmic deployment strategies, we argue that, at least for the case studies addressed in this article, the issue of historic injustice is better addressed through nonalgorithmic measures, including being transparent with patients about the nature of the current epistemic deficits, providing additional services to algorithmically excluded populations, and through urgent commitments to gather additional algorithmic training data from excluded populations, paving the way for universal algorithmic deployment that is accurate for all patient groups. These commitments should be supported by regulation and, where necessary, government funding to ensure that any delays for excluded groups are kept to the minimum. We offer an ethical algorithm for algorithms—showing when to ethically delay, expedite, or selectively deploy algorithmic systems in healthcare settings.</p>","PeriodicalId":1,"journal":{"name":"Accounts of Chemical Research","volume":null,"pages":null},"PeriodicalIF":16.4000,"publicationDate":"2024-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/bioe.13281","citationCount":"0","resultStr":"{\"title\":\"The selective deployment of AI in healthcare\",\"authors\":\"Robert Vandersluis,&nbsp;Julian Savulescu\",\"doi\":\"10.1111/bioe.13281\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Machine-learning algorithms have the potential to revolutionise diagnostic and prognostic tasks in health care, yet algorithmic performance levels can be materially worse for subgroups that have been underrepresented in algorithmic training data. Given this epistemic deficit, the inclusion of underrepresented groups in algorithmic processes can result in harm. Yet delaying the deployment of algorithmic systems until more equitable results can be achieved would avoidably and foreseeably lead to a significant number of unnecessary deaths in well-represented populations. 
Faced with this dilemma between equity and utility, we draw on two case studies involving breast cancer and melanoma to argue for the selective deployment of diagnostic and prognostic tools for some well-represented groups, even if this results in the temporary exclusion of underrepresented patients from algorithmic approaches. We argue that this approach is justifiable when the inclusion of underrepresented patients would cause them to be harmed. While the context of historic injustice poses a considerable challenge for the ethical acceptability of selective algorithmic deployment strategies, we argue that, at least for the case studies addressed in this article, the issue of historic injustice is better addressed through nonalgorithmic measures, including being transparent with patients about the nature of the current epistemic deficits, providing additional services to algorithmically excluded populations, and through urgent commitments to gather additional algorithmic training data from excluded populations, paving the way for universal algorithmic deployment that is accurate for all patient groups. These commitments should be supported by regulation and, where necessary, government funding to ensure that any delays for excluded groups are kept to the minimum. We offer an ethical algorithm for algorithms—showing when to ethically delay, expedite, or selectively deploy algorithmic systems in healthcare settings.</p>\",\"PeriodicalId\":1,\"journal\":{\"name\":\"Accounts of Chemical Research\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":16.4000,\"publicationDate\":\"2024-03-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/bioe.13281\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Accounts of Chemical Research\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/bioe.13281\",\"RegionNum\":1,\"RegionCategory\":\"化学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"CHEMISTRY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Accounts of Chemical Research","FirstCategoryId":"98","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/bioe.13281","RegionNum":1,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CHEMISTRY, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Machine-learning algorithms have the potential to revolutionise diagnostic and prognostic tasks in health care, yet algorithmic performance levels can be materially worse for subgroups that have been underrepresented in algorithmic training data. Given this epistemic deficit, the inclusion of underrepresented groups in algorithmic processes can result in harm. Yet delaying the deployment of algorithmic systems until more equitable results can be achieved would avoidably and foreseeably lead to a significant number of unnecessary deaths in well-represented populations. Faced with this dilemma between equity and utility, we draw on two case studies involving breast cancer and melanoma to argue for the selective deployment of diagnostic and prognostic tools for some well-represented groups, even if this results in the temporary exclusion of underrepresented patients from algorithmic approaches. We argue that this approach is justifiable when the inclusion of underrepresented patients would cause them to be harmed. While the context of historic injustice poses a considerable challenge for the ethical acceptability of selective algorithmic deployment strategies, we argue that, at least for the case studies addressed in this article, the issue of historic injustice is better addressed through nonalgorithmic measures, including being transparent with patients about the nature of the current epistemic deficits, providing additional services to algorithmically excluded populations, and through urgent commitments to gather additional algorithmic training data from excluded populations, paving the way for universal algorithmic deployment that is accurate for all patient groups. These commitments should be supported by regulation and, where necessary, government funding to ensure that any delays for excluded groups are kept to the minimum. We offer an ethical algorithm for algorithms—showing when to ethically delay, expedite, or selectively deploy algorithmic systems in healthcare settings.
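The "ethical algorithm for algorithms" is described in the abstract only at the level of principle. Purely as an illustration of how such a decision procedure might be operationalised, the sketch below maps hypothetical per-subgroup validation metrics to the three options the authors name: delay, selective deployment, or universal deployment. Every threshold, group name, and function here is an assumption made for illustration; this is not the authors' framework, and it omits the ethical weighing (historic injustice, transparency, compensatory services) that the paper treats as essential.

```python
# A minimal, hypothetical sketch: per-subgroup validation metrics feed a
# three-way deployment decision (deploy universally, deploy selectively,
# or delay). Thresholds, group names, and the decision rule are illustrative
# assumptions only.

from dataclasses import dataclass


@dataclass
class GroupMetrics:
    group: str          # patient subgroup label
    sensitivity: float  # true-positive rate on held-out cases from this group
    specificity: float  # true-negative rate on held-out cases from this group
    n_validation: int   # number of held-out cases available for this group


def meets_clinical_bar(m: GroupMetrics,
                       min_sensitivity: float = 0.90,
                       min_specificity: float = 0.85,
                       min_cases: int = 500) -> bool:
    """Deployment benefits a group only if accuracy is demonstrably adequate
    and the evidence base for that group is large enough to trust."""
    return (m.n_validation >= min_cases
            and m.sensitivity >= min_sensitivity
            and m.specificity >= min_specificity)


def deployment_plan(metrics: list[GroupMetrics]) -> tuple[str, dict[str, str]]:
    """Return (overall strategy, per-group recommendation).

    - every group clears the bar -> deploy universally
    - no group clears the bar    -> delay deployment
    - otherwise                  -> deploy selectively (excluded groups stay
                                    on standard non-algorithmic care while
                                    more training data is gathered)
    """
    per_group = {m.group: ("deploy" if meets_clinical_bar(m)
                           else "exclude; standard care + data collection")
                 for m in metrics}
    cleared = sum(1 for v in per_group.values() if v == "deploy")
    if cleared == len(per_group):
        strategy = "deploy universally"
    elif cleared == 0:
        strategy = "delay deployment"
    else:
        strategy = "deploy selectively"
    return strategy, per_group


if __name__ == "__main__":
    # Hypothetical numbers echoing the abstract's scenario: strong results for
    # well-represented groups, an epistemic deficit for an underrepresented one.
    validation = [
        GroupMetrics("well-represented group A", 0.94, 0.91, 12_000),
        GroupMetrics("well-represented group B", 0.92, 0.89, 8_000),
        GroupMetrics("underrepresented group C", 0.78, 0.80, 150),
    ]
    strategy, decisions = deployment_plan(validation)
    print(strategy)
    for group, decision in decisions.items():
        print(f"  {group}: {decision}")
```

In a real setting, the exclusion branch would need to trigger the nonalgorithmic measures the abstract lists: transparency with excluded patients about the current epistemic deficit, additional services for excluded populations, and an urgent, funded commitment to collect further training data so that the excluded set shrinks toward universal deployment.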
