Equity, autonomy, and the ethical risks and opportunities of generalist medical AI

Reuben Sass
{"title":"Equity, autonomy, and the ethical risks and opportunities of generalist medical AI","authors":"Reuben Sass","doi":"10.1007/s43681-023-00380-8","DOIUrl":null,"url":null,"abstract":"<div><p>This paper considers the ethical risks and opportunities presented by generalist medical artificial intelligence (GMAI), a kind of dynamic, multimodal AI proposed by Moor et al. (2023) for use in health care. The research objective is to apply widely accepted principles of biomedical ethics to analyze the possible consequences of GMAI, while emphasizing the distinctions between GMAI and current-generation, task-specific medical AI. The principles of autonomy and health equity in particular provide useful guidance for the ethical risks and opportunities of novel AI systems in health care. The ethics of two applications of GMAI are examined: enabling decision aids that inform and educate patients about certain treatments and conditions, and expanding AI-driven diagnosis and treatment recommendation. Emphasis is placed on the potential of GMAI to improve shared decision-making between patients and providers, which supports patient autonomy. Another focus is on health equity, or the reduction of health and access disparities facing underserved populations. Although GMAI presents opportunities to improve patient autonomy, health literacy, and health equity, premature or inadequately regulated adoption of GMAI has the potential to compromise both health equity and patient autonomy. On the other hand, there are significant risks to health equity and autonomy that may arise from not adopting GMAI that has been thoroughly validated and tested. A careful balancing of these risks and benefits will be required to secure the best ethical outcome, if GMAI is ever employed at scale.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"567 - 577"},"PeriodicalIF":0.0000,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-023-00380-8","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper considers the ethical risks and opportunities presented by generalist medical artificial intelligence (GMAI), a kind of dynamic, multimodal AI proposed by Moor et al. (2023) for use in health care. The research objective is to apply widely accepted principles of biomedical ethics to analyze the possible consequences of GMAI, while emphasizing the distinctions between GMAI and current-generation, task-specific medical AI. The principles of autonomy and health equity in particular provide useful guidance for the ethical risks and opportunities of novel AI systems in health care. The ethics of two applications of GMAI are examined: enabling decision aids that inform and educate patients about certain treatments and conditions, and expanding AI-driven diagnosis and treatment recommendation. Emphasis is placed on the potential of GMAI to improve shared decision-making between patients and providers, which supports patient autonomy. Another focus is on health equity, or the reduction of health and access disparities facing underserved populations. Although GMAI presents opportunities to improve patient autonomy, health literacy, and health equity, premature or inadequately regulated adoption of GMAI has the potential to compromise both health equity and patient autonomy. On the other hand, there are significant risks to health equity and autonomy that may arise from not adopting GMAI that has been thoroughly validated and tested. A careful balancing of these risks and benefits will be required to secure the best ethical outcome, if GMAI is ever employed at scale.

