AI and the need for justification (to the patient).

IF 3.4 · JCR Q1 (Ethics) · CAS Zone 2 (Philosophy) · Ethics and Information Technology · Pub Date: 2024-01-01 · Epub Date: 2024-03-04 · DOI: 10.1007/s10676-024-09754-w
Anantharaman Muralidharan, Julian Savulescu, G Owen Schaefer
Ethics and Information Technology, vol. 26, no. 1, article 16. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10912120/pdf/
Citations: 0

AI and the need for justification (to the patient).

This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient's values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided for the decision makes it difficult for patients to ascertain whether there is adequate fit between the decision and the patient's values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. We introduce a hypothetical model we call Justifiable AI to illustrate this argument. Justifiable AI aims at modelling normative and evaluative considerations in an explicit way so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. If our argument succeeds, we should prefer these justifiable models over alternatives if the former are available and aim to develop said models if not.

Source journal: Ethics and Information Technology — CiteScore: 8.20; self-citation rate: 5.60%; articles published per year: 46
About the journal: Ethics and Information Technology is a peer-reviewed journal dedicated to advancing the dialogue between moral philosophy and the field of information and communication technology (ICT). The journal aims to foster and promote reflection and analysis intended to make a constructive contribution to answering the ethical, social and political questions associated with the adoption, use, and development of ICT. Also within the scope of the journal are conceptual analysis and discussion of ethical ICT issues that arise in the context of technology assessment, cultural studies, public policy analysis and public administration, cognitive science, social and anthropological studies in technology, mass communication, and legal studies.