{"title":"人工智能和(向患者)说明理由的必要性。","authors":"Anantharaman Muralidharan, Julian Savulescu, G Owen Schaefer","doi":"10.1007/s10676-024-09754-w","DOIUrl":null,"url":null,"abstract":"<p><p>This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient's values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided for the decision makes it difficult for patients to ascertain whether there is adequate fit between the decision and the patient's values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. We introduce a hypothetical model we call Justifiable AI to illustrate this argument. Justifiable AI aims at modelling normative and evaluative considerations in an explicit way so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. If our argument succeeds, we should prefer these justifiable models over alternatives if the former are available and aim to develop said models if not.</p>","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"26 1","pages":"16"},"PeriodicalIF":3.4000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10912120/pdf/","citationCount":"0","resultStr":"{\"title\":\"AI and the need for justification (to the patient).\",\"authors\":\"Anantharaman Muralidharan, Julian Savulescu, G Owen Schaefer\",\"doi\":\"10.1007/s10676-024-09754-w\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient's values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided for the decision makes it difficult for patients to ascertain whether there is adequate fit between the decision and the patient's values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. We introduce a hypothetical model we call Justifiable AI to illustrate this argument. Justifiable AI aims at modelling normative and evaluative considerations in an explicit way so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. 
If our argument succeeds, we should prefer these justifiable models over alternatives if the former are available and aim to develop said models if not.</p>\",\"PeriodicalId\":51495,\"journal\":{\"name\":\"Ethics and Information Technology\",\"volume\":\"26 1\",\"pages\":\"16\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2024-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10912120/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ethics and Information Technology\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://doi.org/10.1007/s10676-024-09754-w\",\"RegionNum\":2,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/3/4 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"ETHICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ethics and Information Technology","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1007/s10676-024-09754-w","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/3/4 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"ETHICS","Score":null,"Total":0}
AI and the need for justification (to the patient).

Anantharaman Muralidharan, Julian Savulescu, G Owen Schaefer
This paper argues that one problem besetting black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision-making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient's values and preferences and the patient is able to see that this is so. Black-box AIs threaten patient-directed justifiability because, without a rationale for the decision, patients cannot easily ascertain whether the decision adequately fits their values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. To illustrate this argument, we introduce a hypothetical model we call Justifiable AI. Justifiable AI aims to model normative and evaluative considerations explicitly, so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. If our argument succeeds, we should prefer these justifiable models over alternatives where they are available, and aim to develop them where they are not.
Journal description:
Ethics and Information Technology is a peer-reviewed journal dedicated to advancing the dialogue between moral philosophy and the field of information and communication technology (ICT). The journal aims to foster reflection and analysis that makes a constructive contribution to answering the ethical, social, and political questions associated with the adoption, use, and development of ICT. Also within the journal's scope are conceptual analysis and discussion of ethical ICT issues arising in the context of technology assessment, cultural studies, public policy analysis and public administration, cognitive science, social and anthropological studies of technology, mass communication, and legal studies.