{"title":"Defending explicability as a principle for the ethics of artificial intelligence in medicine.","authors":"Jonathan Adams","doi":"10.1007/s11019-023-10175-7","DOIUrl":null,"url":null,"abstract":"<p><p>The difficulty of explaining the outputs of artificial intelligence (AI) models and what has led to them is a notorious ethical problem wherever these technologies are applied, including in the medical domain, and one that has no obvious solution. This paper examines the proposal, made by Luciano Floridi and colleagues, to include a new 'principle of explicability' alongside the traditional four principles of bioethics that make up the theory of 'principlism'. It specifically responds to a recent set of criticisms that challenge the supposed need for such a principle to perform an enabling role in relation to the traditional four principles and therefore suggest that these four are sufficient without the addition of explicability. The paper challenges the critics' premise that explicability cannot be an ethical principle like the classic four because it is explicitly subordinate to them. It argues instead that principlism in its original formulation locates the justification for ethical principles in a midlevel position such that they mediate between the most general moral norms and the contextual requirements of medicine. This conception of an ethical principle then provides a mold for an approach to explicability on which it functions as an enabling principle that unifies technical/epistemic demands on AI and the requirements of high-level ethical theories. 
The paper finishes by anticipating an objection that decision-making by clinicians and AI fall equally, but implausibly, under the principle of explicability's scope, which it rejects on the grounds that human decisions, unlike AI's, can be explained by their social environments.</p>","PeriodicalId":47449,"journal":{"name":"Medicine Health Care and Philosophy","volume":" ","pages":"615-623"},"PeriodicalIF":2.3000,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10725847/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medicine Health Care and Philosophy","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1007/s11019-023-10175-7","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/8/29 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"ETHICS","Score":null,"Total":0}
Citations: 0
Abstract
The difficulty of explaining the outputs of artificial intelligence (AI) models and what has led to them is a notorious ethical problem wherever these technologies are applied, including in the medical domain, and one that has no obvious solution. This paper examines the proposal, made by Luciano Floridi and colleagues, to include a new 'principle of explicability' alongside the traditional four principles of bioethics that make up the theory of 'principlism'. It specifically responds to a recent set of criticisms that challenge the supposed need for such a principle to perform an enabling role in relation to the traditional four principles, and that therefore suggest these four are sufficient without the addition of explicability. The paper challenges the critics' premise that explicability cannot be an ethical principle like the classic four because it is explicitly subordinate to them. It argues instead that principlism in its original formulation locates the justification for ethical principles in a midlevel position such that they mediate between the most general moral norms and the contextual requirements of medicine. This conception of an ethical principle then provides a mold for an approach to explicability on which it functions as an enabling principle that unifies technical/epistemic demands on AI and the requirements of high-level ethical theories. The paper finishes by anticipating an objection that decision-making by clinicians and by AI falls equally, but implausibly, under the principle of explicability's scope, which it rejects on the grounds that human decisions, unlike AI's, can be explained by their social environments.
About the Journal
Medicine, Health Care and Philosophy: A European Journal is the official journal of the European Society for Philosophy of Medicine and Health Care. It provides a forum for the international exchange of research data, theories, reports, and opinions in bioethics and the philosophy of medicine. The journal promotes interdisciplinary studies and stimulates philosophical analysis centered on a common object of reflection: health care, the human effort to deal with disease, illness, and death, as well as health, well-being, and life. Particular attention is paid to developing contributions from all European countries, and to making accessible scientific work and reports on the practice of health care ethics from all nations, cultures, and language areas in Europe.