{"title":"Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of “Responsibility” in Artificial Intelligence within the Healthcare Context","authors":"Sarah Bouhouita-Guermech, Hazar Haidar","doi":"10.1007/s41649-024-00292-7","DOIUrl":null,"url":null,"abstract":"<div><p>The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges prompt various studies proposing frameworks and guidelines to tackle these issues, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, defines, and discusses the concept of responsibility. We conducted a scoping review of literature related to AI responsibility in healthcare, searching databases and reference lists between January 2017 and January 2022 for terms related to “responsibility” and “AI in healthcare”, and their derivatives. Following screening, 136 articles were included. Data were grouped into four thematic categories: (1) the variety of terminology used to describe and address responsibility; (2) principles and concepts associated with responsibility; (3) stakeholders’ responsibilities in AI clinical development, use, and deployment; and (4) recommendations for addressing responsibility concerns. The results show the lack of a clear definition of AI responsibility in healthcare and highlight the importance of ensuring responsible development and implementation of AI in healthcare. Further research is necessary to clarify this notion to contribute to developing frameworks regarding the type of responsibility (ethical/moral/professional, legal, and causal) of various stakeholders involved in the AI lifecycle.</p></div>","PeriodicalId":44520,"journal":{"name":"Asian Bioethics Review","volume":null,"pages":null},"PeriodicalIF":1.3000,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Asian Bioethics Review","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s41649-024-00292-7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ETHICS","Score":null,"Total":0}
Citations: 0
Abstract
The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges have prompted numerous studies proposing frameworks and guidelines to address them, each emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, defines, and discusses the concept of responsibility. We conducted a scoping review of literature related to AI responsibility in healthcare, searching databases and reference lists published between January 2017 and January 2022 for terms related to "responsibility" and "AI in healthcare", and their derivatives. Following screening, 136 articles were included. Data were grouped into four thematic categories: (1) the variety of terminology used to describe and address responsibility; (2) principles and concepts associated with responsibility; (3) stakeholders' responsibilities in AI clinical development, use, and deployment; and (4) recommendations for addressing responsibility concerns. The results show the lack of a clear definition of AI responsibility in healthcare and highlight the importance of ensuring responsible development and implementation of AI in healthcare. Further research is necessary to clarify this notion and to contribute to developing frameworks regarding the types of responsibility (ethical/moral/professional, legal, and causal) borne by the various stakeholders involved in the AI lifecycle.
Journal Description:
Asian Bioethics Review (ABR) is an international academic journal, based in Asia, providing a forum to express and exchange original ideas on all aspects of bioethics, especially those relevant to the region. Published quarterly, the journal seeks to promote collaborative research among scholars in Asia or with an interest in Asia, as well as multi-cultural and multi-disciplinary bioethical studies more generally. It will appeal to all those working on bioethical issues in biomedicine, healthcare, caregiving and patient support, genetics, law and governance, health systems and policy, science studies and research. ABR provides analyses, perspectives and insights into new approaches in bioethics, recent changes in biomedical law and policy, developments in capacity building and professional training, and voices or essays from a student's perspective. The journal includes articles, research studies, target articles, case evaluations and commentaries. It also publishes book reviews and correspondence to the editor. ABR welcomes original papers from all countries, particularly those that relate to Asia. ABR is the flagship publication of the Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore. The Centre for Biomedical Ethics is a collaborating centre on bioethics of the World Health Organization.