{"title":"A Causal Analysis of Harm","authors":"Sander Beckers, Hana Chockler, Joseph Y. Halpern","doi":"10.1007/s11023-024-09689-7","DOIUrl":null,"url":null,"abstract":"<p>As autonomous systems rapidly become ubiquitous, there is a growing need for a legal and regulatory framework that addresses when and how such a system harms someone. There have been several attempts within the philosophy literature to define harm, but none of them has proven capable of dealing with the many examples that have been presented, leading some to suggest that the notion of harm should be abandoned and “replaced by more well-behaved notions”. As harm is generally something that is caused, most of these definitions have involved causality at some level. Yet surprisingly, none of them makes use of causal models and the definitions of actual causality that they can express. In this paper, which is an expanded version of the conference paper Beckers et al. (Adv Neural Inform Process Syst 35:2365–2376, 2022), we formally define a qualitative notion of harm that uses causal models and is based on a well-known definition of actual causality. The key features of our definition are that it is based on <i>contrastive</i> causation and uses a default utility to which the utility of actual outcomes is compared. We show that our definition is able to handle the examples from the literature, and illustrate its importance for reasoning about situations involving autonomous systems.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"39 1","pages":""},"PeriodicalIF":4.2000,"publicationDate":"2024-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Minds and Machines","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11023-024-09689-7","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
As autonomous systems rapidly become ubiquitous, there is a growing need for a legal and regulatory framework that addresses when and how such a system harms someone. There have been several attempts within the philosophy literature to define harm, but none of them has proven capable of dealing with the many examples that have been presented, leading some to suggest that the notion of harm should be abandoned and “replaced by more well-behaved notions”. As harm is generally something that is caused, most of these definitions have involved causality at some level. Yet surprisingly, none of them makes use of causal models and the definitions of actual causality that they can express. In this paper, which is an expanded version of the conference paper Beckers et al. (Adv Neural Inform Process Syst 35:2365–2376, 2022), we formally define a qualitative notion of harm that uses causal models and is based on a well-known definition of actual causality. The key features of our definition are that it is based on contrastive causation and uses a default utility to which the utility of actual outcomes is compared. We show that our definition is able to handle the examples from the literature, and illustrate its importance for reasoning about situations involving autonomous systems.
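The two key features named in the abstract, contrastive causation and comparison against a default utility, can be loosely illustrated with a toy example. The Python sketch below is not the paper's formal definition (which is stated in terms of causal models and a well-known definition of actual causality); the single-equation model, the braking scenario, the utility numbers, and all function names are invented here purely for illustration.

```python
# Illustrative sketch only: a toy single-equation causal model in which "harm" is
# approximated as (i) the actual action, contrasted with a default action, makes a
# difference to the outcome, and (ii) the utility of the actual outcome falls below
# a default utility. This is an invented approximation, not the paper's definition.

from dataclasses import dataclass
from typing import Callable


@dataclass
class ToySCM:
    """A one-equation causal model: OUTCOME is determined by ACTION."""
    outcome_fn: Callable[[int], int]  # structural equation OUTCOME := f(ACTION)

    def outcome(self, action: int) -> int:
        return self.outcome_fn(action)


def harms(scm: ToySCM,
          actual_action: int,
          default_action: int,
          utility: Callable[[int], float],
          default_utility: float) -> bool:
    """Qualitative harm check (toy version): the action makes a contrastive
    difference to the outcome, and the resulting utility is below the default."""
    actual_outcome = scm.outcome(actual_action)
    contrast_outcome = scm.outcome(default_action)
    makes_a_difference = actual_outcome != contrast_outcome  # contrastive causation (toy)
    worse_than_default = utility(actual_outcome) < default_utility
    return makes_a_difference and worse_than_default


# Hypothetical autonomous-vehicle scenario (all numbers invented):
# ACTION = 0 (brake, the default) leaves the pedestrian unhurt (OUTCOME = 0);
# ACTION = 1 (do not brake) injures the pedestrian (OUTCOME = 1).
scm = ToySCM(outcome_fn=lambda action: action)
utility = lambda outcome: 1.0 if outcome == 0 else 0.0

print(harms(scm, actual_action=1, default_action=0,
            utility=utility, default_utility=1.0))  # True: not braking harms
print(harms(scm, actual_action=0, default_action=0,
            utility=utility, default_utility=1.0))  # False: braking does not harm
```

In the first call, not braking makes a difference to the outcome and leaves the pedestrian worse off than the default utility, so it counts as harm under this toy criterion; braking does not. The paper's actual definition replaces this crude difference-making test with actual causality in a structural causal model.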
Journal Introduction:
Minds and Machines, affiliated with the Society for Machines and Mentality, serves as a platform for fostering critical dialogue between the AI and philosophical communities. With a focus on problems of shared interest, the journal actively encourages discussions on the philosophical aspects of computer science.
Offering a global forum, Minds and Machines provides a space to debate and explore important and contentious issues within its editorial focus. The journal publishes special issues dedicated to specific topics, invites critical responses to previously published work, and features review essays addressing current problems in the field.
By facilitating a diverse range of perspectives, Minds and Machines encourages a reevaluation of the status quo and the development of new insights. Through this collaborative approach, the journal aims to bridge the gap between AI and philosophy, fostering a tradition of critique and ensuring these fields remain connected and relevant.