{"title":"The Moral Psychology of Artificial Intelligence.","authors":"Jean-François Bonnefon, Iyad Rahwan, Azim Shariff","doi":"10.1146/annurev-psych-030123-113559","DOIUrl":null,"url":null,"abstract":"<p><p>Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.</p>","PeriodicalId":8010,"journal":{"name":"Annual review of psychology","volume":" ","pages":"653-675"},"PeriodicalIF":23.6000,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annual review of psychology","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1146/annurev-psych-030123-113559","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/9/18 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"PSYCHOLOGY","Score":null,"Total":0}
Citations: 2
Abstract
Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.
About the Journal:
The Annual Review of Psychology, published since 1950, provides comprehensive coverage of significant developments in psychological research. It spans a wide range of topics, including the biological underpinnings of human behavior, sensation and perception, the functioning of the mind, animal behavior and learning, human development, psychopathology, clinical and counseling psychology, social psychology, personality, environmental psychology, and community psychology. The current volume has transitioned from a subscription-based model to open access as part of the Annual Reviews Subscribe to Open initiative; as a result, all articles published in this volume are freely accessible under a Creative Commons Attribution (CC BY) license.