Morality on the road: Should machine drivers be more utilitarian than human drivers?

Cognition · Impact Factor 2.8 · JCR Q1 (Psychology, Experimental) · CAS Zone 1 (Psychology) · Published: 2024-11-18 · DOI: 10.1016/j.cognition.2024.106011
Peng Liu, Yueying Chu, Siming Zhai, Tingru Zhang, Edmond Awad
{"title":"Morality on the road: Should machine drivers be more utilitarian than human drivers?","authors":"Peng Liu, Yueying Chu, Siming Zhai, Tingru Zhang, Edmond Awad","doi":"10.1016/j.cognition.2024.106011","DOIUrl":null,"url":null,"abstract":"<p><p>Machines powered by artificial intelligence have the potential to replace or collaborate with human decision-makers in moral settings. In these roles, machines would face moral tradeoffs, such as automated vehicles (AVs) distributing inevitable risks among road users. Do people believe that machines should make moral decisions differently from humans? If so, why? To address these questions, we conducted six studies (N = 6805) to examine how people, as observers, believe human drivers and AVs should act in similar moral dilemmas and how they judge their moral decisions. In pedestrian-only dilemmas where the two agents had to sacrifice one pedestrian to save more pedestrians, participants held them to similar utilitarian norms (Study 1). In occupant dilemmas where the agents needed to weigh the in-vehicle occupant against more pedestrians, participants were less accepting of AVs sacrificing their passenger compared to human drivers sacrificing themselves (Studies 1-3) or another passenger (Studies 5-6). The difference was not driven by reduced occupant agency in AVs (Study 4) or by non-voluntary occupant sacrifice in AVs (Study 5), but rather by the perceived social relationship between AVs and their users (Study 6). Thus, even when people adopt an impartial stance as observers, they are more likely to believe that AVs should prioritize serving their users in moral dilemmas. We discuss the theoretical and practical implications for AV morality.</p>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":"254 ","pages":"106011"},"PeriodicalIF":2.8000,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognition","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1016/j.cognition.2024.106011","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

Machines powered by artificial intelligence have the potential to replace or collaborate with human decision-makers in moral settings. In these roles, machines would face moral tradeoffs, such as automated vehicles (AVs) distributing inevitable risks among road users. Do people believe that machines should make moral decisions differently from humans? If so, why? To address these questions, we conducted six studies (N = 6805) to examine how people, as observers, believe human drivers and AVs should act in similar moral dilemmas and how they judge their moral decisions. In pedestrian-only dilemmas where the two agents had to sacrifice one pedestrian to save more pedestrians, participants held them to similar utilitarian norms (Study 1). In occupant dilemmas where the agents needed to weigh the in-vehicle occupant against more pedestrians, participants were less accepting of AVs sacrificing their passenger compared to human drivers sacrificing themselves (Studies 1-3) or another passenger (Studies 5-6). The difference was not driven by reduced occupant agency in AVs (Study 4) or by non-voluntary occupant sacrifice in AVs (Study 5), but rather by the perceived social relationship between AVs and their users (Study 6). Thus, even when people adopt an impartial stance as observers, they are more likely to believe that AVs should prioritize serving their users in moral dilemmas. We discuss the theoretical and practical implications for AV morality.

Journal: Cognition (Psychology, Experimental)
CiteScore: 6.40
Self-citation rate: 5.90%
Articles published: 283
About the journal: Cognition is an international journal that publishes theoretical and experimental papers on the study of the mind. It covers a wide variety of subjects concerning all the different aspects of cognition, ranging from biological and experimental studies to formal analysis. Contributions from the fields of psychology, neuroscience, linguistics, computer science, mathematics, ethology and philosophy are welcome in this journal provided that they have some bearing on the functioning of the mind. In addition, the journal serves as a forum for discussion of social and political aspects of cognitive science.
Latest articles in this journal:
Partisan language in a polarized world: In-group language provides reputational benefits to speakers while polarizing audiences.
What's left of the leftward bias in scene viewing? Lateral asymmetries in information processing during early search guidance.
Language enables the acquisition of distinct sensorimotor memories for speech.
Morality on the road: Should machine drivers be more utilitarian than human drivers?
Relative source credibility affects the continued influence effect: Evidence of rationality in the CIE.