The moral decision machine: a challenge for artificial moral agency based on moral deference

Zacharus Gudmunsen
AI and Ethics, 5(2), pp. 1033–1045
DOI: 10.1007/s43681-024-00444-3
Published: 2024-03-04
Full text: https://link.springer.com/article/10.1007/s43681-024-00444-3
PDF: https://link.springer.com/content/pdf/10.1007/s43681-024-00444-3.pdf
Citations: 0

Abstract

Humans are responsible moral agents in part because they can competently respond to moral reasons. Several philosophers have argued that artificial agents cannot do this and therefore cannot be responsible moral agents. I present a counterexample to these arguments: the ‘Moral Decision Machine’. I argue that the ‘Moral Decision Machine’ responds to moral reasons just as competently as humans do. However, I suggest that, while a hopeful development, this does not warrant strong optimism about ‘artificial moral agency’. The ‘Moral Decision Machine’ (and similar agents) can only respond to moral reasons by deferring to others, and there are good reasons to think this is incompatible with responsible moral agency. While the challenge to artificial moral agency based on moral reasons-responsiveness can be satisfactorily addressed, the challenge based on moral deference remains an open question. The right way to understand the challenge, I argue, is as a route to the claim that artificial agents are unlikely to be responsible moral agents because they cannot be authentic.
