A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot Harm

Science and Engineering Ethics · IF 2.7 · Q1 (Engineering, Multidisciplinary) · Tier 2 (Philosophy) · Pub Date: 2023-07-13 · DOI: 10.1007/s11948-023-00449-x
Marc Champagne, Ryan Tonkens
{"title":"自主机器人伤害的自发预期道德责任性比较辩护。","authors":"Marc Champagne, Ryan Tonkens","doi":"10.1007/s11948-023-00449-x","DOIUrl":null,"url":null,"abstract":"<p><p>As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke's (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate-and defend-what is known as the \"blank check\" proposal. According to this proposal, a person activating a robot could willingly make themselves answerable for whatever events ensue, even if those events stem from the robot's autonomous decision(s). This blank check solution was originally proposed in the context of automated warfare (Champagne & Tonkens, 2015), but we extend it to cover all robots. We argue that, because moral answerability in the blank check is accepted voluntarily and before bad outcomes are known, it proves superior to alternative ways of assigning blame. We end by highlighting how, in addition to being just, this self-initiated and prospective moral answerability for robot harm provides deterrence that the four other stances cannot match.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":null,"pages":null},"PeriodicalIF":2.7000,"publicationDate":"2023-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot harm.\",\"authors\":\"Marc Champagne, Ryan Tonkens\",\"doi\":\"10.1007/s11948-023-00449-x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke's (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate-and defend-what is known as the \\\"blank check\\\" proposal. According to this proposal, a person activating a robot could willingly make themselves answerable for whatever events ensue, even if those events stem from the robot's autonomous decision(s). This blank check solution was originally proposed in the context of automated warfare (Champagne & Tonkens, 2015), but we extend it to cover all robots. We argue that, because moral answerability in the blank check is accepted voluntarily and before bad outcomes are known, it proves superior to alternative ways of assigning blame. 
We end by highlighting how, in addition to being just, this self-initiated and prospective moral answerability for robot harm provides deterrence that the four other stances cannot match.</p>\",\"PeriodicalId\":49564,\"journal\":{\"name\":\"Science and Engineering Ethics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2023-07-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Science and Engineering Ethics\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1007/s11948-023-00449-x\",\"RegionNum\":2,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Science and Engineering Ethics","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1007/s11948-023-00449-x","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke's (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate and defend what is known as the "blank check" proposal. According to this proposal, a person activating a robot could willingly make themselves answerable for whatever events ensue, even if those events stem from the robot's autonomous decision(s). This blank check solution was originally proposed in the context of automated warfare (Champagne & Tonkens, 2015), but we extend it to cover all robots. We argue that, because moral answerability in the blank check is accepted voluntarily and before bad outcomes are known, it proves superior to alternative ways of assigning blame. We end by highlighting how, in addition to being just, this self-initiated and prospective moral answerability for robot harm provides deterrence that the four other stances cannot match.

Source Journal
Science and Engineering Ethics (multidisciplinary journal, Engineering: Multidisciplinary)
CiteScore: 10.70
Self-citation rate: 5.40%
Articles published: 54
Review time: >12 weeks
Journal Description: Science and Engineering Ethics is an international multidisciplinary journal dedicated to exploring ethical issues associated with science and engineering, covering professional education, research and practice as well as the effects of technological innovations and research findings on society. While the focus of this journal is on science and engineering, contributions from a broad range of disciplines, including social sciences and humanities, are welcomed. Areas of interest include, but are not limited to, ethics of new and emerging technologies, research ethics, computer ethics, energy ethics, animals and human subjects ethics, ethics education in science and engineering, ethics in design, biomedical ethics, values in technology and innovation. We welcome contributions that deal with these issues from an international perspective, particularly from countries that are underrepresented in these discussions.
Latest Articles from this Journal
Authorship and Citizen Science: Seven Heuristic Rules.
A Confucian Algorithm for Autonomous Vehicles.
A Rubik's Cube-Inspired Pedagogical Tool for Teaching and Learning Engineering Ethics.
Patient Preferences Concerning Humanoid Features in Healthcare Robots.
Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.