"Hands up, don't shoot!"

P. Asaro
{"title":"“举起手来,别开枪!”","authors":"P. Asaro","doi":"10.5898/JHRI.5.3.Asaro","DOIUrl":null,"url":null,"abstract":"This paper considers the ethical challenges facing the development of robotic systems that deploy violent and lethal force against humans. While the use of violent and lethal force is not usually acceptable for humans or robots, police officers are authorized by the state to use violent and lethal force in certain circumstances in order to keep the peace and protect individuals and the community from an immediate threat. With the increased interest in developing and deploying robots for law enforcement tasks, including robots armed with weapons, the question arises as to how to design human-robot interactions (HRIs) in which violent and lethal force might be among the actions taken by the robot, or whether to preclude such actions altogether. This is what I call the \"deadly design problem\" for HRI. While it might be possible to design a system to recognize various gestures, such as \"Hands up, don't shoot!,\" there are many more challenging and subtle aspects to the problem of implementing existing legal guidelines for the use of force in law enforcement robots. After examining the key legal and technical challenges of designing interactions involving violence, this paper concludes with some reflections on the ethics of HRI design raised by automating the use of force in policing. In light of the serious challenges in automating violence, it calls upon HRI researchers to adopt a moratorium on designing any robotic systems that deploy violent and lethal force against humans, and to consider ethical codes and laws to prohibit such systems in the future.","PeriodicalId":92076,"journal":{"name":"Journal of human-robot interaction","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2016-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.5898/JHRI.5.3.Asaro","citationCount":"23","resultStr":"{\"title\":\"\\\"Hands up, don't shoot!\\\"\",\"authors\":\"P. Asaro\",\"doi\":\"10.5898/JHRI.5.3.Asaro\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper considers the ethical challenges facing the development of robotic systems that deploy violent and lethal force against humans. While the use of violent and lethal force is not usually acceptable for humans or robots, police officers are authorized by the state to use violent and lethal force in certain circumstances in order to keep the peace and protect individuals and the community from an immediate threat. With the increased interest in developing and deploying robots for law enforcement tasks, including robots armed with weapons, the question arises as to how to design human-robot interactions (HRIs) in which violent and lethal force might be among the actions taken by the robot, or whether to preclude such actions altogether. This is what I call the \\\"deadly design problem\\\" for HRI. While it might be possible to design a system to recognize various gestures, such as \\\"Hands up, don't shoot!,\\\" there are many more challenging and subtle aspects to the problem of implementing existing legal guidelines for the use of force in law enforcement robots. After examining the key legal and technical challenges of designing interactions involving violence, this paper concludes with some reflections on the ethics of HRI design raised by automating the use of force in policing. 
In light of the serious challenges in automating violence, it calls upon HRI researchers to adopt a moratorium on designing any robotic systems that deploy violent and lethal force against humans, and to consider ethical codes and laws to prohibit such systems in the future.\",\"PeriodicalId\":92076,\"journal\":{\"name\":\"Journal of human-robot interaction\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-12-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.5898/JHRI.5.3.Asaro\",\"citationCount\":\"23\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of human-robot interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5898/JHRI.5.3.Asaro\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of human-robot interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5898/JHRI.5.3.Asaro","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 23

Abstract

This paper considers the ethical challenges facing the development of robotic systems that deploy violent and lethal force against humans. While the use of violent and lethal force is not usually acceptable for humans or robots, police officers are authorized by the state to use violent and lethal force in certain circumstances in order to keep the peace and protect individuals and the community from an immediate threat. With the increased interest in developing and deploying robots for law enforcement tasks, including robots armed with weapons, the question arises as to how to design human-robot interactions (HRIs) in which violent and lethal force might be among the actions taken by the robot, or whether to preclude such actions altogether. This is what I call the "deadly design problem" for HRI. While it might be possible to design a system to recognize various gestures, such as "Hands up, don't shoot!," there are many more challenging and subtle aspects to the problem of implementing existing legal guidelines for the use of force in law enforcement robots. After examining the key legal and technical challenges of designing interactions involving violence, this paper concludes with some reflections on the ethics of HRI design raised by automating the use of force in policing. In light of the serious challenges in automating violence, it calls upon HRI researchers to adopt a moratorium on designing any robotic systems that deploy violent and lethal force against humans, and to consider ethical codes and laws to prohibit such systems in the future.
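To make the gesture-recognition point concrete, here is a minimal illustrative sketch, not from the paper: assuming 2D body keypoints from an off-the-shelf pose estimator, a naive "hands up" classifier can simply check whether both wrists sit above the shoulders. The keypoint names, the margin parameter, and the heuristic itself are hypothetical choices for illustration.

```python
# Illustrative sketch only: a naive "hands up" detector over 2D pose keypoints.
# Keypoint names are hypothetical; assumes image coordinates where y increases
# downward, as produced by typical pose estimators.

from typing import Dict, Tuple

Point = Tuple[float, float]  # (x, y) in pixels


def hands_up(keypoints: Dict[str, Point], margin: float = 0.0) -> bool:
    """Return True if both wrists are above (smaller y than) the shoulders.

    `margin` (in pixels) guards against jitter near the shoulder line.
    """
    try:
        lw, rw = keypoints["left_wrist"], keypoints["right_wrist"]
        ls, rs = keypoints["left_shoulder"], keypoints["right_shoulder"]
    except KeyError:
        # Occluded or undetected joints: refuse to classify rather than guess.
        return False

    return lw[1] < ls[1] - margin and rw[1] < rs[1] - margin


# Example: wrists raised well above the shoulders -> True
pose = {
    "left_wrist": (100.0, 80.0), "right_wrist": (220.0, 85.0),
    "left_shoulder": (120.0, 200.0), "right_shoulder": (200.0, 200.0),
}
print(hands_up(pose, margin=10.0))  # True
```

Even this toy heuristic illustrates the paper's deeper point: occlusion, ambiguous poses, and adversarial behavior mean a binary gesture classifier is a poor proxy for the contextual legal judgment that use-of-force standards demand.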