Can AI Weapons Make Ethical Decisions?

Criminal Justice Ethics (Q2, Social Sciences), Vol. 40, No. 1, pp. 86-107. Pub Date: 2021-05-04. DOI: 10.1080/0731129X.2021.1951459
Ross W. Bellaby
{"title":"人工智能武器能做出合乎道德的决定吗?","authors":"Ross W. Bellaby","doi":"10.1080/0731129X.2021.1951459","DOIUrl":null,"url":null,"abstract":"The ability of machines to make truly independent and autonomous decisions is a goal of many, not least of military leaders who wish to take the human out of the loop as much as possible, claiming that autonomous military weaponry—most notably drones—can make decisions more quickly and with greater accuracy. However, there is no clear understanding of how autonomous weapons should be conceptualized and of the implications that their “autonomous” nature has on them as ethical agents. It will be argued that autonomous weapons are not full ethical agents due to the restrictions of their coding. However, the highly complex machine-learning nature gives the impression that they are making their own decisions and creates the illusion that their human operators are protected from the responsibility of the harm they cause. Therefore, it is important to distinguish between autonomous AI weapons and an AI with autonomy, a distinction that creates two different ethical problems for their use. For autonomous weapons, their limited agency combined with machine-learning means their human counterparts are still responsible for their actions while having no ability to control or intercede in the actual decisions made. If, on the other hand, an AI could reach the point of autonomy, the level of critical reflection would make its decisions unpredictable and dangerous in a weapon.","PeriodicalId":35931,"journal":{"name":"Criminal Justice Ethics","volume":"40 1","pages":"86 - 107"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0731129X.2021.1951459","citationCount":"4","resultStr":"{\"title\":\"Can AI Weapons Make Ethical Decisions?\",\"authors\":\"Ross W. Bellaby\",\"doi\":\"10.1080/0731129X.2021.1951459\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The ability of machines to make truly independent and autonomous decisions is a goal of many, not least of military leaders who wish to take the human out of the loop as much as possible, claiming that autonomous military weaponry—most notably drones—can make decisions more quickly and with greater accuracy. However, there is no clear understanding of how autonomous weapons should be conceptualized and of the implications that their “autonomous” nature has on them as ethical agents. It will be argued that autonomous weapons are not full ethical agents due to the restrictions of their coding. However, the highly complex machine-learning nature gives the impression that they are making their own decisions and creates the illusion that their human operators are protected from the responsibility of the harm they cause. Therefore, it is important to distinguish between autonomous AI weapons and an AI with autonomy, a distinction that creates two different ethical problems for their use. For autonomous weapons, their limited agency combined with machine-learning means their human counterparts are still responsible for their actions while having no ability to control or intercede in the actual decisions made. 
If, on the other hand, an AI could reach the point of autonomy, the level of critical reflection would make its decisions unpredictable and dangerous in a weapon.\",\"PeriodicalId\":35931,\"journal\":{\"name\":\"Criminal Justice Ethics\",\"volume\":\"40 1\",\"pages\":\"86 - 107\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-05-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1080/0731129X.2021.1951459\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Criminal Justice Ethics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/0731129X.2021.1951459\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Criminal Justice Ethics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/0731129X.2021.1951459","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Social Sciences","Score":null,"Total":0}
Citations: 4

Abstract

The ability of machines to make truly independent and autonomous decisions is a goal of many, not least of military leaders who wish to take the human out of the loop as much as possible, claiming that autonomous military weaponry, most notably drones, can make decisions more quickly and with greater accuracy. However, there is no clear understanding of how autonomous weapons should be conceptualized, or of the implications that their "autonomous" nature has for them as ethical agents. It will be argued that autonomous weapons are not full ethical agents because of the restrictions of their coding. However, their highly complex machine-learning nature gives the impression that they are making their own decisions and creates the illusion that their human operators are shielded from responsibility for the harm they cause. It is therefore important to distinguish between autonomous AI weapons and an AI with autonomy, a distinction that creates two different ethical problems for their use. For autonomous weapons, their limited agency combined with machine learning means their human counterparts remain responsible for their actions while having no ability to control or intervene in the actual decisions made. If, on the other hand, an AI could reach the point of autonomy, its level of critical reflection would make its decisions unpredictable and dangerous in a weapon.

Source journal: Criminal Justice Ethics (Social Sciences: Law)
CiteScore: 1.10
Self-citation rate: 0.00%
Annual articles: 11

Latest articles in this journal:
Exposing, Reversing, and Inheriting Crimes as Traumas from the Neurosciences to Epigenetics: Why Criminal Law Cannot Yet Afford A(nother) Biology-induced Overhaul
Institutional Corruption, Institutional Corrosion and Collective Responsibility
Sentencing, Artificial Intelligence, and Condemnation: A Reply to Taylor
Double Jeopardy, Autrefois Acquit and the Legal Ethics of the Rule Against Unreasonably Splitting a Case
Ethical Resource Allocation in Policing: Why Policing Requires a Different Approach from Healthcare