Banning Autonomous Weapons: A Legal and Ethical Mandate

IF 1.3 · Q3 (Ethics) · Philosophy, Zone 3 · Ethics & International Affairs · Pub Date: 2023-12-01 · DOI: 10.1017/S0892679423000357
Mary Ellen O'Connell
Citations: 0

Abstract

ChatGPT launched in November 2022, triggering a global debate on the use of artificial intelligence (AI). A debate on AI-enabled lethal autonomous weapon systems (LAWS) has been underway far longer. Two sides have emerged: one in favor and one opposed to an international law ban on LAWS. This essay explains the position of advocates of a ban without attempting to persuade opponents. Supporters of a ban believe LAWS are already unlawful and immoral to use without the need of a new treaty or protocol. They nevertheless seek an express prohibition to educate and publicize the threats these weapons pose. Foremost among their concerns is the “black box” problem. Programmers cannot know what a computer operating a weapons system empowered with AI will “learn” from the algorithm they use. They cannot know at the time of deployment if the system will comply with the prohibition on the use of force or the human right to life that applies in both war and peace. Even if they could, mechanized killing affronts human dignity. Ban supporters have long known that “AI models are not safe and no one knows how to reliably make them safe” or morally acceptable in taking human life.
Source journal: Ethics & International Affairs · CiteScore 1.90 · Self-citation rate 0.00% · Articles published: 29