Regulating autonomous and AI-enabled weapon systems: the dangers of hype

Nathan Gabriel Wood
Journal: AI and Ethics, Vol. 4, No. 3, pp. 805–817
Published: 2024-03-20
DOI: 10.1007/s43681-024-00448-z
URL: https://link.springer.com/article/10.1007/s43681-024-00448-z

Abstract

In many debates surrounding autonomous weapon systems (AWS) or AI-enabled platforms in the military, critics offer both over- and under-hyped presentations of the capabilities of such systems, creating a risk of derailing critical debates on how best to regulate them in the military. In particular, in this article, I show that critics utilize over-hype to generate fear about the capabilities of such systems or to create objections that do not hold for more realistically viewed platforms, and they use under-hype to sell AWS and military AI short, creating an image of these as far less capable than is actually the case. The hyped presentations in this debate also gloss over many core realities of how modern militaries function, what sorts of platforms they are seeking to develop and use, and what actual combatants are likely to be willing to deploy in real warfighting scenarios. More critically for the regulatory debates themselves, hype (both over and under) forces genuine but subtle arguments on issues with autonomous and AI-enabled systems to be sidelined as scholars deal with the more politically divisive topics brought to the fore by critics. Finally, over- and under-hype creates grave risks of skewing the regulatory debates far enough from the realities of AWS and military AI development and deployment that central state actors may lose willingness to support any eventual treaties established. Thus, in their fervor to generate objections and force rapid regulation of AWS and military AI, critics risk alienating those key players most necessary for such regulation to be globally meaningful and effective.
