On the Ethics of Employing Artificial Intelligent Automation in Military Operational Contexts

Wolfgang Koch;Dierk Spreen;Kairi Talves;Wolfgang Wagner;Eleri Lillemäe;Matthias Klaus;Auli Viidalepp;Camilla Guldahl Cooper;Janar Pekarev
{"title":"On the Ethics of Employing Artificial Intelligent Automation in Military Operational Contexts","authors":"Wolfgang Koch;Dierk Spreen;Kairi Talves;Wolfgang Wagner;Eleri Lillemäe;Matthias Klaus;Auli Viidalepp;Camilla Guldahl Cooper;Janar Pekarev","doi":"10.1109/TTS.2024.3405309","DOIUrl":null,"url":null,"abstract":"In this paper, we explore the ethical dimension of artificial intelligent automation (often called AI) in military systems engineering, and present conclusions. Morality, ethics, and ethos, as well as technical excellence, need to be strengthened in both the developers and users of artificial intelligent automation. Only then can critical innovations like cognitive and volitive assistance systems or automated weapon systems be wielded efficiently and beneficially within the given legal constraints. Meaningful human control takes center stage here, which we understand in a broad sense as involving both technical controllability and accountability for outcomes. Explainable AI is essential for this task and requires rigorous testing to ensure deliberate decision making by the user. The military and industrial communities must work together to ensure adequate training for responsible use of AI-automation. Finally, these developments need to be accompanied by a politically supported open discourse, involving as many stakeholders from diverse backgrounds as possible. This serves as an extensive approach to both manage the risks of these new technologies and prevent exaggerated risk avoidance impeding necessary development.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 2","pages":"231-241"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on technology and society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10538398/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In this paper, we explore the ethical dimension of artificial intelligent automation (often called AI) in military systems engineering and present conclusions. Morality, ethics, and ethos, as well as technical excellence, need to be strengthened in both the developers and users of artificial intelligent automation. Only then can critical innovations such as cognitive and volitive assistance systems or automated weapon systems be wielded efficiently and beneficially within the given legal constraints. Meaningful human control takes center stage here, which we understand in a broad sense as involving both technical controllability and accountability for outcomes. Explainable AI is essential for this task and requires rigorous testing to ensure deliberate decision making by the user. The military and industrial communities must work together to ensure adequate training for the responsible use of AI automation. Finally, these developments need to be accompanied by a politically supported open discourse involving as many stakeholders from diverse backgrounds as possible. This serves as a comprehensive approach both to managing the risks of these new technologies and to preventing exaggerated risk avoidance from impeding necessary development.