
Lethal Autonomous Weapons: Latest Publications

Empirical Data on Attitudes Toward Autonomous Systems
DOI: 10.1093/oso/9780197546048.003.0010
Jai C. Galliott, Bianca Baggiarini, Sean Rupka
Combat automation, enabled by rapid technological advancements in artificial intelligence and machine learning, is a guiding principle in the conduct of war today. Yet, empirical data on the impact of algorithmic combat on military personnel remains limited. This chapter draws on data from a historically unprecedented survey of Australian Defence Force Academy cadets. Given that this generation of trainees will be the first to deploy autonomous systems (AS) in a systematic way, their views are especially important. This chapter focuses its analysis on five themes: the dynamics of human-machine teams; the perceived risks, benefits, and capabilities of AS; the changing nature of (and respect for) military labor and incentives; preferences for overseeing a robot versus carrying out a mission themselves; and the changing meaning of soldiering. We utilize the survey data to explore the interconnected consequences of neoliberal governing for cadets’ attitudes toward AS, and citizen-soldiering more broadly. Overall, this chapter argues that Australian cadets are open to working with and alongside AS, but under the right conditions. Armed forces, in an attempt to capitalize on these technologically savvy cadets, have shifted from institutional to occupational employers. However, in our concluding remarks, we caution against unchecked technological fetishism, highlighting the need to critically question the risks of AS for moral deskilling, and the application of market-based notions of freedom to the military domain.
{"title":"Empirical Data on Attitudes Toward Autonomous Systems","authors":"Jai C. Galliott, Bianca Baggiarini, Sean Rupka","doi":"10.1093/oso/9780197546048.003.0010","DOIUrl":"https://doi.org/10.1093/oso/9780197546048.003.0010","url":null,"abstract":"Combat automation, enabled by rapid technological advancements in artificial intelligence and machine learning, is a guiding principle in the conduct of war today. Yet, empirical data on the impact of algorithmic combat on military personnel remains limited. This chapter draws on data from a historically unprecedented survey of Australian Defence Force Academy cadets. Given that this generation of trainees will be the first to deploy autonomous systems (AS) in a systematic way, their views are especially important. This chapter focuses its analysis on five themes: the dynamics of human-machine teams; the perceived risks, benefits, and capabilities of AS; the changing nature of (and respect for) military labor and incentives; preferences to oversee a robot, versus carrying out a mission themselves; and the changing meaning of soldiering. We utilize the survey data to explore the interconnected consequences of neoliberal governing for cadets’ attitudes toward AS, and citizen-soldiering more broadly. Overall, this chapter argues that Australian cadets are open to working with and alongside AS, but under the right conditions. Armed forces, in an attempt to capitalize on these technologically savvy cadets, have shifted from institutional to occupational employers. However, in our concluding remarks, we caution against unchecked technological fetishism, highlighting the need to critically question the risks of AS on moral deskilling, and the application of market-based notions of freedom to the military domain.","PeriodicalId":145178,"journal":{"name":"Lethal Autonomous Weapons","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115339110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
The Robot Dogs of War
DOI: 10.1093/oso/9780197546048.003.0003
D. Baker
The prospect of robotic warriors striding the battlefield has, somewhat unsurprisingly, been shaped by perceptions drawn from science fiction. While illustrative, such comparisons are largely unhelpful for those considering potential ethical implications of autonomous weapons systems. In this chapter, I offer two alternative sources for ethical comparison. Drawing from military history and current practice for guidance, this chapter highlights the parallels that make mercenaries—the ‘dogs of war’—and military working dogs—the actual dogs of war—useful lenses through which to consider Lethal Autonomous Weapons Systems—the robot dogs of war. Through these comparisons, I demonstrate that some of the most commonly raised ethical objections to autonomous weapon systems are overstated, misguided, or otherwise dependent on outside circumstance.
{"title":"The Robot Dogs of War","authors":"D. Baker","doi":"10.1093/oso/9780197546048.003.0003","DOIUrl":"https://doi.org/10.1093/oso/9780197546048.003.0003","url":null,"abstract":"The prospect of robotic warriors striding the battlefield has, somewhat unsurprisingly, been shaped by perceptions drawn from science fiction. While illustrative, such comparisons are largely unhelpful for those considering potential ethical implications of autonomous weapons systems. In this chapter, I offer two alternative sources for ethical comparison. Drawing from military history and current practice for guidance, this chapter highlights the parallels that make mercenaries—the ‘dogs of war’—and military working dogs—the actual dogs of war—useful lenses through which to consider Lethal Autonomous Weapons Systems—the robot dogs of war. Through these comparisons, I demonstrate that some of the most commonly raised ethical objections to autonomous weapon systems are overstated, misguided, or otherwise dependent on outside circumstance.","PeriodicalId":145178,"journal":{"name":"Lethal Autonomous Weapons","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126273824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Better Instincts of Humanity: Humanitarian Arguments in Defense of International Arms Control
DOI: 10.1093/oso/9780197546048.003.0008
Natalia Jevglevskaja, Rain Liivoja
Disagreements about the humanitarian risk-benefit balance of weapons technology are not new. The history of arms control negotiations offers many examples of weaponry that was regarded as ‘inhumane’ by some, while hailed by others as a means to reduce injury or suffering in conflict. The debate about autonomous weapons systems reflects this dynamic, yet also stands out in some respects, notably the largely hypothetical nature of the concerns raised about these systems, as well as ostensible disparities in States’ approaches to conceptualizing autonomy. This chapter considers how misconceptions surrounding autonomous weapons technology impede the progress of the deliberations of the Group of Governmental Experts on Lethal Autonomous Weapons Systems. An obvious tendency to focus on the perceived risks posed by these systems, far more than on the potential operational and humanitarian advantages they offer, is likely to jeopardize the prospect of finding a meaningful resolution to the debate.
{"title":"The Better Instincts of Humanity: Humanitarian Arguments in Defense of International Arms Control","authors":"Natalia Jevglevskaja, Rain Liivoja","doi":"10.1093/oso/9780197546048.003.0008","DOIUrl":"https://doi.org/10.1093/oso/9780197546048.003.0008","url":null,"abstract":"Disagreements about the humanitarian risk-benefit balance of weapons technology are not new. The history of arms control negotiations offers many examples of weaponry that was regarded ‘inhumane’ by some, while hailed by others as a means to reduce injury or suffering in conflict. The debate about autonomous weapons systems reflects this dynamic, yet also stands out in some respects, notably largely hypothetical nature of concerns raised in regard to these systems as well as ostensible disparities in States’ approaches to conceptualizing autonomy. This chapter considers how misconceptions surrounding autonomous weapons technology impede the progress of the deliberations of the Group of Governmental Experts on Lethal Autonomous Weapons Systems. An obvious tendency to focus on the perceived risks posed by these systems, much more so than potential operational and humanitarian advantages they offer, is likely to jeopardize the prospect of finding a meaningful resolution to the debate.","PeriodicalId":145178,"journal":{"name":"Lethal Autonomous Weapons","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114527202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
May Machines Take Lives to Save Lives? Human Perceptions of Autonomous Robots (with the Capacity to Kill)
DOI: 10.1093/oso/9780197546048.003.0007
Matthias Scheutz, B. Malle
In the future, artificial agents are likely to make life-and-death decisions about humans. Ordinary people are the likely arbiters of whether these decisions are morally acceptable. We summarize research on how ordinary people evaluate artificial (compared to human) agents that make life-and-death decisions. The results suggest that many people are inclined to morally evaluate artificial agents’ decisions, and when asked how the artificial and human agents should decide, they impose the same norms on them. However, when confronted with how the agents did in fact decide, people judge the artificial agents’ decisions differently from those of humans. This difference is best explained by justifications people grant the human agents (imagining their experience of the decision situation) but do not grant the artificial agent (whose experience they cannot imagine). If people fail to infer the decision processes and justifications of artificial agents, these agents will have to explicitly communicate such justifications to people, so they can understand and accept their decisions.
{"title":"May Machines Take Lives to Save Lives? Human Perceptions of Autonomous Robots (with the Capacity to Kill)","authors":"Matthias Scheutz, B. Malle","doi":"10.1093/oso/9780197546048.003.0007","DOIUrl":"https://doi.org/10.1093/oso/9780197546048.003.0007","url":null,"abstract":"In the future, artificial agents are likely to make life-and-death decisions about humans. Ordinary people are the likely arbiters of whether these decisions are morally acceptable. We summarize research on how ordinary people evaluate artificial (compared to human) agents that make life-and-death decisions. The results suggest that many people are inclined to morally evaluate artificial agents’ decisions, and when asked how the artificial and human agents should decide, they impose the same norms on them. However, when confronted with how the agents did in fact decide, people judge the artificial agents’ decisions differently from those of humans. This difference is best explained by justifications people grant the human agents (imagining their experience of the decision situation) but do not grant the artificial agent (whose experience they cannot imagine). If people fail to infer the decision processes and justifications of artificial agents, these agents will have to explicitly communicate such justifications to people, so they can understand and accept their decisions.","PeriodicalId":145178,"journal":{"name":"Lethal Autonomous Weapons","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121900190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10