{"title":"实现平衡方法:连接军事、政策和技术界","authors":"Arun Seraphin, Wilson Miles","doi":"10.1017/S0892679423000321","DOIUrl":null,"url":null,"abstract":"Abstract The development of new technologies that enable autonomous weapon systems poses a challenge to policymakers and technologists trying to balance military requirements with international obligations and ethical norms. Some have called for new international agreements to restrict or ban lethal autonomous weapon systems. Given the tactical and strategic value of the technologies and the proliferation of threats, the military continues to explore the development of new autonomous technologies to execute national security missions. The rapid global diffusion and dual-use nature of autonomous systems necessitate a proactive approach and a shared understanding of the technical realities, threats, military relevance, and strategic implications of these technologies from these communities. Ultimately, developing AI-enabled defense systems that adhere to global norms and relevant treaty obligations, leverage emerging technologies, and provide operational advantages is possible. The development of a workable and realistic regulatory framework governing the use of lethal autonomous weapons and the artificial intelligence that underpins autonomy will be best supported through a coordinated effort of the regulatory community, technologists, and military to create requirements that reflect the global proliferation and rapidly evolving threat of autonomous weapon systems. This essay seeks to demonstrate that: (1) the lack of coherent dialogue between the technical and policy communities can create security, ethical, and legal dilemmas; and (2) bridging the military, technical, and policy communities can lead to technology with constraints that balance the needs of military, technical, and policy communities. 
It uses case studies to show why mechanisms are needed to enable early and continuous engagement across the technical, policymaking, and operational communities. The essay then uses twelve interviews with AI and autonomy experts, which provide insight into what the technical and policymaking communities consider fundamental to the progression of responsible autonomous development. It also recommends practical steps for connecting the relevant stakeholders. The goal is to provide the Department of Defense with concrete steps for building organizational structures or processes that create incentives for engagement across communities.","PeriodicalId":11772,"journal":{"name":"Ethics & International Affairs","volume":" September","pages":"272 - 286"},"PeriodicalIF":1.3000,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Toward a Balanced Approach: Bridging the Military, Policy, and Technical Communities\",\"authors\":\"Arun Seraphin, Wilson Miles\",\"doi\":\"10.1017/S0892679423000321\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract The development of new technologies that enable autonomous weapon systems poses a challenge to policymakers and technologists trying to balance military requirements with international obligations and ethical norms. Some have called for new international agreements to restrict or ban lethal autonomous weapon systems. Given the tactical and strategic value of the technologies and the proliferation of threats, the military continues to explore the development of new autonomous technologies to execute national security missions. The rapid global diffusion and dual-use nature of autonomous systems necessitate a proactive approach and a shared understanding of the technical realities, threats, military relevance, and strategic implications of these technologies from these communities. 
Ultimately, developing AI-enabled defense systems that adhere to global norms and relevant treaty obligations, leverage emerging technologies, and provide operational advantages is possible. The development of a workable and realistic regulatory framework governing the use of lethal autonomous weapons and the artificial intelligence that underpins autonomy will be best supported through a coordinated effort of the regulatory community, technologists, and military to create requirements that reflect the global proliferation and rapidly evolving threat of autonomous weapon systems. This essay seeks to demonstrate that: (1) the lack of coherent dialogue between the technical and policy communities can create security, ethical, and legal dilemmas; and (2) bridging the military, technical, and policy communities can lead to technology with constraints that balance the needs of military, technical, and policy communities. It uses case studies to show why mechanisms are needed to enable early and continuous engagement across the technical, policymaking, and operational communities. The essay then uses twelve interviews with AI and autonomy experts, which provide insight into what the technical and policymaking communities consider fundamental to the progression of responsible autonomous development. It also recommends practical steps for connecting the relevant stakeholders. 
The goal is to provide the Department of Defense with concrete steps for building organizational structures or processes that create incentives for engagement across communities.\",\"PeriodicalId\":11772,\"journal\":{\"name\":\"Ethics & International Affairs\",\"volume\":\" September\",\"pages\":\"272 - 286\"},\"PeriodicalIF\":1.3000,\"publicationDate\":\"2023-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ethics & International Affairs\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1017/S0892679423000321\",\"RegionNum\":3,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ETHICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ethics & International Affairs","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1017/S0892679423000321","RegionNum":3,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ETHICS","Score":null,"Total":0}
Toward a Balanced Approach: Bridging the Military, Policy, and Technical Communities
Abstract

The development of new technologies that enable autonomous weapon systems poses a challenge to policymakers and technologists trying to balance military requirements with international obligations and ethical norms. Some have called for new international agreements to restrict or ban lethal autonomous weapon systems. Given the tactical and strategic value of these technologies and the proliferation of threats, the military continues to explore the development of new autonomous technologies to execute national security missions. The rapid global diffusion and dual-use nature of autonomous systems necessitate a proactive approach and a shared understanding, among these communities, of the technical realities, threats, military relevance, and strategic implications of these technologies. Ultimately, it is possible to develop AI-enabled defense systems that adhere to global norms and relevant treaty obligations, leverage emerging technologies, and provide operational advantages. The development of a workable and realistic regulatory framework governing the use of lethal autonomous weapons, and the artificial intelligence that underpins autonomy, will be best supported by a coordinated effort among regulators, technologists, and the military to create requirements that reflect the global proliferation and rapidly evolving threat of autonomous weapon systems. This essay seeks to demonstrate that (1) the lack of coherent dialogue between the technical and policy communities can create security, ethical, and legal dilemmas; and (2) bridging the military, technical, and policy communities can yield technology with constraints that balance the needs of all three. It uses case studies to show why mechanisms are needed to enable early and continuous engagement across the technical, policymaking, and operational communities. The essay then draws on twelve interviews with AI and autonomy experts, which provide insight into what the technical and policymaking communities consider fundamental to the responsible development of autonomous systems. It also recommends practical steps for connecting the relevant stakeholders. The goal is to provide the Department of Defense with concrete steps for building organizational structures or processes that create incentives for engagement across communities.