{"title":"自主武器系统开发和使用的道德案例","authors":"Erich Riesen","doi":"10.1080/15027570.2022.2124022","DOIUrl":null,"url":null,"abstract":"ABSTRACT Autonomous Weapon Systems (AWS) are artificial intelligence systems that can make and act on decisions concerning the termination of enemy soldiers and installations without direct intervention from a human being. In this article, I provide the positive moral case for the development and use of supervised and fully autonomous weapons that can reliably adhere to the laws of war. Two strong, prima facie obligations make up the positive case. First, we have a strong moral reason to deploy AWS (in an otherwise just war) because such systems decrease the psychological and moral risk of soldiers and would-be soldiers. Drones protect against lethal risk, AWS protect against psychological and moral risk in addition to lethal risk. Second, we have a prima facie obligation to develop such technologies because, once developed, we could employ forms of non-lethal warfare that would substantially reduce the risk of suffering and death for enemy combatants and civilians alike. These two arguments, covering both sides of a conflict, represent the normative hill that those in favor of a ban on autonomous weapons must overcome. 
Finally, I demonstrate that two recent objections to AWS fail because they misconstrue the way in which technology is used and conceptualized in modern warfare.","PeriodicalId":39180,"journal":{"name":"Journal of Military Ethics","volume":"21 1","pages":"132 - 150"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"The Moral Case for the Development and Use of Autonomous Weapon Systems\",\"authors\":\"Erich Riesen\",\"doi\":\"10.1080/15027570.2022.2124022\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT Autonomous Weapon Systems (AWS) are artificial intelligence systems that can make and act on decisions concerning the termination of enemy soldiers and installations without direct intervention from a human being. In this article, I provide the positive moral case for the development and use of supervised and fully autonomous weapons that can reliably adhere to the laws of war. Two strong, prima facie obligations make up the positive case. First, we have a strong moral reason to deploy AWS (in an otherwise just war) because such systems decrease the psychological and moral risk of soldiers and would-be soldiers. Drones protect against lethal risk, AWS protect against psychological and moral risk in addition to lethal risk. Second, we have a prima facie obligation to develop such technologies because, once developed, we could employ forms of non-lethal warfare that would substantially reduce the risk of suffering and death for enemy combatants and civilians alike. These two arguments, covering both sides of a conflict, represent the normative hill that those in favor of a ban on autonomous weapons must overcome. 
Finally, I demonstrate that two recent objections to AWS fail because they misconstrue the way in which technology is used and conceptualized in modern warfare.\",\"PeriodicalId\":39180,\"journal\":{\"name\":\"Journal of Military Ethics\",\"volume\":\"21 1\",\"pages\":\"132 - 150\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-04-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Military Ethics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/15027570.2022.2124022\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Arts and Humanities\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Military Ethics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/15027570.2022.2124022","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Arts and Humanities","Score":null,"Total":0}
The Moral Case for the Development and Use of Autonomous Weapon Systems
ABSTRACT Autonomous Weapon Systems (AWS) are artificial intelligence systems that can make and act on decisions concerning the termination of enemy soldiers and installations without direct intervention from a human being. In this article, I provide the positive moral case for the development and use of supervised and fully autonomous weapons that can reliably adhere to the laws of war. Two strong, prima facie obligations make up the positive case. First, we have a strong moral reason to deploy AWS (in an otherwise just war) because such systems decrease the psychological and moral risk to soldiers and would-be soldiers. Whereas drones protect only against lethal risk, AWS protect against psychological and moral risk in addition to lethal risk. Second, we have a prima facie obligation to develop such technologies because, once developed, we could employ forms of non-lethal warfare that would substantially reduce the risk of suffering and death for enemy combatants and civilians alike. These two arguments, covering both sides of a conflict, represent the normative hill that those in favor of a ban on autonomous weapons must overcome. Finally, I demonstrate that two recent objections to AWS fail because they misconstrue the way in which technology is used and conceptualized in modern warfare.