Killer Robots and Human Dignity
Author: Daniel Lim
DOI: 10.1145/3306618.3314291
Published in: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
Publication date: 2019-01-27
Citations: 7
Abstract
Lethal Autonomous Weapon Systems (LAWS) have become the center of an internationally relevant ethical debate. Deontological arguments based on putative legal compliance failures and the creation of accountability gaps, along with wide consequentialist arguments based on factors like the ease of engaging in wars, have been leveraged by a number of different states and organizations to try to reach global consensus on a ban on LAWS. This paper focuses on one strand of deontological arguments: those based on human dignity. Merely asserting that LAWS pose a threat to human dignity would be question begging. Independent evidence based on a morally relevant distinction between humans and LAWS is needed. There are at least four reasons to think that the capacity for emotion cannot be such a morally relevant distinction. First, if the concept of human dignity is given a subjective definition, whether lethal force is administered by humans or by LAWS seems to be irrelevant. Second, it is far from clear that human combatants have the relevant capacity for emotion, or that the capacity is exercised in the relevant circumstances. Third, the capacity for emotion can actually be an impediment to a combatant's ability to treat an enemy respectfully. Fourth, there is strong inductive evidence to believe that any capacity, when sufficiently well described, can be carried out by artificially intelligent programs.