{"title":"An ethical boundary agent to prevent the abdication of responsibility in combat systems","authors":"T. D. Greef, A. Leveringhaus","doi":"10.1145/2501907.2501949","DOIUrl":null,"url":null,"abstract":"Remote controlled combat systems and future autonomous systems create unprecedented capabilities to control the delivery of military force. However, there is a growing concern, which is only starting to be addressed now, that ethical values are violated as a result of high levels of autonomy and remote control. Before combat technologies can be deployed, we need to ensure that their usage enhances, rather than undermines, human decision-making capacities. To do this, we propose combining the idea of an ethical boundary agent with a partnership approach. The partnership approach is seen as a promising area for improved efficiency in interactive systems. In this paper, we claim that the ethical boundary agent safeguards compliance with implemented legal and moral boundaries. Hypothetically, such an agent prevents human operators from abdicating from their responsibilities. It does this by challenging operators to think critically about whether actions meet relevant ethical standards.","PeriodicalId":279162,"journal":{"name":"Proceedings of the 31st European Conference on Cognitive Ergonomics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 31st European Conference on Cognitive Ergonomics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2501907.2501949","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1
Abstract
Remotely controlled combat systems and future autonomous systems create unprecedented capabilities to control the delivery of military force. However, there is a growing concern, only now beginning to be addressed, that high levels of autonomy and remote control lead to violations of ethical values. Before combat technologies can be deployed, we need to ensure that their use enhances, rather than undermines, human decision-making capacities. To this end, we propose combining the idea of an ethical boundary agent with a partnership approach. The partnership approach is seen as a promising way to improve efficiency in interactive systems. In this paper, we claim that the ethical boundary agent safeguards compliance with implemented legal and moral boundaries. Our hypothesis is that such an agent prevents human operators from abdicating their responsibilities: it does so by challenging operators to think critically about whether their actions meet the relevant ethical standards.
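The abstract describes the boundary agent only at a conceptual level. The sketch below is a hypothetical illustration, not the authors' design: the class names, the two example boundaries (distinction and proportionality), and the risk threshold are assumptions introduced purely to show the idea that the agent challenges the operator to justify an action and records that judgement, rather than approving or vetoing the action itself.

```python
# Illustrative sketch only (not from the paper): a minimal "ethical boundary agent"
# that checks a proposed action against encoded legal/moral boundaries and, instead
# of deciding autonomously, challenges the human operator to justify the action,
# keeping responsibility with the human (the partnership idea).
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ProposedAction:
    """A hypothetical engagement request submitted by the operator."""
    target_id: str
    target_verified_combatant: bool
    estimated_collateral_risk: float  # 0.0 (none) .. 1.0 (certain)


@dataclass
class Boundary:
    """One implemented legal or moral boundary, expressed as a predicate."""
    name: str
    holds: Callable[[ProposedAction], bool]
    challenge: str  # question posed to the operator when the boundary is at risk


class EthicalBoundaryAgent:
    """Surfaces boundary concerns and requires an explicit, recorded operator
    judgement; it never releases or blocks an action on its own authority."""

    def __init__(self, boundaries: List[Boundary]):
        self.boundaries = boundaries
        self.audit_log: List[str] = []

    def review(self, action: ProposedAction,
               operator_justification: Callable[[str], str]) -> bool:
        # Collect every boundary the proposed action puts at risk.
        concerns = [b for b in self.boundaries if not b.holds(action)]
        if not concerns:
            self.audit_log.append(f"{action.target_id}: no boundary concerns raised")
            return True
        for b in concerns:
            # The operator must answer the challenge; silence withholds the action.
            answer = operator_justification(b.challenge)
            self.audit_log.append(f"{action.target_id}: {b.name} challenged -> '{answer}'")
            if not answer.strip():
                return False
        return True


if __name__ == "__main__":
    boundaries = [
        Boundary("distinction",
                 lambda a: a.target_verified_combatant,
                 "The target is not verified as a combatant. On what grounds do you proceed? "),
        Boundary("proportionality",
                 lambda a: a.estimated_collateral_risk < 0.2,
                 "Estimated collateral risk is high. Why is the action proportionate? "),
    ]
    agent = EthicalBoundaryAgent(boundaries)
    action = ProposedAction("T-17", target_verified_combatant=False,
                            estimated_collateral_risk=0.35)
    released = agent.review(action, operator_justification=input)  # prompts the operator
    print("released" if released else "withheld")
    print(agent.audit_log)
```

In this reading, responsibility cannot be abdicated because the action is only released after the operator has answered each challenge, and those answers are logged under the operator's own judgement.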