{"title":"论在军事行动中使用人工智能自动化的伦理问题","authors":"Wolfgang Koch;Dierk Spreen;Kairi Talves;Wolfgang Wagner;Eleri Lillemäe;Matthias Klaus;Auli Viidalepp;Camilla Guldahl Cooper;Janar Pekarev","doi":"10.1109/TTS.2024.3405309","DOIUrl":null,"url":null,"abstract":"In this paper, we explore the ethical dimension of artificial intelligent automation (often called AI) in military systems engineering, and present conclusions. Morality, ethics, and ethos, as well as technical excellence, need to be strengthened in both the developers and users of artificial intelligent automation. Only then can critical innovations like cognitive and volitive assistance systems or automated weapon systems be wielded efficiently and beneficially within the given legal constraints. Meaningful human control takes center stage here, which we understand in a broad sense as involving both technical controllability and accountability for outcomes. Explainable AI is essential for this task and requires rigorous testing to ensure deliberate decision making by the user. The military and industrial communities must work together to ensure adequate training for responsible use of AI-automation. Finally, these developments need to be accompanied by a politically supported open discourse, involving as many stakeholders from diverse backgrounds as possible. This serves as an extensive approach to both manage the risks of these new technologies and prevent exaggerated risk avoidance impeding necessary development.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 2","pages":"231-241"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On the Ethics of Employing Artificial Intelligent Automation in Military Operational Contexts\",\"authors\":\"Wolfgang Koch;Dierk Spreen;Kairi Talves;Wolfgang Wagner;Eleri Lillemäe;Matthias Klaus;Auli Viidalepp;Camilla Guldahl Cooper;Janar Pekarev\",\"doi\":\"10.1109/TTS.2024.3405309\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we explore the ethical dimension of artificial intelligent automation (often called AI) in military systems engineering, and present conclusions. Morality, ethics, and ethos, as well as technical excellence, need to be strengthened in both the developers and users of artificial intelligent automation. Only then can critical innovations like cognitive and volitive assistance systems or automated weapon systems be wielded efficiently and beneficially within the given legal constraints. Meaningful human control takes center stage here, which we understand in a broad sense as involving both technical controllability and accountability for outcomes. Explainable AI is essential for this task and requires rigorous testing to ensure deliberate decision making by the user. The military and industrial communities must work together to ensure adequate training for responsible use of AI-automation. Finally, these developments need to be accompanied by a politically supported open discourse, involving as many stakeholders from diverse backgrounds as possible. 
This serves as an extensive approach to both manage the risks of these new technologies and prevent exaggerated risk avoidance impeding necessary development.\",\"PeriodicalId\":73324,\"journal\":{\"name\":\"IEEE transactions on technology and society\",\"volume\":\"5 2\",\"pages\":\"231-241\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on technology and society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10538398/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on technology and society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10538398/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
On the Ethics of Employing Artificial Intelligent Automation in Military Operational Contexts
In this paper, we explore the ethical dimension of artificial intelligent automation (often called AI) in military systems engineering, and present conclusions. Morality, ethics, and ethos, as well as technical excellence, need to be strengthened among both the developers and the users of artificial intelligent automation. Only then can critical innovations such as cognitive and volitive assistance systems or automated weapon systems be wielded efficiently and beneficially within the given legal constraints. Meaningful human control takes center stage here, which we understand in a broad sense as involving both technical controllability and accountability for outcomes. Explainable AI is essential for this task and requires rigorous testing to ensure deliberate decision making by the user. The military and industrial communities must work together to ensure adequate training for the responsible use of AI-automation. Finally, these developments need to be accompanied by a politically supported open discourse, involving as many stakeholders from diverse backgrounds as possible. This serves as a comprehensive approach to both manage the risks of these new technologies and prevent exaggerated risk avoidance from impeding necessary development.