Title: The political viability of AI on the battlefield: Examining US public support, trust, and blame dynamics
Authors: Zachary Zwald, Ryan Kennedy, Adam Ozer
DOI: 10.1177/00223433241290885
Journal: Journal of Peace Research, Vol. 93, No. 1
Publication date: 2025-03-20 (Journal Article)
Impact factor: 3.4 (JCR Q1, International Relations; Region 1, Sociology)
Citations: 0
Abstract
This study examines how the public views the use of artificial intelligence (AI) on the battlefield. We conduct three survey experiments on a representative sample of the US public to examine how variation in the level of human-machine autonomy affects the public’s support for the use of military force, the public’s trust in such systems (both trust in their reliability and interpersonal trust), and the level of blame the public places on drone operators when a mistake results in civilian deaths. Existing research on these questions remains quite thin, the available data often point in many directions, and the structure of those studies tends to prevent comparison of divergent results. Our findings show that variation between full machine and full human autonomy has little effect on the public’s trust in reliability. We also find that both interpersonal trust in and blame placed on the military operator decline as machine autonomy increases. These results suggest multiple paths for future research and provide insight into the ongoing policy debate over the viability of the Martens Clause as a basis for banning the military use of AI-enabled systems.
Journal description:
Journal of Peace Research is an interdisciplinary, international, peer-reviewed bimonthly journal of scholarly work in peace research. Edited at the Peace Research Institute Oslo (PRIO) by an international editorial committee, Journal of Peace Research strives for a global focus on conflict and peacemaking. Since its establishment in 1964, authors from over 50 countries have published in JPR. The journal encourages a wide conception of peace but focuses on the causes of violence and on conflict resolution. Without sacrificing the requirements for theoretical rigour and methodological sophistication, articles directed towards ways and means of peace are favoured.