As usage of military artificial intelligence (AI) expands, so will anti-AI countermeasures, known as adversarials. International humanitarian law offers many protections through its obligations in attack, but the nature of adversarials generates ambiguity regarding which party (system user or opponent) should incur attacker responsibilities. This article offers a cognitive framework for legally analyzing adversarials. It explores the technical, tactical and legal dimensions of adversarials, and proposes a model based on foreseeable harm to determine when legal responsibility should transfer to the countermeasure's author. The article provides illumination to the future combatant who ponders, before putting on their adversarial sunglasses: “Am I conducting an attack?”
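To make the notion of an "adversarial" concrete: the sunglasses allude to physical adversarial examples, inputs crafted so that a small, human-imperceptible perturbation flips a classifier's output. As an editorial illustration only (the article itself contains no code), the following minimal sketch shows the widely used Fast Gradient Sign Method, assuming a PyTorch image classifier; `model`, `x` and `label` are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example with the Fast Gradient Sign Method.

    A perturbation of magnitude epsilon per pixel is usually invisible
    to a human observer, yet it can change the model's classification.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that maximally increases the loss,
    # bounded element-wise by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Physical countermeasures such as adversarial patches or printed eyewear exploit the same gradient-based weakness, which is what makes the combatant's question more than rhetorical: the wearer manipulates the opponent's targeting system without firing a shot.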
The notion of solidarity, although not new to the humanitarian sector, has re-emerged in recent discussions about effective and ethical humanitarian action, particularly in contexts such as Ukraine and Myanmar, where the traditional humanitarian principles have come under pressure. Because solidarity appears as a good but can also involve selectivity and privilege, and because it risks perpetuating militarism and normalizing civilian participation in that militarism, the notion merits rich and rigorous thinking. This article explores how solidarity is being invoked by those currently re-emphasizing its importance, and what it might mean in practice in today's humanitarian contexts. The article argues that if solidary action involves not only a political stance but also solidary working methods, then the recent calls for solidarity demand respect for the variety of principles and practices within the humanitarian ecosystem, while nevertheless upholding the mutual obligations owed within that professional community – that is, within careful limits as to what counts as humanitarian action.
The protection of non-combatants in times of autonomous warfare raises the question of whether the international protective emblem is still fit for purpose. (Fully) autonomous weapon systems are often launched from a great distance, and their operators may have no opportunity to notice protective emblems at the point of impact; such weapon systems will therefore need a way to detect protective emblems and react accordingly. To this end, the present contribution proposes a cross-frequency protective emblem. Its technical deployment is considered, as well as its interpretation by machine learning methods, and approaches are explored for how software can recognize protective emblems under various boundary conditions. Since a new protective emblem could also be misused, methods of distribution are considered, including encryption and authentication of the received signal. Finally, ethical aspects are examined.
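On the authentication point: one standard building block would be a keyed message authentication code, so that a receiver can check that an emblem beacon was produced by a holder of a legitimate key. The sketch below is a minimal illustration using Python's standard hmac library, not the article's actual scheme; the key name and message layout are invented for the example, and real key distribution is precisely the hard problem the contribution discusses.

```python
import hmac
import hashlib

# Hypothetical shared key; in practice, keys would have to be distributed
# and revoked through a trusted authority, which is the open problem.
SHARED_KEY = b"example-key-do-not-use-in-practice"
TAG_LEN = hashlib.sha256().digest_size  # 32 bytes

def sign_emblem_beacon(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so receivers can verify the beacon."""
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_emblem_beacon(message: bytes) -> bool:
    """Verify the tag; compare_digest resists timing side channels."""
    payload, tag = message[:-TAG_LEN], message[-TAG_LEN:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```

A weapon system receiving a cross-frequency beacon would verify it before treating the location as protected; note the asymmetry that a failed check proves only that the signal is unauthenticated, never that the site is a lawful target.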
In situations of armed conflict, access to digital technology can save lives. However, the digitalization of armed conflict also brings new threats for civilians. Over the past decade, digital technologies have been used in armed conflict to disrupt critical civilian infrastructure and services, to incite violence against civilian populations, and to undermine humanitarian relief efforts. Moreover, in ever-more interdependent digital and physical environments, civilians and civilian infrastructure are not only in the crosshairs of hostile operations but also increasingly drawn upon to support military operations, blurring the lines between what is military and what is civilian.