{"title":"自主武器的风险:以残疾人权利为中心的分析","authors":"Mariana Díaz Figueroa, Anderson Henao Orozco, Jesús Martínez, Wanda Muñoz Jaime","doi":"10.1017/S1816383122000881","DOIUrl":null,"url":null,"abstract":"Abstract Autonomous weapons systems have been the subject of heated debate since 2010, when Philip Alston, then Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions, brought the issue to the international spotlight in his interim report to the United Nations (UN) General Assembly 65th Session. Alston affirmed that “automated technologies are becoming increasingly sophisticated, and artificial intelligence reasoning and decision-making abilities are actively being researched and receive significant funding. States’ militaries and defence industry developers are working to develop ‘fully autonomous capability’, such that technological advances in artificial intelligence will enable unmanned aerial vehicles to make and execute complex decisions, including the identification of human targets and the ability to kill them.”1 Later, in 2013, Christof Heyns, who was Special Rapporteur for Extrajudicial, Summary or Arbitrary Executions at the time, published a report that elaborated further on the issues raised by what he called “lethal autonomous robotics”.2 Following a recommendation by Advisory Board on Disarmament Matters at the UN General Assembly 68th Session, the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, as amended on 21 December 2021, started discussing autonomous weapons systems in 2014. Then, the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE on LAWS)3 was created in 2016 to focus on this issue.4 While the group has kept meeting since then, no clear steps have been taken yet towards a normative framework on autonomous weapons as of September 2022. 
In all these years, persons with disabilities – including conflict survivors – have not been included in discussions, nor has the disability perspective been reflected in international debate on autonomous weapons. Only recently has there been any effort to consider the rights of persons with disabilities when examining ethical questions related to artificial intelligence (AI). In this article, we will examine how and why autonomous weapons have a disproportionate impact on persons with disabilities, because of the discrimination that results from a combination of factors such as bias in AI, bias in the military and the police, barriers to justice and humanitarian assistance in situations of armed conflict, and the lack of consultation and participation of persons with disabilities and their representative organizations on issues related to autonomy in weapons systems.","PeriodicalId":46925,"journal":{"name":"International Review of the Red Cross","volume":"33 1","pages":"278 - 305"},"PeriodicalIF":0.6000,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The risks of autonomous weapons: An analysis centred on the rights of persons with disabilities\",\"authors\":\"Mariana Díaz Figueroa, Anderson Henao Orozco, Jesús Martínez, Wanda Muñoz Jaime\",\"doi\":\"10.1017/S1816383122000881\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Autonomous weapons systems have been the subject of heated debate since 2010, when Philip Alston, then Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions, brought the issue to the international spotlight in his interim report to the United Nations (UN) General Assembly 65th Session. 
Alston affirmed that “automated technologies are becoming increasingly sophisticated, and artificial intelligence reasoning and decision-making abilities are actively being researched and receive significant funding. States’ militaries and defence industry developers are working to develop ‘fully autonomous capability’, such that technological advances in artificial intelligence will enable unmanned aerial vehicles to make and execute complex decisions, including the identification of human targets and the ability to kill them.”1 Later, in 2013, Christof Heyns, who was Special Rapporteur for Extrajudicial, Summary or Arbitrary Executions at the time, published a report that elaborated further on the issues raised by what he called “lethal autonomous robotics”.2 Following a recommendation by Advisory Board on Disarmament Matters at the UN General Assembly 68th Session, the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, as amended on 21 December 2021, started discussing autonomous weapons systems in 2014. Then, the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE on LAWS)3 was created in 2016 to focus on this issue.4 While the group has kept meeting since then, no clear steps have been taken yet towards a normative framework on autonomous weapons as of September 2022. In all these years, persons with disabilities – including conflict survivors – have not been included in discussions, nor has the disability perspective been reflected in international debate on autonomous weapons. Only recently has there been any effort to consider the rights of persons with disabilities when examining ethical questions related to artificial intelligence (AI). 
In this article, we will examine how and why autonomous weapons have a disproportionate impact on persons with disabilities, because of the discrimination that results from a combination of factors such as bias in AI, bias in the military and the police, barriers to justice and humanitarian assistance in situations of armed conflict, and the lack of consultation and participation of persons with disabilities and their representative organizations on issues related to autonomy in weapons systems.\",\"PeriodicalId\":46925,\"journal\":{\"name\":\"International Review of the Red Cross\",\"volume\":\"33 1\",\"pages\":\"278 - 305\"},\"PeriodicalIF\":0.6000,\"publicationDate\":\"2022-11-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Review of the Red Cross\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1017/S1816383122000881\",\"RegionNum\":4,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"LAW\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Review of the Red Cross","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1017/S1816383122000881","RegionNum":4,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"LAW","Score":null,"Total":0}
Abstract

Autonomous weapons systems have been the subject of heated debate since 2010, when Philip Alston, then Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, brought the issue to the international spotlight in his interim report to the United Nations (UN) General Assembly 65th Session. Alston affirmed that "automated technologies are becoming increasingly sophisticated, and artificial intelligence reasoning and decision-making abilities are actively being researched and receive significant funding. States' militaries and defence industry developers are working to develop 'fully autonomous capability', such that technological advances in artificial intelligence will enable unmanned aerial vehicles to make and execute complex decisions, including the identification of human targets and the ability to kill them."1 Later, in 2013, Christof Heyns, who was Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions at the time, published a report that elaborated further on the issues raised by what he called "lethal autonomous robotics".2 Following a recommendation by the Advisory Board on Disarmament Matters at the UN General Assembly 68th Session, discussions on autonomous weapons systems started in 2014 under the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, as amended on 21 December 2001. The Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE on LAWS)3 was then created in 2016 to focus on this issue.4 While the group has kept meeting since then, no clear steps had been taken towards a normative framework on autonomous weapons as of September 2022.

In all these years, persons with disabilities – including conflict survivors – have not been included in these discussions, nor has the disability perspective been reflected in the international debate on autonomous weapons. Only recently has there been any effort to consider the rights of persons with disabilities when examining ethical questions related to artificial intelligence (AI). In this article, we examine how and why autonomous weapons have a disproportionate impact on persons with disabilities, owing to discrimination that results from a combination of factors: bias in AI, bias in the military and the police, barriers to justice and humanitarian assistance in situations of armed conflict, and the lack of consultation and participation of persons with disabilities and their representative organizations on issues related to autonomy in weapons systems.