{"title":"人工智能的挑战和人权保护的不足","authors":"Hin-Yan Liu","doi":"10.1080/0731129X.2021.1903709","DOIUrl":null,"url":null,"abstract":"My aim in this article is to set out some counter-intuitive claims about the challenges posed by artificial intelligence (AI) applications to the protection and enjoyment of human rights and to be your guide through my unorthodox ideas. While there are familiar human rights issues raised by AI and its applications, these are perhaps the easiest of the challenges because they are already recognized by the human rights regime as problems. Instead, the more pernicious challenges are those that have yet to be identified or articulated, because they arise from new affordances rather than directly through AI modeled as a technology. I suggest that we need to actively explore the potential problem space on this basis. I suggest that we need to adopt models and metaphors that systematically exclude the possibility of applying the human rights regime to AI applications. This orientation will present us with the difficult, intractable problems that most urgently require responses. There are convincing ways of understanding AI that lock out the very possibility for human rights responses and this should be grounds for serious concern. I suggest that responses need to exploit both sets of insights I present in this paper: first that proactive and systematic searches of the potential problem space need to be continuously conducted to find the problems that require responses; and second that the monopoly that the human rights regime holds with regards to addressing harm and suffering needs to be broken so that we can deploy a greater range of barriers against failures to recognize and remedy AI-induced wrongs.","PeriodicalId":35931,"journal":{"name":"Criminal Justice Ethics","volume":"40 1","pages":"2 - 22"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0731129X.2021.1903709","citationCount":"2","resultStr":"{\"title\":\"AI Challenges and the Inadequacy of Human Rights Protections\",\"authors\":\"Hin-Yan Liu\",\"doi\":\"10.1080/0731129X.2021.1903709\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"My aim in this article is to set out some counter-intuitive claims about the challenges posed by artificial intelligence (AI) applications to the protection and enjoyment of human rights and to be your guide through my unorthodox ideas. While there are familiar human rights issues raised by AI and its applications, these are perhaps the easiest of the challenges because they are already recognized by the human rights regime as problems. Instead, the more pernicious challenges are those that have yet to be identified or articulated, because they arise from new affordances rather than directly through AI modeled as a technology. I suggest that we need to actively explore the potential problem space on this basis. I suggest that we need to adopt models and metaphors that systematically exclude the possibility of applying the human rights regime to AI applications. This orientation will present us with the difficult, intractable problems that most urgently require responses. There are convincing ways of understanding AI that lock out the very possibility for human rights responses and this should be grounds for serious concern. 
I suggest that responses need to exploit both sets of insights I present in this paper: first that proactive and systematic searches of the potential problem space need to be continuously conducted to find the problems that require responses; and second that the monopoly that the human rights regime holds with regards to addressing harm and suffering needs to be broken so that we can deploy a greater range of barriers against failures to recognize and remedy AI-induced wrongs.\",\"PeriodicalId\":35931,\"journal\":{\"name\":\"Criminal Justice Ethics\",\"volume\":\"40 1\",\"pages\":\"2 - 22\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-01-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1080/0731129X.2021.1903709\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Criminal Justice Ethics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/0731129X.2021.1903709\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Criminal Justice Ethics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/0731129X.2021.1903709","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Social Sciences","Score":null,"Total":0}
AI Challenges and the Inadequacy of Human Rights Protections
My aim in this article is to set out some counter-intuitive claims about the challenges that artificial intelligence (AI) applications pose to the protection and enjoyment of human rights, and to be your guide through my unorthodox ideas. While AI and its applications raise familiar human rights issues, these are perhaps the easiest of the challenges because the human rights regime already recognizes them as problems. Instead, the more pernicious challenges are those that have yet to be identified or articulated, because they arise from new affordances rather than directly from AI modeled as a technology. I suggest that we need to actively explore the potential problem space on this basis, and that we need to adopt models and metaphors that systematically exclude the possibility of applying the human rights regime to AI applications. This orientation will confront us with the difficult, intractable problems that most urgently require responses. There are convincing ways of understanding AI that lock out the very possibility of human rights responses, and this should be grounds for serious concern. I suggest that responses need to exploit both sets of insights presented in this paper: first, that proactive and systematic searches of the potential problem space need to be conducted continuously to find the problems that require responses; and second, that the monopoly the human rights regime holds with regard to addressing harm and suffering needs to be broken, so that we can deploy a greater range of barriers against failures to recognize and remedy AI-induced wrongs.