{"title":"我们可以将不透明的人工智能技术用于法律目的吗?","authors":"G. Adamson","doi":"10.1109/ISTAS50296.2020.9462204","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI) is a technology receiving significant attention from lawmakers, courts, and regulators. An aspect of this attention is an interest in understanding how AI works when applied to a process of law, or to a regulated application of technology such as driverless vehicles. One approach is to seek to understand what the AI technology does, with goals including “transparency” and “explainability”. This paper considers these concepts from a law and technology perspective. Research in this area commonly examines the challenge of “black box” technologies, particularly the approach of “post hoc explainability”. This paper points out that the post hoc approach provides an inference, rather than an actual description of AI behavior. It considers circumstances in which the post hoc approach may be satisfactory, and those involving arbitrary power in which it should not be used, as inconsistent with the principle of regularity in the rule of law. It recommends that the output of non-transparent AI technologies should necessarily be viewed critically. It concludes that human attention is required in determining whether or not to accept AI technology explanations.","PeriodicalId":196560,"journal":{"name":"2020 IEEE International Symposium on Technology and Society (ISTAS)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Can we use non-transparent artificial intelligence technologies for legal purposes?\",\"authors\":\"G. Adamson\",\"doi\":\"10.1109/ISTAS50296.2020.9462204\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence (AI) is a technology receiving significant attention from lawmakers, courts, and regulators. An aspect of this attention is an interest in understanding how AI works when applied to a process of law, or to a regulated application of technology such as driverless vehicles. One approach is to seek to understand what the AI technology does, with goals including “transparency” and “explainability”. This paper considers these concepts from a law and technology perspective. Research in this area commonly examines the challenge of “black box” technologies, particularly the approach of “post hoc explainability”. This paper points out that the post hoc approach provides an inference, rather than an actual description of AI behavior. It considers circumstances in which the post hoc approach may be satisfactory, and those involving arbitrary power in which it should not be used, as inconsistent with the principle of regularity in the rule of law. It recommends that the output of non-transparent AI technologies should necessarily be viewed critically. 
It concludes that human attention is required in determining whether or not to accept AI technology explanations.\",\"PeriodicalId\":196560,\"journal\":{\"name\":\"2020 IEEE International Symposium on Technology and Society (ISTAS)\",\"volume\":\"5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-11-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE International Symposium on Technology and Society (ISTAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISTAS50296.2020.9462204\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Symposium on Technology and Society (ISTAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISTAS50296.2020.9462204","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Can we use non-transparent artificial intelligence technologies for legal purposes?
Artificial intelligence (AI) is a technology receiving significant attention from lawmakers, courts, and regulators. An aspect of this attention is an interest in understanding how AI works when applied to a process of law, or to a regulated application of technology such as driverless vehicles. One approach is to seek to understand what the AI technology does, with goals including “transparency” and “explainability”. This paper considers these concepts from a law and technology perspective. Research in this area commonly examines the challenge of “black box” technologies, particularly the approach of “post hoc explainability”. This paper points out that the post hoc approach provides an inference about AI behavior, rather than an actual description of it. It considers circumstances in which the post hoc approach may be satisfactory, and circumstances involving arbitrary power in which it should not be used, because it is inconsistent with the principle of regularity in the rule of law. It recommends that the output of non-transparent AI technologies should always be viewed critically. It concludes that human attention is required in determining whether to accept AI technology explanations.
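To make the “inference, not description” point concrete, the following is a minimal sketch (not from the paper) of one common post hoc technique: fitting an interpretable surrogate model to a black-box model's outputs. All model choices and names here are illustrative assumptions; the sketch assumes scikit-learn and NumPy are available.

# Illustrative sketch (not from the paper): a "post hoc" explanation built by
# fitting an interpretable surrogate to a black-box model's outputs. The
# surrogate only approximates the black box, so any "explanation" read off it
# is an inference about the AI's behavior, not a description of it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The opaque model whose decisions we want to "explain".
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Post hoc surrogate: a shallow, human-readable tree trained to mimic the
# black box's predicted labels rather than the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box. Anything below
# 1.0 means the readable explanation diverges from the actual behavior.
fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))
print(f"Surrogate fidelity to black box: {fidelity:.3f}")

The fidelity score is the crux: whenever it falls short of 1.0, the surrogate's rules describe a model that is not quite the one making the decisions, which is why the paper argues such outputs warrant critical human scrutiny in legal settings.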