{"title":"算法的奴隶?为什么“获得解释的权利”可能不是你想要的补救措施","authors":"L. Edwards, Michael Veale","doi":"10.2139/SSRN.2972855","DOIUrl":null,"url":null,"abstract":"Algorithms, particularly of the machine learning (ML) variety, are increasingly important to individuals' lives, but have caused a range of concerns evolving mainly around unfairness, discrimination and opacity. Transparency in the form of a \"right to an explanation\" has emerged as a compellingly attractive remedy since it intuitively presents as a means to \"open the black box\", hence allowing individual challenge and redress, as well as potential to instil accountability to the public in ML systems. In the general furore over algorithmic bias and other issues laid out in section 2, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy to algorithmic harms, particularly in some of the core \"algorithmic war stories\" that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law is restrictive on when any explanation-related right can be triggered, and in many places is unclear, or even seems paradoxical. Second (section 4), even were some of these restrictions to be navigated, the way that explanations are conceived of legally — as \"meaningful information about the logic of processing\" — is unlikely to be provided by the kind of ML \"explanations\" computer scientists have been developing. ML explanations are restricted both by the type of explanation sought, the multi-dimensionality of the domain and the type of user seeking an explanation. However “subject-centric\" explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than decompositional explanations in dodging developers' worries of IP or trade secrets disclosure. As an interim conclusion then, while convinced that recent research in ML explanations shows promise, we fear that the search for a \"right to an explanation\" in the GDPR may be at best distracting, and at worst nurture a new kind of \"transparency fallacy\". However, in our final sections, we argue that other parts of the GDPR related (i) to other individual rights including the right to erasure (\"right to be forgotten\") and the right to data portability and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to build a more responsible, explicable and user-friendly algorithmic society.","PeriodicalId":87176,"journal":{"name":"Duke law and technology review","volume":"59 1","pages":"18-84"},"PeriodicalIF":0.0000,"publicationDate":"2017-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"80","resultStr":"{\"title\":\"Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For\",\"authors\":\"L. Edwards, Michael Veale\",\"doi\":\"10.2139/SSRN.2972855\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Algorithms, particularly of the machine learning (ML) variety, are increasingly important to individuals' lives, but have caused a range of concerns evolving mainly around unfairness, discrimination and opacity. 
Transparency in the form of a \\\"right to an explanation\\\" has emerged as a compellingly attractive remedy since it intuitively presents as a means to \\\"open the black box\\\", hence allowing individual challenge and redress, as well as potential to instil accountability to the public in ML systems. In the general furore over algorithmic bias and other issues laid out in section 2, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy to algorithmic harms, particularly in some of the core \\\"algorithmic war stories\\\" that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law is restrictive on when any explanation-related right can be triggered, and in many places is unclear, or even seems paradoxical. Second (section 4), even were some of these restrictions to be navigated, the way that explanations are conceived of legally — as \\\"meaningful information about the logic of processing\\\" — is unlikely to be provided by the kind of ML \\\"explanations\\\" computer scientists have been developing. ML explanations are restricted both by the type of explanation sought, the multi-dimensionality of the domain and the type of user seeking an explanation. However “subject-centric\\\" explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than decompositional explanations in dodging developers' worries of IP or trade secrets disclosure. As an interim conclusion then, while convinced that recent research in ML explanations shows promise, we fear that the search for a \\\"right to an explanation\\\" in the GDPR may be at best distracting, and at worst nurture a new kind of \\\"transparency fallacy\\\". However, in our final sections, we argue that other parts of the GDPR related (i) to other individual rights including the right to erasure (\\\"right to be forgotten\\\") and the right to data portability and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to build a more responsible, explicable and user-friendly algorithmic society.\",\"PeriodicalId\":87176,\"journal\":{\"name\":\"Duke law and technology review\",\"volume\":\"59 1\",\"pages\":\"18-84\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-05-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"80\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Duke law and technology review\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2139/SSRN.2972855\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Duke law and technology review","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/SSRN.2972855","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For
L. Edwards and Michael Veale
Duke Law and Technology Review, pp. 18-84. Published 23 May 2017. DOI: 10.2139/SSRN.2972855
Algorithms, particularly of the machine learning (ML) variety, are increasingly important to individuals' lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a "right to an explanation" has emerged as a compellingly attractive remedy, since it intuitively presents as a means to "open the black box", hence allowing individual challenge and redress, as well as the potential to instil public accountability in ML systems. In the general furore over algorithmic bias and other issues laid out in section 2, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy for algorithmic harms, particularly in some of the core "algorithmic war stories" that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law is restrictive on when any explanation-related right can be triggered, and in many places is unclear, or even seems paradoxical. Second (section 4), even if some of these restrictions could be navigated, explanations as conceived of legally, namely "meaningful information about the logic of processing", are unlikely to be provided by the kind of ML "explanations" computer scientists have been developing. ML explanations are restricted by the type of explanation sought, the multi-dimensionality of the domain, and the type of user seeking an explanation. However, "subject-centric" explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than decompositional explanations, which sidestep developers' worries about disclosing IP or trade secrets. As an interim conclusion, then, while convinced that recent research in ML explanations shows promise, we fear that the search for a "right to an explanation" in the GDPR may be at best distracting, and at worst may nurture a new kind of "transparency fallacy". However, in our final sections, we argue that other parts of the GDPR, relating (i) to other individual rights, including the right to erasure (the "right to be forgotten") and the right to data portability, and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may hold the seeds we can use to build a more responsible, explicable and user-friendly algorithmic society.
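The "subject-centric" and "pedagogical" explanation styles the abstract contrasts with decompositional ones can be made concrete with a short sketch. The following Python snippet is an illustration under our own assumptions, not code from the paper: it fits a simple linear surrogate to a black-box model's behaviour in the neighbourhood of a single query, in the spirit of model-agnostic tools such as LIME. All model choices and parameters here are hypothetical.

```python
# Illustrative sketch only (not the authors' method): a "subject-centric",
# pedagogical explanation. The trained model is treated as a black box and
# a simple surrogate is fitted only in the region around one query point,
# so the explanation is scoped to the decision subject's own case.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in for an opaque decision-making system.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def subject_centric_explanation(model, query, scale=0.5, n_samples=500):
    """Fit an interpretable linear surrogate to the black box's behaviour
    in a neighbourhood of a single query (the data subject's case)."""
    # Perturb the query point to sample its local neighbourhood.
    neighbourhood = query + rng.normal(0.0, scale, size=(n_samples, query.size))
    # Ask the black box for predicted probabilities: a "pedagogical"
    # approach learns from input/output behaviour alone, never inspecting
    # the model's internals (easing IP and trade-secret worries).
    probs = model.predict_proba(neighbourhood)[:, 1]
    # The surrogate's coefficients act as local feature weights,
    # valid only in this region of the model.
    return Ridge(alpha=1.0).fit(neighbourhood, probs).coef_

weights = subject_centric_explanation(black_box, X[0])
print("Local feature weights around the query:", np.round(weights, 3))
```

Because the surrogate is refitted per query, such explanations lend themselves to the interactive exploration the abstract envisages: a different data subject, or a hypothetical variation of the same case, simply yields a new local explanation.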