Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

L. Edwards, Michael Veale
{"title":"Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For","authors":"L. Edwards, Michael Veale","doi":"10.2139/SSRN.2972855","DOIUrl":null,"url":null,"abstract":"Algorithms, particularly of the machine learning (ML) variety, are increasingly important to individuals' lives, but have caused a range of concerns evolving mainly around unfairness, discrimination and opacity. Transparency in the form of a \"right to an explanation\" has emerged as a compellingly attractive remedy since it intuitively presents as a means to \"open the black box\", hence allowing individual challenge and redress, as well as potential to instil accountability to the public in ML systems. In the general furore over algorithmic bias and other issues laid out in section 2, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy to algorithmic harms, particularly in some of the core \"algorithmic war stories\" that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law is restrictive on when any explanation-related right can be triggered, and in many places is unclear, or even seems paradoxical. Second (section 4), even were some of these restrictions to be navigated, the way that explanations are conceived of legally — as \"meaningful information about the logic of processing\" — is unlikely to be provided by the kind of ML \"explanations\" computer scientists have been developing. ML explanations are restricted both by the type of explanation sought, the multi-dimensionality of the domain and the type of user seeking an explanation. However “subject-centric\" explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than decompositional explanations in dodging developers' worries of IP or trade secrets disclosure. As an interim conclusion then, while convinced that recent research in ML explanations shows promise, we fear that the search for a \"right to an explanation\" in the GDPR may be at best distracting, and at worst nurture a new kind of \"transparency fallacy\". However, in our final sections, we argue that other parts of the GDPR related (i) to other individual rights including the right to erasure (\"right to be forgotten\") and the right to data portability and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to build a more responsible, explicable and user-friendly algorithmic society.","PeriodicalId":87176,"journal":{"name":"Duke law and technology review","volume":"59 1","pages":"18-84"},"PeriodicalIF":0.0000,"publicationDate":"2017-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"80","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Duke law and technology review","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/SSRN.2972855","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 80

Abstract

Algorithms, particularly of the machine learning (ML) variety, are increasingly important to individuals' lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a "right to an explanation" has emerged as a compellingly attractive remedy, since it intuitively presents as a means to "open the black box", allowing individual challenge and redress as well as the potential to instil public accountability in ML systems. Amid the general furore over algorithmic bias and the other issues laid out in section 2, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy for algorithmic harms, particularly in some of the core "algorithmic war stories" that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law is restrictive about when any explanation-related right can be triggered, and in many places is unclear or even seems paradoxical. Second (section 4), even were some of these restrictions navigated, the way explanations are conceived of legally, as "meaningful information about the logic of processing", is unlikely to be matched by the kind of ML "explanations" computer scientists have been developing. ML explanations are constrained by the type of explanation sought, the multi-dimensionality of the domain, and the type of user seeking an explanation. However, "subject-centric" explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than decompositional explanations, which sidestep developers' worries about IP or trade-secret disclosure. As an interim conclusion, then, while convinced that recent research on ML explanations shows promise, we fear that the search for a "right to an explanation" in the GDPR may be at best distracting and at worst nurture a new kind of "transparency fallacy". In our final sections, however, we argue that other parts of the GDPR, relating (i) to other individual rights, including the right to erasure ("right to be forgotten") and the right to data portability, and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may hold the seeds of a more responsible, explicable and user-friendly algorithmic society.
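The "subject-centric" and pedagogical explanation styles the abstract contrasts with decompositional ones can be illustrated with a local surrogate model, in the spirit of methods such as LIME: instead of exposing the model's internals, a simple interpretable model is fitted to the black box's input/output behaviour in a small region around one data subject's query. The sketch below is purely illustrative and not from the paper; the choice of models, the feature names, and the neighbourhood parameters are all assumptions.

```python
# Minimal sketch of a "subject-centric", pedagogical explanation:
# a shallow surrogate is fitted only to the black box's behaviour
# near one query point. Illustrative only; not the paper's method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Stand-in "black box": an opaque model a data controller might deploy.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# The data subject's record: the query the explanation is centred on.
query = X[0]

# Probe the black box only in a neighbourhood of the query.
# Pedagogical: we learn from observed predictions alone, never from
# the model's internals, so no internals need be disclosed.
neighbourhood = query + rng.normal(scale=0.3, size=(1000, query.shape[0]))
labels = black_box.predict(neighbourhood)

# Fit a shallow, human-readable surrogate to that local behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(neighbourhood, labels)

# The surrogate's rules stand in for the "explanation" offered
# to the subject; they are valid only near the query point.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```

Because the surrogate is learned purely from observed predictions (pedagogical) and is only claimed to hold near the query (subject-centric), it can give a data subject a readable account of a single decision without disclosing the underlying model, which is part of its appeal against trade-secret objections.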