Algorithmic discrimination at work

IF 1.1 · Q2 (LAW) · European Labour Law Journal · Pub Date: 2023-04-02 · DOI: 10.1177/20319525231167300
Aislinn Kelly-Lyth
{"title":"Algorithmic discrimination at work","authors":"Aislinn Kelly-Lyth","doi":"10.1177/20319525231167300","DOIUrl":null,"url":null,"abstract":"The potential for algorithms to discriminate is now well-documented, and algorithmic management tools are no exception. Scholars have been quick to point to gaps in the equality law framework, but existing European law is remarkably robust. Where gaps do exist, they largely predate algorithmic decision-making. Careful judicial reasoning can resolve what appear to be novel legal issues; and policymakers should seek to reinforce European equality law, rather than reform it. This article disentangles some of the knottiest questions on the application of the prohibition on direct and indirect discrimination to algorithmic management, from how the law should deal with arguments that algorithms are ‘more accurate’ or ‘less biased’ than human decision-makers, to the attribution of liability in the employment context. By identifying possible routes for judicial resolution, the article demonstrates the adaptable nature of existing legal obligations. The duty to make reasonable accommodations in the disability context is also examined, and options for combining top-level and individualised adjustments are explored. The article concludes by turning to enforceability. Algorithmic discrimination gives rise to a concerning paradox: on the one hand, automating previously human decision-making processes can render discriminatory criteria more traceable and outcomes more quantifiable. On the other hand, algorithmic decision-making processes are rarely transparent, and scholars consistently point to algorithmic opacity as the key barrier to litigation and enforcement action. Judicial and legislative routes to greater transparency are explored.","PeriodicalId":41157,"journal":{"name":"European Labour Law Journal","volume":null,"pages":null},"PeriodicalIF":1.1000,"publicationDate":"2023-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Labour Law Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/20319525231167300","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"LAW","Score":null,"Total":0}
Citations: 0

Abstract

The potential for algorithms to discriminate is now well-documented, and algorithmic management tools are no exception. Scholars have been quick to point to gaps in the equality law framework, but existing European law is remarkably robust. Where gaps do exist, they largely predate algorithmic decision-making. Careful judicial reasoning can resolve what appear to be novel legal issues; and policymakers should seek to reinforce European equality law, rather than reform it. This article disentangles some of the knottiest questions on the application of the prohibition on direct and indirect discrimination to algorithmic management, from how the law should deal with arguments that algorithms are ‘more accurate’ or ‘less biased’ than human decision-makers, to the attribution of liability in the employment context. By identifying possible routes for judicial resolution, the article demonstrates the adaptable nature of existing legal obligations. The duty to make reasonable accommodations in the disability context is also examined, and options for combining top-level and individualised adjustments are explored. The article concludes by turning to enforceability. Algorithmic discrimination gives rise to a concerning paradox: on the one hand, automating previously human decision-making processes can render discriminatory criteria more traceable and outcomes more quantifiable. On the other hand, algorithmic decision-making processes are rarely transparent, and scholars consistently point to algorithmic opacity as the key barrier to litigation and enforcement action. Judicial and legislative routes to greater transparency are explored.
Source journal: European Labour Law Journal
CiteScore: 1.60
Self-citation rate: 28.60%
Publication volume: 29
Latest articles in this journal:
- Anti-discrimination cases decided by the Court of Justice of the EU in 2023
- Resocialisation through prisoner remuneration: The unconstitutionally low remuneration of working prisoners in Germany
- Work in prison: Reintegration or exclusion and exploitation?
- Beyond profit: A model framework for ethical and feasible private prison labour
- Minding the gap? Blind spots in the ILO's and the EU's perspective on anti-forced labour policy