Why the algorithmic recruiter discriminates: The causal challenges of data-driven discrimination

Christine Carter
{"title":"Why the algorithmic recruiter discriminates: The causal challenges of data-driven discrimination","authors":"Christine Carter","doi":"10.1177/1023263x241248474","DOIUrl":null,"url":null,"abstract":"Automated decision-making systems are commonly used by human resources to automate recruitment decisions. Most automated decision-making systems utilize machine learning to screen, assess, and give recommendations on candidates. Algorithmic bias and prejudice are common side-effects of these technologies that result in data-driven discrimination. However, proof of this is often unavailable due to the statistical complexities and operational opacities of machine learning, which interferes with the abilities of complainants to meet the requisite causal requirements of the EU equality directives. In direct discrimination, the use of machine learning prevents complainants from demonstrating a prima facie case. In indirect discrimination, the problems mainly manifest once the burden has shifted to the respondent, and causation operates as a quasi-defence by reference to objectively justified factors unrelated to the discrimination. This paper argues that causation must be understood as an informational challenge that can be addressed in three ways. First, through the fundamental rights lens of the EU Charter of Fundamental Rights. Second, through data protection measures such as the General Data Protection Regulation. Third, the article also considers the future liabilities that may arise under incoming legislation such as the Artificial Intelligence Act and the Artificial Intelligence Liability Directive proposal.","PeriodicalId":39672,"journal":{"name":"Maastricht Journal of European and Comparative Law","volume":"29 3","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Maastricht Journal of European and Comparative Law","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/1023263x241248474","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Social Sciences","Score":null,"Total":0}
Citations: 0

Abstract

Automated decision-making systems are commonly used by human resources departments to automate recruitment decisions. Most of these systems use machine learning to screen, assess, and make recommendations on candidates. Algorithmic bias and prejudice are common side-effects of these technologies and result in data-driven discrimination. Proof of such discrimination is often unavailable, however, because the statistical complexity and operational opacity of machine learning interfere with complainants' ability to meet the causal requirements of the EU equality directives. In direct discrimination, the use of machine learning prevents complainants from establishing a prima facie case. In indirect discrimination, the problems mainly manifest once the burden has shifted to the respondent, where causation operates as a quasi-defence by reference to objectively justified factors unrelated to the discrimination. This paper argues that causation must be understood as an informational challenge that can be addressed in three ways: first, through the fundamental rights lens of the EU Charter of Fundamental Rights; second, through data protection measures such as the General Data Protection Regulation; and third, through the future liabilities that may arise under incoming legislation such as the Artificial Intelligence Act and the proposed Artificial Intelligence Liability Directive.
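To make the evidentiary problem concrete, here is a minimal, hypothetical Python sketch (not taken from the paper; all variable names and parameter values are illustrative assumptions). A screening model is trained on synthetic historical hiring decisions that were biased against one group. The protected attribute is withheld from the model, but a correlated stand-in feature (proxy, imagined here as something like postcode or career-gap history) lets the bias carry through, and the adverse-impact ("four-fifths") ratio computed at the end is one statistic a complainant might use toward a prima facie case.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (group 0 / group 1); never given to the model.
group = rng.integers(0, 2, size=n)
# Genuine qualification signal, identically distributed across groups.
skill = rng.normal(0.0, 1.0, size=n)
# Facially neutral feature that happens to correlate with group membership.
proxy = group + rng.normal(0.0, 0.5, size=n)
# Historical hiring decisions were biased against group 1.
past_hired = (skill - group + rng.normal(0.0, 0.5, size=n)) > 0

# Train only on the facially neutral features, then re-screen everyone.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, past_hired)
recommended = model.predict(X)

# Selection rate per group and the four-fifths (adverse impact) ratio.
rate_0 = recommended[group == 0].mean()
rate_1 = recommended[group == 1].mean()
print(f"selection rate, group 0: {rate_0:.2%}")
print(f"selection rate, group 1: {rate_1:.2%}")
print(f"adverse-impact ratio (1/0): {rate_1 / rate_0:.2f}")  # below 0.8 flags disparity

The disparity in this sketch is only visible with group-level outcome data, which is precisely the informational asymmetry the paper identifies: a complainant ordinarily has no access to the model, its input features, or aggregate selection rates.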
Source journal metrics
CiteScore: 2.00
Self-citation rate: 0.00%
Articles published: 27
Latest articles in this journal:
- Non-contractual liability of the EU: Need for a ‘diligent’ administrator test
- The European Arrest Warrant and the protection of the best interests of the child: The Court's last word on the limits of mutual recognition and the evolving obligations of national judicial authorities
- OP v. Commune d’Ans: When equality, intersectionality and state neutrality collide
- DPA independence and ‘indirect’ access – illusory in Belgium, France and Germany?
- Chilling effect: Turning the poison into an antidote for fundamental rights in Europe