{"title":"算法招聘者为何歧视?数据驱动歧视的因果挑战","authors":"Christine Carter","doi":"10.1177/1023263x241248474","DOIUrl":null,"url":null,"abstract":"Automated decision-making systems are commonly used by human resources to automate recruitment decisions. Most automated decision-making systems utilize machine learning to screen, assess, and give recommendations on candidates. Algorithmic bias and prejudice are common side-effects of these technologies that result in data-driven discrimination. However, proof of this is often unavailable due to the statistical complexities and operational opacities of machine learning, which interferes with the abilities of complainants to meet the requisite causal requirements of the EU equality directives. In direct discrimination, the use of machine learning prevents complainants from demonstrating a prima facie case. In indirect discrimination, the problems mainly manifest once the burden has shifted to the respondent, and causation operates as a quasi-defence by reference to objectively justified factors unrelated to the discrimination. This paper argues that causation must be understood as an informational challenge that can be addressed in three ways. First, through the fundamental rights lens of the EU Charter of Fundamental Rights. Second, through data protection measures such as the General Data Protection Regulation. Third, the article also considers the future liabilities that may arise under incoming legislation such as the Artificial Intelligence Act and the Artificial Intelligence Liability Directive proposal.","PeriodicalId":39672,"journal":{"name":"Maastricht Journal of European and Comparative Law","volume":"29 3","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Why the algorithmic recruiter discriminates: The causal challenges of data-driven discrimination\",\"authors\":\"Christine Carter\",\"doi\":\"10.1177/1023263x241248474\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Automated decision-making systems are commonly used by human resources to automate recruitment decisions. Most automated decision-making systems utilize machine learning to screen, assess, and give recommendations on candidates. Algorithmic bias and prejudice are common side-effects of these technologies that result in data-driven discrimination. However, proof of this is often unavailable due to the statistical complexities and operational opacities of machine learning, which interferes with the abilities of complainants to meet the requisite causal requirements of the EU equality directives. In direct discrimination, the use of machine learning prevents complainants from demonstrating a prima facie case. In indirect discrimination, the problems mainly manifest once the burden has shifted to the respondent, and causation operates as a quasi-defence by reference to objectively justified factors unrelated to the discrimination. This paper argues that causation must be understood as an informational challenge that can be addressed in three ways. First, through the fundamental rights lens of the EU Charter of Fundamental Rights. Second, through data protection measures such as the General Data Protection Regulation. 
Third, the article also considers the future liabilities that may arise under incoming legislation such as the Artificial Intelligence Act and the Artificial Intelligence Liability Directive proposal.\",\"PeriodicalId\":39672,\"journal\":{\"name\":\"Maastricht Journal of European and Comparative Law\",\"volume\":\"29 3\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Maastricht Journal of European and Comparative Law\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/1023263x241248474\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Maastricht Journal of European and Comparative Law","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/1023263x241248474","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Social Sciences","Score":null,"Total":0}
Why the algorithmic recruiter discriminates: The causal challenges of data-driven discrimination
Automated decision-making systems are commonly used by human resources departments to automate recruitment decisions. Most of these systems use machine learning to screen, assess, and make recommendations on candidates. Algorithmic bias and prejudice are common side effects of these technologies and result in data-driven discrimination. Proof of such discrimination is often unavailable, however, because the statistical complexity and operational opacity of machine learning interfere with complainants' ability to meet the causal requirements of the EU equality directives. In direct discrimination, the use of machine learning prevents complainants from establishing a prima facie case. In indirect discrimination, the problems manifest mainly once the burden of proof has shifted to the respondent, where causation operates as a quasi-defence by reference to objectively justified factors unrelated to the discrimination. This paper argues that causation must be understood as an informational challenge that can be addressed in three ways: first, through the fundamental rights lens of the EU Charter of Fundamental Rights; second, through data protection measures such as the General Data Protection Regulation; and third, through the future liabilities that may arise under incoming legislation such as the Artificial Intelligence Act and the proposed Artificial Intelligence Liability Directive.