{"title":"工作中的算法歧视","authors":"Aislinn Kelly-Lyth","doi":"10.1177/20319525231167300","DOIUrl":null,"url":null,"abstract":"The potential for algorithms to discriminate is now well-documented, and algorithmic management tools are no exception. Scholars have been quick to point to gaps in the equality law framework, but existing European law is remarkably robust. Where gaps do exist, they largely predate algorithmic decision-making. Careful judicial reasoning can resolve what appear to be novel legal issues; and policymakers should seek to reinforce European equality law, rather than reform it. This article disentangles some of the knottiest questions on the application of the prohibition on direct and indirect discrimination to algorithmic management, from how the law should deal with arguments that algorithms are ‘more accurate’ or ‘less biased’ than human decision-makers, to the attribution of liability in the employment context. By identifying possible routes for judicial resolution, the article demonstrates the adaptable nature of existing legal obligations. The duty to make reasonable accommodations in the disability context is also examined, and options for combining top-level and individualised adjustments are explored. The article concludes by turning to enforceability. Algorithmic discrimination gives rise to a concerning paradox: on the one hand, automating previously human decision-making processes can render discriminatory criteria more traceable and outcomes more quantifiable. On the other hand, algorithmic decision-making processes are rarely transparent, and scholars consistently point to algorithmic opacity as the key barrier to litigation and enforcement action. 
Judicial and legislative routes to greater transparency are explored.","PeriodicalId":41157,"journal":{"name":"European Labour Law Journal","volume":null,"pages":null},"PeriodicalIF":1.1000,"publicationDate":"2023-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Algorithmic discrimination at work\",\"authors\":\"Aislinn Kelly-Lyth\",\"doi\":\"10.1177/20319525231167300\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The potential for algorithms to discriminate is now well-documented, and algorithmic management tools are no exception. Scholars have been quick to point to gaps in the equality law framework, but existing European law is remarkably robust. Where gaps do exist, they largely predate algorithmic decision-making. Careful judicial reasoning can resolve what appear to be novel legal issues; and policymakers should seek to reinforce European equality law, rather than reform it. This article disentangles some of the knottiest questions on the application of the prohibition on direct and indirect discrimination to algorithmic management, from how the law should deal with arguments that algorithms are ‘more accurate’ or ‘less biased’ than human decision-makers, to the attribution of liability in the employment context. By identifying possible routes for judicial resolution, the article demonstrates the adaptable nature of existing legal obligations. The duty to make reasonable accommodations in the disability context is also examined, and options for combining top-level and individualised adjustments are explored. The article concludes by turning to enforceability. Algorithmic discrimination gives rise to a concerning paradox: on the one hand, automating previously human decision-making processes can render discriminatory criteria more traceable and outcomes more quantifiable. 
On the other hand, algorithmic decision-making processes are rarely transparent, and scholars consistently point to algorithmic opacity as the key barrier to litigation and enforcement action. Judicial and legislative routes to greater transparency are explored.\",\"PeriodicalId\":41157,\"journal\":{\"name\":\"European Labour Law Journal\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2023-04-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European Labour Law Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/20319525231167300\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"LAW\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Labour Law Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/20319525231167300","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"LAW","Score":null,"Total":0}
The potential for algorithms to discriminate is now well-documented, and algorithmic management tools are no exception. Scholars have been quick to point to gaps in the equality law framework, but existing European law is remarkably robust. Where gaps do exist, they largely predate algorithmic decision-making. Careful judicial reasoning can resolve what appear to be novel legal issues, and policymakers should seek to reinforce European equality law rather than reform it. This article disentangles some of the knottiest questions on the application of the prohibition on direct and indirect discrimination to algorithmic management, from how the law should deal with arguments that algorithms are ‘more accurate’ or ‘less biased’ than human decision-makers, to the attribution of liability in the employment context. By identifying possible routes for judicial resolution, the article demonstrates the adaptable nature of existing legal obligations. The duty to make reasonable accommodations in the disability context is also examined, and options for combining top-level and individualised adjustments are explored. The article concludes by turning to enforceability. Algorithmic discrimination gives rise to a concerning paradox: on the one hand, automating previously human decision-making processes can render discriminatory criteria more traceable and outcomes more quantifiable. On the other hand, algorithmic decision-making processes are rarely transparent, and scholars consistently point to algorithmic opacity as the key barrier to litigation and enforcement action. Judicial and legislative routes to greater transparency are explored.