{"title":"Fairness, AI & recruitment","authors":"Carlotta Rigotti, Eduard Fosch-Villaronga","doi":"10.1016/j.clsr.2024.105966","DOIUrl":null,"url":null,"abstract":"<div><p>The ever-increasing adoption of AI technologies in the hiring landscape to enhance human resources efficiency raises questions about algorithmic decision-making's implications in employment, especially for job applicants, including those at higher risk of social discrimination. Among other concepts, such as transparency and accountability, fairness has become crucial in AI recruitment debates due to the potential reproduction of bias and discrimination that can disproportionately affect certain vulnerable groups. However, the ideals and ambitions of fairness may signify different meanings to various stakeholders. Conceptualizing fairness is critical because it may provide a clear benchmark for evaluating and mitigating biases, ensuring that AI systems do not perpetuate existing imbalances and promote, in this case, equitable opportunities for all candidates in the job market. To this end, in this article, we conduct a scoping literature review on fairness in AI applications for recruitment and selection purposes, with special emphasis on its definition, categorization, and practical implementation. We start by explaining how AI applications have been increasingly used in the hiring process, especially to increase the efficiency of the HR team. We then move to the limitations of this technological innovation, which is known to be at high risk of privacy violations and social discrimination. Against this backdrop, we focus on defining and operationalizing fairness in AI applications for recruitment and selection purposes through cross-disciplinary lenses. Although the applicable legal frameworks and some research currently address the issue piecemeal, we observe and welcome the emergence of some cross-disciplinary efforts aimed at tackling this multifaceted challenge. We conclude the article with some brief recommendations to guide and shape future research and action on the fairness of AI applications in the hiring process for the better.</p></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"53 ","pages":"Article 105966"},"PeriodicalIF":3.3000,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Law & Security Review","FirstCategoryId":"90","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0267364924000335","RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Abstract
The ever-increasing adoption of AI technologies in hiring to enhance human resources efficiency raises questions about the implications of algorithmic decision-making in employment, especially for job applicants, including those at higher risk of social discrimination. Alongside other concepts, such as transparency and accountability, fairness has become crucial in debates on AI recruitment because of the potential reproduction of bias and discrimination, which can disproportionately affect certain vulnerable groups. However, the ideals and ambitions of fairness may mean different things to different stakeholders. Conceptualizing fairness is critical because it can provide a clear benchmark for evaluating and mitigating biases, ensuring that AI systems do not perpetuate existing imbalances and instead promote equitable opportunities for all candidates in the job market. To this end, in this article we conduct a scoping literature review on fairness in AI applications for recruitment and selection purposes, with special emphasis on its definition, categorization, and practical implementation. We start by explaining how AI applications have been increasingly used in the hiring process, especially to increase the efficiency of HR teams. We then turn to the limitations of this technological innovation, which carries a high risk of privacy violations and social discrimination. Against this backdrop, we focus on defining and operationalizing fairness in AI applications for recruitment and selection purposes through cross-disciplinary lenses. Although the applicable legal frameworks and some existing research address the issue only piecemeal, we observe and welcome the emergence of cross-disciplinary efforts aimed at tackling this multifaceted challenge. We conclude the article with brief recommendations intended to guide and shape, for the better, future research and action on the fairness of AI applications in the hiring process.
Journal description:
CLSR publishes refereed academic and practitioner papers on topics such as Web 2.0, IT security, identity management, ID cards, RFID, interference with privacy, Internet law, telecoms regulation, online broadcasting, intellectual property, software law, e-commerce, outsourcing, data protection, EU policy, freedom of information, computer security, and many other topics. In addition, it provides a regular update on European Union developments and national news from more than 20 jurisdictions in both Europe and the Pacific Rim. It is looking for papers within the subject area that display good-quality legal analysis and new lines of legal thought or policy development that go beyond mere description of the subject area, however accurate that may be.