Addressing diversity in hiring procedures: a generative adversarial network approach

Tales Marra, Emeric Kubiak
AI and Ethics 5(2): 1381–1405. Published 2024-05-02. DOI: 10.1007/s43681-024-00445-2
https://link.springer.com/article/10.1007/s43681-024-00445-2

Abstract

The combination of machine learning and organizational psychology has led to innovative methods for addressing the diversity-validity dilemma in personnel selection: the tradeoff between selecting valid predictors of job performance and minimizing adverse impact. Recent technological advancements provide new strategies to mitigate gender bias while preserving the ability to predict job performance accurately. Our research introduces a novel framework consisting of three blocks: a gating block that filters user data, a bias measurement block that uses an adversarial network to detect gender bias, and a feature importance block that identifies and removes biased features that do not contribute to performance prediction. We applied this model architecture to both simulated datasets and real-world hiring scenarios, with a particular emphasis on personality-based algorithms, aiming to refine hiring predictive models so that they are gender fair and meet EEOC standards. In simulated environments, 70% of the predictive models saw their impact ratio improve, moving 22.73% closer to the ideal ratio while incurring only a slight 4.16% decrease in performance predictability. Real-world data testing yielded similar improvements, with 71% of the models showing an increased impact ratio, 18.8% closer to the ideal, and a 2.18% increase in predictive accuracy for job performance. The findings suggest that the application of neural networks can be an effective strategy for enhancing fairness in hiring practices with only minimal loss in predictive accuracy. Future research directions should explore the refinement of these models and the implications of their deployment in high-stakes hiring environments.
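The impact ratio the abstract optimizes is the standard adverse-impact metric from EEOC guidance: the ratio of the protected group's selection rate to the reference group's, with the four-fifths (0.8) rule as the compliance threshold. A minimal sketch of how it is computed (function and variable names are illustrative, not taken from the paper):

```python
def selection_rate(selected, group, label):
    """Fraction of applicants in the given group who were selected.

    selected: list of 1/0 hiring decisions
    group: list of group labels, aligned with `selected`
    """
    decisions = [s for s, g in zip(selected, group) if g == label]
    return sum(decisions) / len(decisions)

def impact_ratio(selected, group, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. EEOC's four-fifths rule flags adverse impact when this
    falls below 0.8; a ratio of 1.0 is the ideal the paper targets."""
    return (selection_rate(selected, group, protected)
            / selection_rate(selected, group, reference))

# Toy example: 10 applicants, 1 = hired, 0 = rejected.
selected = [1, 0, 1, 0, 0, 1, 1, 1, 0, 1]
group = ["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"]

# F selection rate 2/5 = 0.4, M selection rate 4/5 = 0.8.
ratio = impact_ratio(selected, group, protected="F", reference="M")
adverse_impact = ratio < 0.8  # four-fifths rule violated here
```

In the paper's framework, this ratio is the fairness objective the adversarial bias-measurement and feature-removal blocks push toward 1.0 while monitoring the loss in performance predictability.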
