EXPRESS: Goal Orientation for Fair Machine Learning Algorithms

Impact Factor: 4.8 · CAS Tier 3 (Management) · JCR Q1 (Engineering, Manufacturing) · Production and Operations Management · Publication date: 2024-02-14 · DOI: 10.1177/10591478241234998
Heng Xu, Nan Zhang
{"title":"EXPRESS: Goal Orientation for Fair Machine Learning Algorithms","authors":"Heng Xu, Nan Zhang","doi":"10.1177/10591478241234998","DOIUrl":null,"url":null,"abstract":"A key challenge facing the use of Machine Learning (ML) in organizational selection settings (e.g., the pro­cessing of loan or job applications) is the potential bias against (racial and gender) minorities. To address this challenge, a rich literature of Fairness-Aware ML (FAML) algorithms has emerged, attempting to ameliorate biases while main­taining the predictive accuracy of ML algorithms. Almost all existing FAML algorithms define their optimization goals according to a selection task, meaning that ML outputs are assumed to be the final selection outcome. In practice, though, ML outputs are rarely used as-is. In personnel selection, for example, ML often serves a support role to human resource managers, allowing them to more easily exclude unqualified applicants. This effectively assigns to ML a screening rather than selection task. it might be tempting to treat selection and screening as two variations of the same task that differ only quantitatively on the admission rate. This paper, however, reveals a qualitative difference between the two in terms of fairness. Specifically, we demonstrate through conceptual development and mathematical analysis that mis-categorizing a screening task as a selection one could not only degrade final selection quality but result in fairness problems such as selection biases within the minority group. After validating our findings with experimental studies on simulated and real-world data, we discuss several business and policy implications, highlighting the need for firms and policymakers to properly categorize the task assigned to ML in assessing and correcting algorithmic biases.","PeriodicalId":20623,"journal":{"name":"Production and Operations Management","volume":null,"pages":null},"PeriodicalIF":4.8000,"publicationDate":"2024-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Production and Operations Management","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1177/10591478241234998","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MANUFACTURING","Score":null,"Total":0}
Citations: 0

Abstract

A key challenge facing the use of Machine Learning (ML) in organizational selection settings (e.g., the processing of loan or job applications) is the potential bias against (racial and gender) minorities. To address this challenge, a rich literature of Fairness-Aware ML (FAML) algorithms has emerged, attempting to ameliorate biases while maintaining the predictive accuracy of ML algorithms. Almost all existing FAML algorithms define their optimization goals according to a selection task, meaning that ML outputs are assumed to be the final selection outcome. In practice, though, ML outputs are rarely used as-is. In personnel selection, for example, ML often serves in a support role to human resource managers, allowing them to more easily exclude unqualified applicants. This effectively assigns to ML a screening rather than a selection task. It might be tempting to treat selection and screening as two variations of the same task that differ only quantitatively in the admission rate. This paper, however, reveals a qualitative difference between the two in terms of fairness. Specifically, we demonstrate through conceptual development and mathematical analysis that mis-categorizing a screening task as a selection one can not only degrade final selection quality but also result in fairness problems such as selection biases within the minority group. After validating our findings with experimental studies on simulated and real-world data, we discuss several business and policy implications, highlighting the need for firms and policymakers to properly categorize the task assigned to ML when assessing and correcting algorithmic biases.
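To make the selection-versus-screening distinction concrete, below is a minimal simulation sketch. It is not drawn from the paper: the score model, the bias term, the demographic-parity rule, and the downstream human decision rule are all illustrative assumptions. It shows how a fairness constraint applied as if the ML output were the final selection need not carry through when that output is in fact only a shortlist from which a human then selects.

```python
# Hypothetical sketch: compare "ML as final selection" vs. "ML as screening stage".
# All numbers and mechanisms below are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
minority = rng.random(n) < 0.2                 # assumed 20% minority share
true_quality = rng.normal(0.0, 1.0, n)         # latent applicant quality
# Assumed measurement bias: scores slightly understate minority applicants' quality.
ml_score = true_quality - 0.3 * minority + rng.normal(0.0, 0.5, n)

def parity_shortlist(score, group, rate):
    """Admit the same fraction `rate` within each group (demographic parity)."""
    keep = np.zeros(score.size, dtype=bool)
    for g in (False, True):
        idx = np.where(group == g)[0]
        k = int(rate * idx.size)
        if k > 0:
            keep[idx[np.argsort(score[idx])[-k:]]] = True
    return keep

final_rate = 0.05    # fraction ultimately hired
screen_rate = 0.25   # fraction shortlisted when ML plays a screening role

# Case 1: ML output treated as the final selection (parity enforced at 5%).
selected_direct = parity_shortlist(ml_score, minority, final_rate)

# Case 2: ML screens at 25% with parity, then a "human" picks the top 5% of the
# pool from the shortlist -- modeled here, as an assumption, by re-ranking the
# shortlist on the same (biased) score.
shortlist = parity_shortlist(ml_score, minority, screen_rate)
short_idx = np.where(shortlist)[0]
k_final = int(final_rate * n)
selected_screen = np.zeros(n, dtype=bool)
selected_screen[short_idx[np.argsort(ml_score[short_idx])[-k_final:]]] = True

for name, sel in [("ML as selection", selected_direct),
                  ("ML as screening", selected_screen)]:
    print(f"{name}: minority hire rate {sel[minority].mean():.3f}, "
          f"majority hire rate {sel[~minority].mean():.3f}, "
          f"mean true quality of hires {true_quality[sel].mean():.2f}")
```

With these made-up numbers, Case 1 equalizes hire rates by construction, whereas in Case 2 the minority hire rate typically falls below the 5% target because the downstream decision re-ranks the pooled shortlist on the same biased score. This only illustrates the kind of goal mismatch the abstract describes; it does not reproduce the paper's analysis.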
Source Journal

Production and Operations Management (Management Science - Engineering: Manufacturing)
CiteScore: 7.50
Self-citation rate: 16.00%
Articles published: 278
Review time: 24 months

Journal description: The mission of Production and Operations Management is to serve as the flagship research journal in operations management in manufacturing and services. The journal publishes scientific research into the problems, interests, and concerns of managers who manage product and process design, operations, and supply chains. It covers all topics in product and process design, operations, and supply chain management and welcomes papers using any research paradigm.
Latest articles from this journal

- EXPRESS: Execution Failures in Retail Supply Chains – A Virtual Reality Experiment
- EXPRESS: Can Social Technologies Drive Purchases in E-Commerce Live Streaming? An Empirical Study of Broadcasters' Cognitive and Affective Social Call-To-Actions
- EXPRESS: Framing Inclusive Practice Options for Financial, Operational, and Community Outcomes
- EXPRESS: Impact of Capacity Strain on the Health Status of Patients Discharged from an ICU
- EXPRESS: E-tailer's Inventory Location and Pricing with Strategic Consumers