Learning fair prediction models with an imputed sensitive variable: Empirical studies

Communications for Statistical Applications and Methods · IF 0.5 · Q4 (Statistics & Probability) · Pub Date: 2022-03-31 · DOI: 10.29220/csam.2022.29.2.251
Yongdai Kim, Hwichang Jeong
{"title":"Learning fair prediction models with an imputed sensitive variable: Empirical studies","authors":"Yongdai Kim, Hwichang Jeong","doi":"10.29220/csam.2022.29.2.251","DOIUrl":null,"url":null,"abstract":"As AI has a wide range of influence on human social life, issues of transparency and ethics of AI are emerg-ing. In particular, it is widely known that due to the existence of historical bias in data against ethics or regulatory frameworks for fairness, trained AI models based on such biased data could also impose bias or unfairness against a certain sensitive group (e.g., non-white, women). Demographic disparities due to AI, which refer to socially unacceptable bias that an AI model favors certain groups (e.g., white, men) over other groups (e.g., black, women), have been observed frequently in many applications of AI and many studies have been done recently to develop AI algorithms which remove or alleviate such demographic disparities in trained AI models. In this paper, we consider a problem of using the information in the sensitive variable for fair prediction when using the sensitive variable as a part of input variables is prohibitive by laws or regulations to avoid unfairness. As a way of reflecting the information in the sensitive variable to prediction, we consider a two-stage procedure. First, the sensitive variable is fully included in the learning phase to have a prediction model depending on the sensitive variable, and then an imputed sensitive variable is used in the prediction phase. The aim of this paper is to evaluate this procedure by analyzing several benchmark datasets. We illustrate that using an imputed sensitive variable is helpful to improve prediction accuracies without hampering the degree of fairness much.","PeriodicalId":44931,"journal":{"name":"Communications for Statistical Applications and Methods","volume":" ","pages":""},"PeriodicalIF":0.5000,"publicationDate":"2022-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Communications for Statistical Applications and Methods","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.29220/csam.2022.29.2.251","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"STATISTICS & PROBABILITY","Score":null,"Total":0}
Citations: 1

Abstract

As AI has a wide-ranging influence on human social life, issues of transparency and ethics in AI are emerging. In particular, it is widely known that, because data may contain historical bias that violates ethical or regulatory frameworks for fairness, AI models trained on such biased data can also impose bias or unfairness on certain sensitive groups (e.g., non-white people, women). Demographic disparities due to AI, which refer to the socially unacceptable bias whereby an AI model favors certain groups (e.g., white people, men) over others (e.g., Black people, women), have been observed frequently in many applications of AI, and many recent studies have developed AI algorithms that remove or alleviate such demographic disparities in trained AI models. In this paper, we consider the problem of using the information in the sensitive variable for fair prediction when including the sensitive variable among the input variables is prohibited by laws or regulations intended to prevent unfairness. As a way of reflecting the information in the sensitive variable in prediction, we consider a two-stage procedure. First, the sensitive variable is fully included in the learning phase so that the prediction model depends on it; then, an imputed sensitive variable is used in the prediction phase. The aim of this paper is to evaluate this procedure by analyzing several benchmark datasets. We illustrate that using an imputed sensitive variable helps improve prediction accuracy without much hampering the degree of fairness.
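The two-stage procedure described above can be made concrete with a short sketch. The following Python example is illustrative only: the paper's actual models, benchmark datasets, and fairness criteria are not specified here, so it assumes scikit-learn logistic regressions on synthetic data and uses the demographic parity gap as one common (assumed) disparity measure.

```python
# Minimal sketch of the two-stage procedure, under the assumptions stated
# above (not the paper's exact implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))                                    # non-sensitive input variables
s = (X[:, 0] + rng.normal(size=n) > 0).astype(int)             # sensitive variable (synthetic)
y = (X[:, 1] + 0.5 * s + rng.normal(size=n) > 0).astype(int)   # binary label (synthetic)

X_tr, X_te, s_tr, s_te, y_tr, y_te = train_test_split(
    X, s, y, test_size=0.3, random_state=0
)

# Stage 1 (learning phase): the sensitive variable is fully included,
# so the fitted prediction model depends on s.
model = LogisticRegression().fit(np.column_stack([X_tr, s_tr]), y_tr)

# An imputer learns to predict the sensitive variable from the other inputs.
imputer = LogisticRegression().fit(X_tr, s_tr)

# Stage 2 (prediction phase): the true s is unavailable or prohibited,
# so the imputed sensitive variable is plugged in instead.
s_hat = imputer.predict(X_te)
y_pred = model.predict(np.column_stack([X_te, s_hat]))

print("accuracy with imputed s:", (y_pred == y_te).mean())

# Demographic parity gap: the difference in positive-prediction rates
# between the two sensitive groups (one common disparity measure; the
# paper may evaluate fairness differently).
gap = abs(y_pred[s_te == 1].mean() - y_pred[s_te == 0].mean())
print("demographic parity gap:", gap)
```

The key point the abstract makes is that the true sensitive variable enters only the learning phase; at prediction time only the imputation s_hat is fed to the model, and the empirical question is how much accuracy this recovers without much hampering fairness.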
Source Journal
CiteScore: 0.90
Self-citation rate: 0.00%
Articles published: 49
Journal Introduction
Communications for Statistical Applications and Methods (Commun. Stat. Appl. Methods, CSAM) is an official journal of the Korean Statistical Society and the Korean International Statistical Society. It is an international, open-access journal dedicated to publishing peer-reviewed, high-quality, and innovative statistical research. CSAM publishes articles on applied and methodological research in the areas of statistics and probability, featuring rapid publication and broad coverage of statistical applications and methods. It welcomes papers on novel applications of statistical methodology in areas including medicine (pharmaceutical, biotechnology, medical devices), business, management, economics, ecology, education, computing, engineering, operational research, biology, sociology, and earth science, but papers from other areas are also considered.
Latest Articles in This Journal
Influence diagnostics for skew-t censored linear regression models
Identification of indirect effects in the two-condition within-subject mediation model and its implementation using SEM
Robust extreme quantile estimation for Pareto-type tails through an exponential regression model
Two-stage imputation method to handle missing data for categorical response variable
Counterfactual image generation by disentangling data attributes with deep generative models