Impact on bias mitigation algorithms to variations in inferred sensitive attribute uncertainty.

Frontiers in Artificial Intelligence (IF 4.7, Q2 in Computer Science, Artificial Intelligence) · Pub Date: 2025-03-06 · eCollection Date: 2025-01-01 · DOI: 10.3389/frai.2025.1520330
Yanchen Wang, Lisa Singh
{"title":"Impact on bias mitigation algorithms to variations in inferred sensitive attribute uncertainty.","authors":"Yanchen Wang, Lisa Singh","doi":"10.3389/frai.2025.1520330","DOIUrl":null,"url":null,"abstract":"<p><p>Concerns about the trustworthiness, fairness, and privacy of AI systems are growing, and strategies for mitigating these concerns are still in their infancy. One approach to improve trustworthiness and fairness in AI systems is to use bias mitigation algorithms. However, most bias mitigation algorithms require data sets that contain sensitive attribute values to assess the fairness of the algorithm. A growing number of real world data sets do not make sensitive attribute information readily available to researchers. One solution is to infer the missing sensitive attribute information and apply an existing bias mitigation algorithm using this inferred knowledge. While researchers are beginning to explore this question, it is still unclear how robust existing bias mitigation algorithms are to different levels of inference accuracy. This paper explores this question by investigating the impact of different levels of accuracy of the inferred sensitive attribute on the performance of different bias mitigation strategies. We generate variation in sensitive attribute accuracy using both simulation and construction of neural models for the inference task. We then assess the quality of six bias mitigation algorithms that are deployed across different parts of our learning life cycle: pre-processing, in-processing, and post-processing. We find that the disparate impact remover is the least sensitive bias mitigation strategy and that if we apply the bias mitigation algorithms using an inferred sensitive attribute with reasonable accuracy, the fairness scores are higher than the best standard model and the balanced accuracy is similar to that of the standard model. These findings open the door for improving fairness of black box AI systems using some bias mitigation strategies.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1520330"},"PeriodicalIF":4.7000,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11924408/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frai.2025.1520330","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Concerns about the trustworthiness, fairness, and privacy of AI systems are growing, and strategies for mitigating these concerns are still in their infancy. One approach to improving trustworthiness and fairness in AI systems is to use bias mitigation algorithms. However, most bias mitigation algorithms require data sets that contain sensitive attribute values in order to assess the fairness of the algorithm. A growing number of real-world data sets do not make sensitive attribute information readily available to researchers. One solution is to infer the missing sensitive attribute information and apply an existing bias mitigation algorithm using this inferred knowledge. While researchers are beginning to explore this question, it is still unclear how robust existing bias mitigation algorithms are to different levels of inference accuracy. This paper explores this question by investigating the impact of different levels of accuracy of the inferred sensitive attribute on the performance of different bias mitigation strategies. We generate variation in sensitive attribute accuracy using both simulation and the construction of neural models for the inference task. We then assess the quality of six bias mitigation algorithms deployed across different stages of the learning life cycle: pre-processing, in-processing, and post-processing. We find that the disparate impact remover is the least sensitive bias mitigation strategy, and that if we apply the bias mitigation algorithms using an inferred sensitive attribute with reasonable accuracy, the fairness scores are higher than those of the best standard model while the balanced accuracy is similar to that of the standard model. These findings open the door to improving the fairness of black-box AI systems using some bias mitigation strategies.
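To make the simulation idea concrete, here is a minimal sketch (Python with NumPy; not the authors' code) of the core experimental lever the abstract describes: corrupting a true binary sensitive attribute at controlled accuracy levels and watching how a standard fairness score, disparate impact, drifts. The synthetic data and all names are illustrative assumptions.

```python
# Sketch: how inference error in a sensitive attribute distorts a fairness score.
# Synthetic data and all names are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Ground-truth binary sensitive attribute, and model predictions that
# favor the privileged group (a = 1) so a real disparity exists.
a_true = rng.integers(0, 2, size=n)
y_pred = rng.binomial(1, np.where(a_true == 1, 0.6, 0.4))

def disparate_impact(y_pred, a):
    """P(y_hat = 1 | unprivileged) / P(y_hat = 1 | privileged)."""
    return y_pred[a == 0].mean() / y_pred[a == 1].mean()

for accuracy in (1.0, 0.9, 0.8, 0.7, 0.6):
    # Simulate an inferred attribute: each value flips with probability
    # (1 - accuracy), mimicking an imperfect inference model.
    flip = rng.random(n) > accuracy
    a_inferred = np.where(flip, 1 - a_true, a_true)
    print(f"inference accuracy {accuracy:.1f}: "
          f"measured DI = {disparate_impact(y_pred, a_inferred):.3f}")
```

As accuracy falls toward 0.5, the measured disparate impact is pulled toward 1.0, i.e., the disparity is understated, which is exactly why it matters how robust a mitigation algorithm remains when the attribute it conditions on is noisy.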

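Since the abstract singles out the disparate impact remover as the least sensitive strategy, below is a hedged sketch of applying that pre-processing step to a data set whose sensitive column is inferred rather than observed. It uses the open-source AIF360 library (whose DisparateImpactRemover additionally requires the BlackBoxAuditing package); the paper does not name its tooling, so treat this as one plausible realization, and the toy DataFrame is an assumption.

```python
# Sketch: pre-processing mitigation with an *inferred* sensitive attribute.
# Requires: pip install aif360 BlackBoxAuditing
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import DisparateImpactRemover
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: one feature, a binary label, and a sensitive attribute
# produced by a separate inference model (hence "_inferred").
df = pd.DataFrame({
    "feature":      [0.2, 0.8, 0.5, 0.9, 0.1, 0.7],
    "sex_inferred": [0,   1,   0,   1,   0,   1],
    "label":        [0,   1,   0,   1,   1,   1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex_inferred"],
)

# Baseline fairness, measured against the inferred attribute.
baseline = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex_inferred": 0}],
    privileged_groups=[{"sex_inferred": 1}],
)
print("disparate impact before repair:", baseline.disparate_impact())

# Repair the feature distributions so they are indistinguishable across the
# (inferred) groups; repair_level=1.0 is full repair. The remover changes
# features only, so the repaired data is meant for training a downstream
# classifier on repaired.features.
remover = DisparateImpactRemover(repair_level=1.0,
                                 sensitive_attribute="sex_inferred")
repaired = remover.fit_transform(dataset)
```

If the inference model is only, say, 80% accurate, some rows in sex_inferred are wrong, so the repair is applied to partially misgrouped rows; the paper's question is how much mitigation quality survives that misgrouping.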

Source journal: Frontiers in Artificial Intelligence
CiteScore: 6.10
Self-citation rate: 2.50%
Publication volume: 272
Review time: 13 weeks
Latest articles from this journal:
Computational understanding of non-coding RNA pairwise interactions.
Both ends of artificial intelligence impacting privacy: a review of violation and protection.
Machine learning strategies for predicting pediatric suicidal behaviors in a Brazilian emergency setting.
Formal methods for safety-critical machine learning: a systematic literature review.
Transforming data to delight: AI-led campaign personalization shapes digital natives' intention to accept over a TAM-VAM approach.