FaiR-N: Fair and Robust Neural Networks for Structured Data

Shubham Sharma, Alan H. Gee, D. Paydarfar, J. Ghosh
{"title":"Fair - n:结构化数据的公平和鲁棒神经网络","authors":"Shubham Sharma, Alan H. Gee, D. Paydarfar, J. Ghosh","doi":"10.1145/3461702.3462559","DOIUrl":null,"url":null,"abstract":"Fairness and robustness in machine learning are crucial when individuals are subject to automated decisions made by models in high-stake domains. To promote ethical artificial intelligence, fairness metrics that rely on comparing model error rates across subpopulations have been widely investigated for the detection and mitigation of bias. However, fairness measures that rely on comparing the ability to achieve recourse have been relatively unexplored. In this paper, we present a novel formulation for training neural networks that considers the distance of data observations to the decision boundary such that the new objective: (1) reduces the disparity in the average ability of recourse between individuals in each protected group, and (2) increases the average distance of data points to the boundary to promote adversarial robustness. We demonstrate that models trained with this new objective are more fair and adversarially robust neural networks, with similar accuracies, when compared to models without it. We also investigate a trade-off between the recourse-based fairness and robustness objectives. Moreover, we qualitatively motivate and empirically show that reducing recourse disparity across protected groups also improves fairness measures that rely on error rates. To the best of our knowledge, this is the first time that recourse disparity across groups are considered to train fairer neural networks.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"FaiR-N: Fair and Robust Neural Networks for Structured Data\",\"authors\":\"Shubham Sharma, Alan H. Gee, D. Paydarfar, J. Ghosh\",\"doi\":\"10.1145/3461702.3462559\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Fairness and robustness in machine learning are crucial when individuals are subject to automated decisions made by models in high-stake domains. To promote ethical artificial intelligence, fairness metrics that rely on comparing model error rates across subpopulations have been widely investigated for the detection and mitigation of bias. However, fairness measures that rely on comparing the ability to achieve recourse have been relatively unexplored. In this paper, we present a novel formulation for training neural networks that considers the distance of data observations to the decision boundary such that the new objective: (1) reduces the disparity in the average ability of recourse between individuals in each protected group, and (2) increases the average distance of data points to the boundary to promote adversarial robustness. We demonstrate that models trained with this new objective are more fair and adversarially robust neural networks, with similar accuracies, when compared to models without it. We also investigate a trade-off between the recourse-based fairness and robustness objectives. Moreover, we qualitatively motivate and empirically show that reducing recourse disparity across protected groups also improves fairness measures that rely on error rates. 
To the best of our knowledge, this is the first time that recourse disparity across groups are considered to train fairer neural networks.\",\"PeriodicalId\":197336,\"journal\":{\"name\":\"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society\",\"volume\":\"5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3461702.3462559\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3461702.3462559","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

Fairness and robustness in machine learning are crucial when individuals are subject to automated decisions made by models in high-stakes domains. To promote ethical artificial intelligence, fairness metrics that rely on comparing model error rates across subpopulations have been widely investigated for the detection and mitigation of bias. However, fairness measures that rely on comparing the ability to achieve recourse have been relatively unexplored. In this paper, we present a novel formulation for training neural networks that considers the distance of data observations to the decision boundary such that the new objective: (1) reduces the disparity in the average ability of recourse between individuals in each protected group, and (2) increases the average distance of data points to the boundary to promote adversarial robustness. We demonstrate that models trained with this new objective are more fair and adversarially robust neural networks, with similar accuracies, when compared to models without it. We also investigate a trade-off between the recourse-based fairness and robustness objectives. Moreover, we qualitatively motivate and empirically show that reducing recourse disparity across protected groups also improves fairness measures that rely on error rates. To the best of our knowledge, this is the first time that recourse disparity across groups is considered to train fairer neural networks.
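
The abstract only describes the objective at a high level. As a rough illustration of how such a combined objective could be assembled, the sketch below adds a recourse-disparity penalty and a boundary-distance bonus to a standard classification loss. Everything here is an assumption for illustration, not the authors' code: the MLP architecture, the first-order approximation of distance to the decision boundary (|f(x)| / ||∇x f(x)||), the weights lam_fair and lam_rob, and the function names fair_n_loss and boundary_distance are all hypothetical, and the paper's actual recourse distance may be computed differently.

```python
# Hypothetical sketch of a FaiR-N-style training objective (not the authors' code).
# Distance to the decision boundary is approximated by the first-order estimate
# |f(x)| / ||grad_x f(x)||; the paper's actual recourse distance may differ.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, d_in, d_hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 1),  # single logit for binary classification
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def boundary_distance(model, x):
    """First-order estimate of each point's distance to the decision boundary."""
    x = x.clone().requires_grad_(True)
    logit = model(x)
    grad = torch.autograd.grad(logit.sum(), x, create_graph=True)[0]
    return logit.abs() / (grad.norm(dim=1) + 1e-8)

def fair_n_loss(model, x, y, group, lam_fair=1.0, lam_rob=0.1):
    """Task loss + recourse-disparity penalty - robustness bonus (illustrative weights)."""
    logits = model(x)
    task = nn.functional.binary_cross_entropy_with_logits(logits, y.float())
    dist = boundary_distance(model, x)
    d0 = dist[group == 0].mean()   # average distance for protected group 0
    d1 = dist[group == 1].mean()   # average distance for protected group 1
    disparity = (d0 - d1).abs()    # objective (1): equalize average ability of recourse
    robustness = dist.mean()       # objective (2): push points away from the boundary
    return task + lam_fair * disparity - lam_rob * robustness
```

In a standard training loop this loss would simply replace the usual cross-entropy term; the batch is assumed to contain members of both protected groups, otherwise the group means are undefined. How the two penalty weights are chosen governs the fairness-robustness trade-off the abstract refers to.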