FaiR-N: Fair and Robust Neural Networks for Structured Data
Shubham Sharma, Alan H. Gee, D. Paydarfar, J. Ghosh
Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. DOI: 10.1145/3461702.3462559
Fairness and robustness in machine learning are crucial when individuals are subject to automated decisions made by models in high-stakes domains. To promote ethical artificial intelligence, fairness metrics that rely on comparing model error rates across subpopulations have been widely investigated for the detection and mitigation of bias. However, fairness measures that rely on comparing the ability to achieve recourse have been relatively unexplored. In this paper, we present a novel formulation for training neural networks that considers the distance of data observations to the decision boundary, such that the new objective: (1) reduces the disparity in the average ability of recourse between individuals in each protected group, and (2) increases the average distance of data points to the boundary to promote adversarial robustness. We demonstrate that models trained with this new objective are fairer and more adversarially robust neural networks, with similar accuracy, compared to models trained without it. We also investigate a trade-off between the recourse-based fairness and robustness objectives. Moreover, we qualitatively motivate and empirically show that reducing recourse disparity across protected groups also improves fairness measures that rely on error rates. To the best of our knowledge, this is the first time that recourse disparity across groups is considered to train fairer neural networks.
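To make the shape of such an objective concrete, below is a minimal PyTorch sketch: standard cross-entropy, plus a penalty on the gap in average distance-to-boundary between two protected groups (a proxy for disparity in the ability to achieve recourse), minus a term rewarding a larger average distance-to-boundary (a proxy for adversarial robustness). The first-order distance estimate |f(x)| / ||∇x f(x)||, the function name fair_robust_loss, and the weights lambda_fair and lambda_robust are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

def fair_robust_loss(model, x, y, group, lambda_fair=1.0, lambda_robust=0.1, eps=1e-8):
    """Illustrative combined objective (a sketch, not the paper's exact loss):
    cross-entropy
    + lambda_fair  * |mean dist-to-boundary of group 0 - mean dist-to-boundary of group 1|
    - lambda_robust * mean dist-to-boundary over all points.
    Distance to the decision boundary is approximated to first order as
    |logit| / ||grad_x logit||."""
    x = x.clone().requires_grad_(True)
    logits = model(x).squeeze(-1)  # binary classifier producing one logit per example
    ce = nn.functional.binary_cross_entropy_with_logits(logits, y.float())

    # First-order estimate of each point's distance to the decision boundary.
    grads = torch.autograd.grad(logits.sum(), x, create_graph=True)[0]
    dist = logits.abs() / (grads.flatten(1).norm(dim=1) + eps)

    # (1) Disparity in average recourse ability between the two protected groups.
    d0, d1 = dist[group == 0], dist[group == 1]
    fairness_gap = (d0.mean() - d1.mean()).abs()

    # (2) Encourage a larger average margin to promote adversarial robustness.
    robustness = dist.mean()

    return ce + lambda_fair * fairness_gap - lambda_robust * robustness
```

In this kind of formulation, lambda_fair and lambda_robust would be tuned to balance the recourse-disparity and robustness terms against predictive accuracy, which reflects the trade-off between the fairness and robustness objectives that the paper investigates.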