Interactive active learning for fairness with partial group label
Zeyu Yang, Jizhi Zhang, Fuli Feng, Chongming Gao, Qifan Wang, Xiangnan He
AI Open, vol. 4 (2023), pp. 175-182. DOI: 10.1016/j.aiopen.2023.10.003
https://www.sciencedirect.com/science/article/pii/S2666651023000190
Abstract
Rapidly developing AI technologies have found numerous applications across many domains of human society, and ensuring fairness and preventing discrimination are critical considerations when building AI models. In real-world applications, however, sensitive attributes can often be collected only partially, primarily because such data collection is costly and risks violating privacy. A common remedy is label reconstruction: training a separate learner to predict the missing sensitive attributes. Existing methods, however, focus solely on improving the prediction accuracy of this sensitive learner as a standalone model, ignoring the gap between its accuracy and the fairness of the base model it is meant to serve. To bridge this gap, this paper proposes an interactive learning framework that optimizes the sensitive learner while accounting for the fairness of the base learner. Furthermore, a new active sampling strategy is developed to select the data most valuable to the sensitive learner with respect to the fairness of the base model. Comprehensive evaluations across multiple datasets and fairness criteria demonstrate the effectiveness of the proposed method in improving model fairness.
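To make the setting concrete, below is a minimal sketch (not the authors' implementation) of fairness-aware active sampling for sensitive-attribute labels. The synthetic data, the `dp_gap` helper, and the selection score that combines the sensitive learner's uncertainty with the base model's predicted-positive rate are all hypothetical stand-ins for the paper's fairness-driven criterion.

```python
# Sketch: actively query sensitive-attribute labels that matter most for
# estimating the base model's demographic-parity gap. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: X features, y task labels, s sensitive attribute
# (observed only on a small labeled subset).
n, d = 1000, 5
X = rng.normal(size=(n, d))
s = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
y = (X[:, 1] + 0.3 * s + 0.5 * rng.normal(size=n) > 0).astype(int)

labeled = rng.choice(n, size=50, replace=False)     # indices where s is known
unlabeled = np.setdiff1d(np.arange(n), labeled)

base = LogisticRegression().fit(X, y)               # base (task) model
yhat = base.predict(X)

def dp_gap(yhat, s_hat):
    """Demographic-parity gap: |P(yhat=1 | s=0) - P(yhat=1 | s=1)|."""
    return abs(yhat[s_hat == 0].mean() - yhat[s_hat == 1].mean())

budget = 10
for _ in range(budget):
    # Sensitive learner: reconstructs s from features.
    sens = LogisticRegression().fit(X[labeled], s[labeled])
    p = sens.predict_proba(X[unlabeled])[:, 1]
    uncertainty = 1.0 - np.abs(2 * p - 1)           # highest near p = 0.5
    # Hypothetical fairness weight: up-weight points the base model
    # predicts positive, since they move the estimated parity gap most.
    score = uncertainty * (0.5 + yhat[unlabeled])
    pick = unlabeled[np.argmax(score)]              # query its true s
    labeled = np.append(labeled, pick)
    unlabeled = unlabeled[unlabeled != pick]

s_hat = LogisticRegression().fit(X[labeled], s[labeled]).predict(X)
print(f"estimated DP gap: {dp_gap(yhat, s_hat):.3f}  "
      f"true DP gap: {dp_gap(yhat, s):.3f}")
```

The key design point this sketch illustrates is that the query score is tied to the base model's fairness estimate rather than to the sensitive learner's accuracy alone, which is the disparity the paper argues existing label-reconstruction methods overlook.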