FairDRO: Group fairness regularization via classwise robust optimization
Taeeon Park, Sangwon Jung, Sanghyuk Chun, Taesup Moon
Neural Networks, vol. 182 (2024), 106891. DOI: 10.1016/j.neunet.2024.106891
Abstract
Existing group fairness-aware training methods fall into two categories: re-weighting underrepresented groups according to certain rules, or using regularization terms such as smoothed approximations of fairness metrics or surrogate statistical quantities. While each category has its own strengths in applicability or performance, their success is typically limited to specific cases. To address this, we propose a new approach called FairDRO, which takes advantage of both categories through a classwise group distributionally robust optimization (DRO) framework. Our method unifies re-weighting and regularization by incorporating a well-justified group fairness metric into the objective as a regularizer and solving it through a principled re-weighting strategy. To optimize the resulting objective efficiently, we adopt an iterative algorithm and develop two variants of the FairDRO algorithm, depending on the choice of surrogate loss. For a deeper understanding, we derive three theoretical results: (i) a closed-form solution for the correct re-weights; (ii) justifications for using the surrogate losses; and (iii) a convergence analysis of our method. Experimental results show that our algorithms consistently achieve state-of-the-art accuracy-fairness trade-offs across multiple benchmarks, demonstrating scalability and broader applicability than existing methods.
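To make the re-weighting mechanism concrete, the sketch below shows the generic exponentiated-gradient group re-weighting update from standard group DRO (Sagawa et al., 2020), which methods in this family build on. It is a minimal illustration, not the paper's FairDRO algorithm: the classwise structure and closed-form re-weights are the paper's own contributions, and the function name `group_dro_step`, the step size `eta`, and the toy data here are assumptions made for illustration.

```python
# A minimal group-DRO-style re-weighting sketch in PyTorch. NOT the paper's
# FairDRO algorithm; it illustrates the generic exponentiated-gradient update
# on group weights that FairDRO-style objectives build on.
import torch
import torch.nn.functional as F

def group_dro_step(per_sample_loss, group_ids, q, eta=0.01):
    """Update group weights q in place and return the weighted robust loss."""
    num_groups = q.numel()
    group_losses = torch.zeros(num_groups)
    for g in range(num_groups):
        mask = group_ids == g
        if mask.any():  # groups absent from this batch keep a zero loss
            group_losses[g] = per_sample_loss[mask].mean()
    with torch.no_grad():  # mirror-ascent (exponentiated-gradient) step on q
        q *= torch.exp(eta * group_losses)
        q /= q.sum()
    # Weighted average; gradients flow through group_losses to the model.
    return (q * group_losses).sum()

# Toy usage: 8 samples, 2 classes, a binary sensitive attribute.
logits = torch.randn(8, 2, requires_grad=True)
labels = torch.randint(0, 2, (8,))
groups = torch.randint(0, 2, (8,))
losses = F.cross_entropy(logits, labels, reduction="none")
q = torch.ones(2) / 2  # start from uniform group weights
robust_loss = group_dro_step(losses, groups, q)
robust_loss.backward()
```

The soft multiplicative update concentrates weight on the groups with the highest current loss without switching hard to a single worst-case group, which keeps the robust objective amenable to ordinary minibatch SGD.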
About the journal
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. The journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, it aims to encourage the development of biologically inspired artificial intelligence.