Fair-Siamese Approach for Accurate Fairness in Image Classification
Kwanhyong Lee, Van-Thuan Pham, Jiayuan He
2023 IEEE/ACM International Workshop on Equitable Data & Technology (FairWare), May 2023
DOI: 10.1109/FairWare59297.2023.00005
Citations: 0
Abstract
Machine learning models are trained by iteratively fitting their parameters to the features of the training data. These features may correlate with sensitive attributes such as race, age, or gender, and can therefore introduce discrimination against minority groups. In a recent study, a fair Siamese network was applied to discrete structured data under 'accurate fairness' constraints, showing promising results in improving fairness without sacrificing accuracy. However, the data augmentation strategy used in that paper cannot be applied to computer vision applications because it relies on a discrete perturbation method. In this paper, we adapt the structure of the fair Siamese approach to image classification and address the challenge of data augmentation using CycleGAN. We benchmark the accuracy and fairness of our approach against the adversarial debiasing method. The results show that this adaptation of the fair Siamese approach outperforms adversarial debiasing in both accuracy and fairness across a variety of image classification tasks.