Christopher T. H. Teo; Milad Abdollahzadeh; Ngai-Man Cheung

IEEE Journal of Selected Topics in Signal Processing, vol. 18, no. 2, pp. 155–167. Published 2024-02-14. DOI: 10.1109/JSTSP.2024.3363419.
FairTL: A Transfer Learning Approach for Bias Mitigation in Deep Generative Models
This work studies fair generative models. We reveal and quantify the biases in state-of-the-art (SOTA) GANs w.r.t. different sensitive attributes. To address these biases, our main contribution is a set of novel methods for learning fair generative models via transfer learning. First, we propose FairTL, in which we pre-train the generative model on a large biased dataset and then adapt it using a small fair reference dataset. Second, to further improve sample diversity, we propose FairTL++, which adds two innovations: 1) aligned feature adaptation, which preserves learned general knowledge while improving fairness by adapting only sensitive-attribute-specific parameters; and 2) multiple feedback discrimination, which introduces a frozen discriminator for quality feedback and a separate, evolving discriminator for fairness feedback. Taking one step further, we consider an alternative, challenging, and practical setup in which only a pre-trained model is available and the dataset used to pre-train it is inaccessible. We note that previous work requires access to large biased datasets and cannot handle this setup. Extensive experimental results show that FairTL and FairTL++ achieve state-of-the-art performance in quality, diversity, and fairness in both setups.
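The aligned feature adaptation idea described above — freezing the pre-trained weights and updating only sensitive-attribute-specific parameters against the small fair reference set — can be illustrated with a minimal sketch. This is not the authors' implementation; the parameter names, the `sensitive_keys` partition, and the stub gradient are all illustrative assumptions.

```python
def fairness_gradient(value, fair_batch):
    # Stub "fairness" gradient: pulls the parameter toward the mean of the
    # small fair reference batch. A real system would backpropagate a
    # fairness-discriminator loss instead.
    target = sum(fair_batch) / len(fair_batch)
    return value - target

def fairtl_adapt(params, sensitive_keys, fair_batch, lr=0.5):
    """Sketch of aligned feature adaptation: only parameters designated as
    sensitive-attribute-specific are updated; all other pre-trained
    parameters stay frozen, preserving learned general knowledge."""
    adapted = dict(params)
    for key, value in params.items():
        if key in sensitive_keys:  # adapt only sensitive-specific params
            adapted[key] = value - lr * fairness_gradient(value, fair_batch)
        # else: frozen — knowledge from biased pre-training is preserved
    return adapted
```

For example, with pre-trained parameters `{"style": 1.0, "gender_head": 2.0}` and only `"gender_head"` marked sensitive, adaptation on a fair batch moves `gender_head` toward the fair statistics while `style` is untouched.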
Journal overview:
The IEEE Journal of Selected Topics in Signal Processing (JSTSP) focuses on the Field of Interest of the IEEE Signal Processing Society, which encompasses the theory and application of various signal processing techniques. These techniques include filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals using digital or analog devices. The term "signal" covers a wide range of data types, including audio, video, speech, image, communication, geophysical, sonar, radar, medical, musical, and others.
The journal format allows for in-depth exploration of signal processing topics, enabling the Society to cover both established and emerging areas. This includes interdisciplinary fields such as biomedical engineering and language processing, as well as areas not traditionally associated with engineering.