Privacy-Preserving Generative Modeling With Sliced Wasserstein Distance
Pub Date: 2024-12-12 | DOI: 10.1109/TIFS.2024.3516549
Ziniu Liu; Han Yu; Kai Chen; Aiping Li
Large models require larger datasets. While training large models on massive amounts of data brings clear benefits, it also raises serious privacy concerns. To address this issue, we propose a novel approach to private generative modeling that uses the Sliced Wasserstein Distance (SWD) metric in a Differentially Private (DP) manner. We further propose Normalized Clipping, a parameter-free clipping technique that yields higher-quality generated images. Experiments demonstrate the advantages of Normalized Clipping over traditional clipping in both parameter tuning and model performance. Moreover, experimental results indicate that our model outperforms previous methods on differentially private image generation tasks.
{"title":"Privacy-Preserving Generative Modeling With Sliced Wasserstein Distance","authors":"Ziniu Liu;Han Yu;Kai Chen;Aiping Li","doi":"10.1109/TIFS.2024.3516549","DOIUrl":"10.1109/TIFS.2024.3516549","url":null,"abstract":"Large models require larger datasets. While people gain from using massive amounts of data to train large models, they must be concerned about privacy issues. To address this issue, we propose a novel approach for private generative modeling using the Sliced Wasserstein Distance (SWD) metric in a Differential Private (DP) manner. We propose Normalized Clipping, a parameter-free clipping technique that generates higher-quality images. We demonstrate the advantages of Normalized Clipping over the traditional clipping method in parameter tuning and model performance through experiments. Moreover, experimental results indicate that our model outperforms previous methods in differentially private image generation tasks.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"1011-1022"},"PeriodicalIF":6.3,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142815916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adversarial training (AT) is widely considered the state-of-the-art technique for improving the robustness of deep neural networks (DNNs) against adversarial examples (AEs). Nevertheless, recent studies have revealed that adversarially trained models are prone to unfairness problems. Recent works in this field usually apply class-wise regularization methods to enhance the fairness of AT. However, this paper finds that such paradigms can be sub-optimal for improving robust fairness. Specifically, we empirically observe that AEs on which the model is already robust (referred to as “easy AEs” in this paper) are useless, and even harmful, for improving robust fairness. Motivated by this observation, we propose the hard adversarial example mining (HAM) technique, which concentrates on mining hard AEs while discarding easy AEs during AT. Concretely, HAM identifies the easy and hard AEs with a fast adversarial attack method. By discarding the easy AEs and reweighting the hard AEs, the robust fairness of the model can be efficiently and effectively improved. Extensive experiments on four image classification datasets demonstrate that HAM improves robust fairness and training efficiency compared with several state-of-the-art fair adversarial training methods. Our code is available at https://github.com/yyl-github-1896/HAM.
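For the abstract above, here is a minimal sketch of one HAM-style training step, under the assumption that "easy AEs" are those a fast one-step attack (FGSM is used here) fails to fool, and that the remaining hard AEs are simply up-weighted by a constant factor. The attack choice, epsilon, and reweighting rule are illustrative assumptions, not the paper's exact recipe.

# Illustrative sketch of a HAM-style step; not the authors' released code.
import torch
import torch.nn.functional as F

def ham_step(model, x, y, optimizer, eps=8 / 255, hard_weight=1.5):
    # Craft AEs with a fast one-step attack (FGSM, assumed here).
    x_adv = x.clone().detach().requires_grad_(True)
    attack_loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(attack_loss, x_adv)[0]
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()

    # Easy AEs: still classified correctly under attack -> discard them.
    with torch.no_grad():
        hard_mask = model(x_adv).argmax(dim=1).ne(y)
    if not hard_mask.any():
        return None  # nothing to learn from in this batch

    # Train only on the hard AEs, with an up-weighted loss.
    optimizer.zero_grad()
    hard_loss = hard_weight * F.cross_entropy(model(x_adv[hard_mask]), y[hard_mask])
    hard_loss.backward()
    optimizer.step()
    return hard_loss.item()

The key design point the abstract emphasizes is the filtering step: only AEs that actually fool the model contribute to the update, which is what makes the procedure both fairer across classes and cheaper than training on every AE.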