Finetune and Label Reversal: Privacy-preserving unlearning strategies for GAN models in cloud computing
Lang Li, Pei-gen Ye, Zhengdao Li, Zuopeng Yang, Zhenxin Zhang
Computer Standards & Interfaces, Volume 93, Article 103976. DOI: 10.1016/j.csi.2025.103976
Published: 2025-02-06
Citations: 0
Abstract
With governments placing increasing emphasis on data protection, machine unlearning has become a prominent research topic. Machine unlearning is the process of eliminating the influence of specific samples from a trained machine learning model. Most current work on machine unlearning focuses on supervised learning, with limited research on unsupervised models such as Generative Adversarial Networks (GANs). As generative models, GANs are widely deployed on cloud computing platforms to produce high-quality synthetic data for applications including image synthesis, data augmentation, and anomaly detection. However, these models are often trained on large datasets that may contain personal or sensitive information, raising data-privacy concerns in cloud environments. Given the structural differences between GANs and traditional supervised models, transferring classical supervised unlearning algorithms to GANs poses significant challenges; moreover, the evaluation metrics used for supervised unlearning are not directly applicable to GANs. To address these challenges, we propose two unlearning methods for GANs: Finetune and Label Reversal. Finetune extends supervised unlearning by feeding the residual (retained) data back into a pretrained GAN for further refinement. Label Reversal reverses the labels of the samples to be unlearned and trains iteratively to neutralize their influence on the model. To meet the needs of cloud-based GAN applications, we also introduce an evaluation metric for GAN unlearning based on prediction loss. This metric verifies the reliability of unlearning while maintaining the quality of the synthetic data generated in cloud environments. Extensive experiments on the SVHN, CIFAR10, and CIFAR100 datasets demonstrate the efficiency of our methods: specific samples are effectively removed from GAN models while their generative capabilities are preserved, making the approach well suited to privacy-preserving GAN applications in cloud computing.
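The abstract describes Label Reversal only at a high level: the labels of the samples to be unlearned are reversed, and training continues so that the conditional generator is driven away from the original (sample, label) associations. As a minimal sketch of the relabeling step only (the function name, the uniform choice among the other classes, and the plain-Python data layout are all assumptions, not the paper's implementation), the preprocessing could look like this:

```python
import random

def reverse_labels(labels, forget_indices, num_classes, seed=0):
    """Relabel samples marked for unlearning (hypothetical helper).

    For each index in `forget_indices`, replace the original class label
    with a uniformly chosen *different* class; labels of retained samples
    are left unchanged. Continued conditional-GAN training on the result
    would then push the model away from the forgotten pairings.
    """
    rng = random.Random(seed)
    reversed_labels = list(labels)
    for i in forget_indices:
        original = labels[i]
        # Draw the new label from the remaining num_classes - 1 classes,
        # so the reversed label is guaranteed to differ from the original.
        choices = [c for c in range(num_classes) if c != original]
        reversed_labels[i] = rng.choice(choices)
    return reversed_labels
```

The relabeled dataset would then be used for the iterative fine-tuning rounds the abstract mentions; how many rounds are needed, and how this interacts with the prediction-loss metric, is detailed in the paper itself.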
About the journal:
The quality of software, well-defined interfaces (hardware and software), the process of digitalisation, and accepted standards in these fields are essential for building and exploiting complex computing, communication, multimedia and measuring systems. Standards can simplify the design and construction of individual hardware and software components and help to ensure satisfactory interworking.
Computer Standards & Interfaces is an international journal dealing specifically with these topics.
The journal
• Provides information about activities and progress on the definition of computer standards, software quality, interfaces and methods, at national, European and international levels
• Publishes critical comments on standards and standards activities
• Disseminates users' experiences and case studies in the application and exploitation of established or emerging standards, interfaces and methods
• Offers a forum for discussion on actual projects, standards, interfaces and methods by recognised experts
• Stimulates relevant research by providing a specialised refereed medium.