Jianxin Zhang;Mengda Zhao;Zhenwei Wang;Weijian Su;Pengfei Wang
{"title":"Model Recovery in Federated Unlearning With Restricted Server Data Resources","authors":"Jianxin Zhang;Mengda Zhao;Zhenwei Wang;Weijian Su;Pengfei Wang","doi":"10.1109/JIOT.2025.3540463","DOIUrl":null,"url":null,"abstract":"Recent model recovery methods in federated unlearning (FUL) either rely on additional communication with the remaining clients or require large amounts of high-quality data from the server for training, overlooking scenarios with limited data resources. Currently, contrastive language-image pretraining (CLIP) has demonstrated remarkable performance across a wide range of tasks, particularly excelling in few-shot learning scenarios. In this article, inspired by CLIP, we explore the scenario of few-shot knowledge distillation and propose CLIP-guided few-shot knowledge distillation (CGKD) for model recovery in FUL. CGKD mainly consists of three components: 1) the unlearning module constructs the unlearning model by erasing all historical contributions of the target client, and this model is treated as the student model; 2) fine-tuning the pretrained CLIP model using few-shot data from the server side to obtain a more robust teacher model (CLIP<inline-formula> <tex-math>$^{\\mathbf {*}}$ </tex-math></inline-formula>); and 3) model recovery is achieved through knowledge distillation, leveraging the rich visual and semantic knowledge of CLIP<inline-formula> <tex-math>$^{\\mathbf {*}}$ </tex-math></inline-formula> to enhance the student model’s understanding of image semantic context, thereby improving the performance of the unlearning model. 
Extensive experimental results demonstrate that CGKD outperforms the compared FUL method in recovery performance across four standard datasets, validating the effectiveness of our approach.","PeriodicalId":54347,"journal":{"name":"IEEE Internet of Things Journal","volume":"12 11","pages":"17920-17935"},"PeriodicalIF":8.9000,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Internet of Things Journal","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10879239/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Recent model recovery methods in federated unlearning (FUL) either rely on additional communication with the remaining clients or require large amounts of high-quality server-side data for training, overlooking scenarios with limited data resources. Meanwhile, contrastive language-image pretraining (CLIP) has demonstrated remarkable performance across a wide range of tasks, particularly in few-shot learning. In this article, inspired by CLIP, we explore few-shot knowledge distillation and propose CLIP-guided few-shot knowledge distillation (CGKD) for model recovery in FUL. CGKD consists of three components: 1) an unlearning module that constructs the unlearning model by erasing all historical contributions of the target client and treats this model as the student; 2) a fine-tuning step that adapts the pretrained CLIP model on few-shot server-side data to obtain a more robust teacher model (CLIP*); and 3) a model recovery step that performs knowledge distillation, leveraging the rich visual and semantic knowledge of CLIP* to enhance the student model's understanding of image semantic context and thereby improve the performance of the unlearning model. Extensive experiments demonstrate that CGKD outperforms the compared FUL methods in recovery performance across four standard datasets, validating the effectiveness of our approach.
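The distillation step described above (component 3) can be illustrated with a generic knowledge distillation objective. The sketch below is not the paper's implementation; it is a minimal NumPy version of the standard temperature-scaled distillation loss, where the teacher's soft targets (here standing in for the fine-tuned CLIP* model) guide the student (the unlearned global model). The function names and the choice of temperature are illustrative assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Numerically stable softmax with temperature scaling.
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation (Hinton et al.)."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets (e.g. CLIP*)
    q = softmax(student_logits, temperature)  # student = unlearned model
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1)
    return float(kl.mean() * temperature ** 2)
```

When the student's logits already match the teacher's, the loss is zero; the further the student's predictions drift from the teacher's soft targets, the larger the loss, which is what drives recovery of the unlearned model's performance during distillation.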
About the journal:
The IEEE Internet of Things (IoT) Journal publishes articles and review articles covering various aspects of IoT, including IoT system architecture, IoT enabling technologies, IoT communication and networking protocols such as network coding, and IoT services and applications. Topics encompass IoT's impact on sensor technologies, big data management, and future Internet design for applications such as smart cities and smart homes. Fields of interest include IoT architecture, such as things-centric, data-centric, and service-oriented IoT architecture; IoT enabling technologies and systematic integration, such as sensor technologies, big sensor data management, and future Internet design for IoT; IoT services, applications, and test-beds, such as IoT service middleware, IoT application programming interfaces (APIs), IoT application design, and IoT trials/experiments; and IoT standardization activities and technology development in different standards development organizations (SDOs) such as IEEE, IETF, ITU, 3GPP, ETSI, etc.