An Unsupervised Federated Domain Adaptation Method Based on Knowledge Distillation
Yunpeng Xiao; Yutong Guo; Haipeng Zhu; Chaolong Jia; Qian Li; Rong Wang; Guoyin Wang
IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 6, pp. 10993-11007, 2024. DOI: 10.1109/TNNLS.2024.3510382
Abstract
Conventional unsupervised multisource domain adaptation (UMDA) methods assume that all source domain data are directly accessible. To address the problem that existing UMDA methods cannot directly obtain source domain data in federated learning (FL), this article proposes a knowledge distillation-based multisource domain adaptation method suited to FL. First, since knowledge distillation allows learning solely through model access, this article adopts an improved voting mechanism that applies a smoothing technique to the confidence distributions of the source domain models. This reduces the influence of models with extremely high confidence, thereby extracting high-quality consensus knowledge. Second, this article designs an adaptive weighting strategy for the teacher models: it identifies irrelevant and malicious domains according to the similarity between the consensus knowledge and each teacher model's output, thereby improving the model's robustness against negative transfer. Finally, this article introduces the idea of contrastive learning to control the drift of any single source domain and bridge the deviation between the representations learned by the local model and the global model. Experiments show that the proposed method outperforms mainstream UMDA methods. Moreover, it is robust to negative transfer, making it suitable for many practical FL applications.
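The abstract describes three mechanisms: smoothing each source model's confidence distribution before voting, adaptively weighting teacher models by their similarity to the consensus, and a contrastive term aligning local and global representations. The sketch below illustrates how these pieces could fit together in PyTorch; the function names, the temperature-based smoothing, the cosine-similarity weighting, and the InfoNCE-style contrastive loss are all illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def consensus_knowledge(teacher_logits, temperature=4.0):
    """Smooth each teacher's confidence distribution with a temperature
    before voting, so a single over-confident source model cannot
    dominate the consensus (assumed smoothing technique)."""
    smoothed = [F.softmax(logits / temperature, dim=1) for logits in teacher_logits]
    return torch.stack(smoothed).mean(dim=0)  # [batch, num_classes]

def adaptive_teacher_weights(teacher_logits, consensus, temperature=4.0):
    """Score each teacher by the cosine similarity between its smoothed
    output and the consensus; irrelevant or malicious domains, whose
    outputs deviate from the consensus, receive low weight."""
    sims = [
        F.cosine_similarity(F.softmax(l / temperature, dim=1), consensus, dim=1).mean()
        for l in teacher_logits
    ]
    return F.softmax(torch.stack(sims), dim=0)  # one weight per teacher

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between the student and the adaptively weighted
    teacher ensemble (standard soft-label distillation)."""
    consensus = consensus_knowledge(teacher_logits, temperature)
    weights = adaptive_teacher_weights(teacher_logits, consensus, temperature)
    target = sum(w * F.softmax(l / temperature, dim=1)
                 for w, l in zip(weights, teacher_logits))
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(log_student, target, reduction="batchmean") * temperature ** 2

def local_global_contrastive(z_local, z_global, tau=0.1):
    """InfoNCE-style loss pulling each local representation toward the
    matching global representation, one plausible way to bridge the
    local/global deviation the abstract mentions."""
    z_local = F.normalize(z_local, dim=1)
    z_global = F.normalize(z_global, dim=1)
    logits = z_local @ z_global.t() / tau        # pairwise similarities
    labels = torch.arange(z_local.size(0), device=z_local.device)
    return F.cross_entropy(logits, labels)

# Minimal smoke test with random tensors standing in for model outputs.
if __name__ == "__main__":
    teachers = [torch.randn(8, 10) for _ in range(3)]  # 3 source domains
    student = torch.randn(8, 10)
    loss = distillation_loss(student, teachers)
    loss = loss + local_global_contrastive(torch.randn(8, 64), torch.randn(8, 64))
    print(loss.item())
```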
Journal Introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.