An Unsupervised Federated Domain Adaptation Method Based on Knowledge Distillation

IF 8.9 · CAS Zone 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2024-12-17 · DOI: 10.1109/TNNLS.2024.3510382
Yunpeng Xiao;Yutong Guo;Haipeng Zhu;Chaolong Jia;Qian Li;Rong Wang;Guoyin Wang
Journal Article · Volume 36, Issue 6, Pages 10993-11007 · https://ieeexplore.ieee.org/document/10804845/
Citations: 0

Abstract

Conventional unsupervised multisource domain adaptation (UMDA) methods assume that all source domain data are directly accessible. To address the problem that current UMDA methods cannot directly obtain source domain data in federated learning (FL), a knowledge distillation-based multisource domain adaptation method adapted to FL is proposed. First, considering that knowledge distillation allows learning solely through model access, this article adopts an improved voting mechanism that applies a smoothing technique to the confidence distributions of the source domain models. This reduces the influence of models with extremely high confidence, thereby extracting high-quality consensus knowledge. Second, this article designs an adaptive weighting strategy for the teacher models. It identifies irrelevant and malicious domains according to the similarity between the consensus knowledge and each teacher model's output, which improves the robustness of the model against negative transfer. Finally, this article introduces contrastive learning, which controls the drift of individual source domains and reduces the deviation between the representations learned by the local and global models. Experiments show that the proposed method outperforms mainstream UMDA methods. Moreover, it is robust to negative transfer, making it suitable for many practical FL applications.
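The smoothed voting and similarity-based teacher weighting described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the temperature parameter `T`, the use of cosine similarity to the consensus, and the softmax weighting are assumptions made for illustration only.

```python
import numpy as np

def smooth_confidences(probs, T=2.0):
    """Temperature-smooth each teacher's class-probability vector so that
    overconfident (near one-hot) teachers dominate the vote less."""
    logits = np.log(np.clip(probs, 1e-12, None)) / T
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

def consensus_and_weights(teacher_probs, T=2.0):
    """teacher_probs: (n_teachers, n_classes) predictions for one target sample.
    Returns smoothed consensus knowledge and per-teacher weights based on the
    similarity between each teacher's output and the consensus, so that
    irrelevant or malicious teachers are down-weighted."""
    smoothed = smooth_confidences(teacher_probs, T)
    consensus = smoothed.mean(axis=0)  # soft-voted consensus knowledge
    # cosine similarity of each teacher's output to the consensus (assumed metric)
    sims = (smoothed @ consensus) / (
        np.linalg.norm(smoothed, axis=1) * np.linalg.norm(consensus) + 1e-12)
    weights = np.exp(sims) / np.exp(sims).sum()  # softmax over similarities
    return consensus, weights
```

With this sketch, a teacher whose predictions disagree sharply with the soft-voted consensus (e.g., a malicious or irrelevant source domain) receives a lower weight in subsequent distillation, which is the robustness behavior the abstract describes.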
Source Journal: IEEE Transactions on Neural Networks and Learning Systems (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, HARDWARE & ARCHITECTURE)
CiteScore: 23.80
Self-citation rate: 9.60%
Articles per year: 2102
Review time: 3-8 weeks
Journal description: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.