MUFTI: Multi-Domain Distillation-Based Heterogeneous Federated Continuous Learning

IEEE Transactions on Information Forensics and Security, vol. 20, pp. 2721-2733 · JCR Q1 (Computer Science, Theory & Methods), Impact Factor 8.0 · Published 2025-02-14 · DOI: 10.1109/TIFS.2025.3542246
Keke Gai, Zijun Wang, Jing Yu, Liehuang Zhu

Abstract

Federated Learning (FL) is an alternative approach that facilitates training machine learning models on distributed users' data while preserving privacy. However, clients have different local model structures and most local data are non-independent and identically distributed (non-IID), so FL encounters heterogeneity and catastrophic forgetting issues as clients continuously accumulate new knowledge. In this work, we propose a scheme called MUFTI (Multi-Domain Distillation-based Heterogeneous Federated ConTInuous Learning). On one hand, we extend domain adaptation to FL by extracting features on unlabeled public datasets to obtain feature representations for collaborative training, narrowing the distance between the feature outputs of different models on the same sample. On the other hand, we propose a combined knowledge distillation method to address catastrophic forgetting. Within a single task, dual-domain distillation is used to avoid forgetting data across different domains; for cross-task learning in the task stream, the logits output of the previous model serves as the teacher to avoid forgetting old tasks. Experimental results show that MUFTI achieves better accuracy and robustness compared with state-of-the-art methods. The evaluation also demonstrates that MUFTI performs well in handling task-increment issues, reducing catastrophic forgetting, and achieving trade-offs between multiple objectives.
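The two distillation objectives described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the loss forms, the temperature value, and the function names are assumptions chosen to mirror the standard formulations of feature alignment and logits-based knowledge distillation.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis (max-shifted for stability)."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def feature_alignment_loss(feat_a, feat_b):
    """Within-task alignment (sketch): mean squared distance between the
    feature representations two heterogeneous client models produce for
    the same unlabeled public sample."""
    return float(np.mean((np.asarray(feat_a, dtype=float)
                          - np.asarray(feat_b, dtype=float)) ** 2))

def cross_task_kd_loss(student_logits, teacher_logits, T=2.0):
    """Cross-task distillation (sketch): KL(teacher || student) on
    temperature-softened logits, with the previous task's model as teacher."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

When the student reproduces the teacher's logits (or the two models' features coincide on a public sample), both losses vanish; otherwise they penalize the discrepancy, which is the mechanism the abstract relies on to transfer knowledge across heterogeneous models and across tasks.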