Noise-Robust Federated Learning via Interclient Co-Distillation

IF 8.9 | CAS Tier 1, Computer Science | JCR Q1, Computer Science, Artificial Intelligence | IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2025-03-19 | DOI: 10.1109/TNNLS.2025.3546903
Liang Gao;Li Li;Yingwen Chen;Shaojing Fu;Dongsheng Wang;Siwei Wang;Cheng-Zhong Xu;Ming Xu
{"title":"Noise-Robust Federated Learning via Interclient Co-Distillation","authors":"Liang Gao;Li Li;Yingwen Chen;Shaojing Fu;Dongsheng Wang;Siwei Wang;Cheng-Zhong Xu;Ming Xu","doi":"10.1109/TNNLS.2025.3546903","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) is a new learning paradigm that enables multiple clients to collaboratively train a high-performance model while preserving user privacy. However, the effectiveness of FL heavily relies on the availability of accurately labeled data, which can be challenging to obtain in real-world scenarios. To address this issue and robustly train shared models using distributed noisy labeled data, we propose <italic>FedDQ</i>, a noise-robust FL framework that utilizes co-distillation and quality-aware aggregation techniques. <italic>FedDQ</i> incorporates two key features: a noise-adaptive training strategy and an efficient label-correcting mechanism. The noise-adaptive training strategy relies on the estimation of labels’ noise levels to dynamically adjust clients’ training engagement, which mitigates the impact of wrong labels while efficiently exploring features from clean data. In addition, <italic>FedDQ</i> designs a two-head network and employs it for co-distillation. The co-distillation strategy facilitates knowledge transfer among clients to share the representational capabilities. Besides, <italic>FedDQ</i> enhances label correction to rectify improper labels through co-filtering and label correction. The experimental results demonstrate the effectiveness of <italic>FedDQ</i> in improving model performance and handling noisy data challenges in FL settings. On the CIFAR-100 dataset with noisy labels, <italic>FedDQ</i> exhibits a notable improvement of up to 32.4% compared to the baseline method.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"36 8","pages":"14343-14357"},"PeriodicalIF":8.9000,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10934047/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning (FL) is a learning paradigm that enables multiple clients to collaboratively train a high-performance model while preserving user privacy. However, the effectiveness of FL heavily relies on the availability of accurately labeled data, which can be challenging to obtain in real-world scenarios. To address this issue and robustly train shared models on distributed, noisily labeled data, we propose FedDQ, a noise-robust FL framework that utilizes co-distillation and quality-aware aggregation techniques. FedDQ incorporates two key features: a noise-adaptive training strategy and an efficient label-correcting mechanism. The noise-adaptive training strategy estimates each client's label noise level to dynamically adjust its training engagement, mitigating the impact of incorrect labels while efficiently exploiting features from clean data. In addition, FedDQ designs a two-head network and employs it for co-distillation; this co-distillation strategy facilitates knowledge transfer among clients, allowing them to share representational capabilities. Moreover, FedDQ rectifies improperly labeled samples through co-filtering followed by label correction. The experimental results demonstrate the effectiveness of FedDQ in improving model performance and handling noisy data in FL settings. On the CIFAR-100 dataset with noisy labels, FedDQ achieves an improvement of up to 32.4% over the baseline method.
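The abstract is high-level, so the sketch below illustrates, under explicit assumptions, what a two-head co-distillation model and quality-aware aggregation could look like. It is not the paper's implementation: `TwoHeadNet`, `co_distillation_loss`, `quality_aware_aggregate`, the loss form, and the `1 - noise_level` weighting rule are all hypothetical names and choices made for illustration.

```python
# A minimal, hypothetical PyTorch sketch (not the authors' released code) of two
# mechanisms the abstract describes: a shared-backbone two-head client model
# trained with mutual ("co-") distillation, and server-side quality-aware
# aggregation that down-weights clients with noisier labels.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoHeadNet(nn.Module):
    """Shared feature extractor with two classification heads."""

    def __init__(self, num_classes: int = 100):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head_a = nn.Linear(32, num_classes)
        self.head_b = nn.Linear(32, num_classes)

    def forward(self, x):
        z = self.backbone(x)
        return self.head_a(z), self.head_b(z)


def co_distillation_loss(logits_a, logits_b, targets, temperature=2.0):
    """Each head fits the (possibly noisy) labels and additionally distills
    from the other head's softened, detached predictions."""
    ce = F.cross_entropy(logits_a, targets) + F.cross_entropy(logits_b, targets)
    kd_a = F.kl_div(F.log_softmax(logits_a / temperature, dim=1),
                    F.softmax(logits_b / temperature, dim=1).detach(),
                    reduction="batchmean")
    kd_b = F.kl_div(F.log_softmax(logits_b / temperature, dim=1),
                    F.softmax(logits_a / temperature, dim=1).detach(),
                    reduction="batchmean")
    return ce + kd_a + kd_b


def quality_aware_aggregate(client_states, noise_levels):
    """Average client state dicts, weighting each client by an estimate of
    how clean its labels are (a simple stand-in for FedDQ's rule)."""
    weights = torch.tensor([1.0 - n for n in noise_levels])
    weights = weights / weights.sum()
    agg = {k: torch.zeros_like(v, dtype=torch.float32)
           for k, v in client_states[0].items()}
    for w, state in zip(weights, client_states):
        for k, v in state.items():
            agg[k] += w * v.float()
    return agg
```

Here `noise_levels` would come from an on-device estimate of each client's label noise (e.g., from per-sample loss statistics); how FedDQ actually estimates noise, couples it to training engagement, and corrects labels is detailed in the full paper.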
Source journal: IEEE Transactions on Neural Networks and Learning Systems
Categories: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture
CiteScore: 23.80
Self-citation rate: 9.60%
Annual article count: 2102
Review time: 3-8 weeks
Journal description: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.