{"title":"Noise-Robust Federated Learning via Interclient Co-Distillation","authors":"Liang Gao;Li Li;Yingwen Chen;Shaojing Fu;Dongsheng Wang;Siwei Wang;Cheng-Zhong Xu;Ming Xu","doi":"10.1109/TNNLS.2025.3546903","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) is a new learning paradigm that enables multiple clients to collaboratively train a high-performance model while preserving user privacy. However, the effectiveness of FL heavily relies on the availability of accurately labeled data, which can be challenging to obtain in real-world scenarios. To address this issue and robustly train shared models using distributed noisy labeled data, we propose <italic>FedDQ</i>, a noise-robust FL framework that utilizes co-distillation and quality-aware aggregation techniques. <italic>FedDQ</i> incorporates two key features: a noise-adaptive training strategy and an efficient label-correcting mechanism. The noise-adaptive training strategy relies on the estimation of labels’ noise levels to dynamically adjust clients’ training engagement, which mitigates the impact of wrong labels while efficiently exploring features from clean data. In addition, <italic>FedDQ</i> designs a two-head network and employs it for co-distillation. The co-distillation strategy facilitates knowledge transfer among clients to share the representational capabilities. Besides, <italic>FedDQ</i> enhances label correction to rectify improper labels through co-filtering and label correction. The experimental results demonstrate the effectiveness of <italic>FedDQ</i> in improving model performance and handling noisy data challenges in FL settings. On the CIFAR-100 dataset with noisy labels, <italic>FedDQ</i> exhibits a notable improvement of up to 32.4% compared to the baseline method.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"36 8","pages":"14343-14357"},"PeriodicalIF":8.9000,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10934047/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Federated learning (FL) is a new learning paradigm that enables multiple clients to collaboratively train a high-performance model while preserving user privacy. However, the effectiveness of FL relies heavily on accurately labeled data, which can be difficult to obtain in real-world scenarios. To address this issue and robustly train shared models on distributed, noisily labeled data, we propose FedDQ, a noise-robust FL framework built on co-distillation and quality-aware aggregation. FedDQ incorporates two key features: a noise-adaptive training strategy and an efficient label-correction mechanism. The noise-adaptive training strategy estimates the noise level of each client's labels and dynamically adjusts that client's training engagement, mitigating the impact of incorrect labels while efficiently exploiting features from clean data. In addition, FedDQ introduces a two-head network and employs it for co-distillation; this co-distillation strategy facilitates knowledge transfer among clients so that they share representational capabilities. FedDQ also rectifies improper labels through co-filtering and label correction. Experimental results demonstrate the effectiveness of FedDQ in improving model performance and handling noisy-label challenges in FL settings. On the CIFAR-100 dataset with noisy labels, FedDQ achieves an improvement of up to 32.4% over the baseline method.
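The abstract names two concrete mechanisms: a two-head network trained by co-distillation on each client, and quality-aware aggregation on the server. The paper's actual implementation is not reproduced here; the following is a minimal, hypothetical PyTorch-style sketch of what such components could look like. All names (TwoHeadNet, co_distillation_loss, quality_aware_aggregate) and hyperparameters (alpha, temp) are illustrative assumptions, not the authors' code.

# Illustrative sketch only (not the authors' implementation).
# A shared backbone feeds two classifier heads; each head is trained on the
# (possibly noisy) hard labels while also being distilled toward the other
# head's detached predictions, so representational knowledge is shared
# without fully trusting either head's view of a noisy label.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadNet(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone  # shared feature extractor
        self.head_a = nn.Linear(feat_dim, num_classes)
        self.head_b = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        z = self.backbone(x)
        return self.head_a(z), self.head_b(z)

def co_distillation_loss(logits_a, logits_b, labels, alpha=0.5, temp=2.0):
    # Supervised terms on the (possibly noisy) labels.
    ce = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    # Mutual distillation: each head matches the other's softened,
    # detached distribution (KL divergence in both directions).
    log_pa = F.log_softmax(logits_a / temp, dim=1)
    log_pb = F.log_softmax(logits_b / temp, dim=1)
    kd = (F.kl_div(log_pa, F.softmax(logits_b.detach() / temp, dim=1),
                   reduction="batchmean")
          + F.kl_div(log_pb, F.softmax(logits_a.detach() / temp, dim=1),
                     reduction="batchmean"))
    return ce + alpha * (temp ** 2) * kd

def quality_aware_aggregate(state_dicts, quality_scores):
    # Hypothetical quality-aware aggregation: a weighted FedAvg that scales
    # each client's update by an estimated label-quality score rather than
    # by sample count alone.
    total = sum(quality_scores)
    weights = [q / total for q in quality_scores]
    return {k: sum(w * sd[k].float() for w, sd in zip(weights, state_dicts))
            for k in state_dicts[0]}

Under this reading, a client with a high estimated noise level contributes less to the global model both locally (its heads lean more on each other's soft predictions than on the hard labels) and globally (its update receives a smaller aggregation weight); the exact weighting scheme in the paper may differ.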
Journal Introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.