Enhancing Federated Learning Robustness Using Locally Benignity-Assessable Bayesian Dropout

IF 8.0 · JCR Q1 (Computer Science, Theory & Methods) · IEEE Transactions on Information Forensics and Security · Pub Date: 2025-01-30 · DOI: 10.1109/TIFS.2025.3536777
Jingjing Xue;Sheng Sun;Min Liu;Qi Li;Ke Xu
Volume: 20, Pages: 2464-2479
URL: https://ieeexplore.ieee.org/document/10858077/
Citations: 0

Abstract

Federated Learning (FL) has emerged as a privacy-preserving training paradigm that enables distributed devices to jointly learn a shared model without sharing raw data. However, inaccessible client-side data and unverifiable local training leave FL vulnerable to Byzantine attacks. Most defense strategies focus on penalizing malicious clients during server-side aggregation while ignoring client-side weight-unit poisoning assessment, and thus fail to maintain robustness and convergence in non-IID settings. In this paper, we propose Federated learning with Benignity-assessable Bayesian Dropout and variational Attention (FedBDA) to achieve locally robust training based on fine-grained benignity indicators and to guarantee global robustness over non-IID data. Specifically, FedBDA integrates the variational-inference interpretation of dropout into local training, where each client individually quantifies the benign degree of its weight units to determine a resilient dropping pattern for the local Bayesian model, enabling client-side robust training with Bayesian interpretability. To accommodate the variational distributions of local Bayesian models and globally assess their benign potential, we design a joint attention mechanism based on the Jensen-Shannon divergence among local, global, and median distributions for robust weighted aggregation. Theoretical analysis proves the robustness and convergence of FedBDA. We conduct extensive experiments on four benchmark datasets under five typical attacks, and the results demonstrate that FedBDA outperforms state-of-the-art approaches in both model performance and running efficiency.
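To make the aggregation idea concrete, the following is a minimal, illustrative sketch of divergence-weighted robust aggregation in the spirit described above: each client update is compared, via Jensen-Shannon divergence, against a coordinate-wise median reference, and updates that diverge strongly receive low attention weight. The softmax mapping from update vectors to discrete distributions, the temperature `beta`, and the `exp(-beta * div)` weighting are hypothetical simplifications for illustration; they are not the paper's actual FedBDA attention mechanism, which operates on the variational distributions of local Bayesian models.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def softmax(v):
    """Map an update vector to a discrete distribution (illustrative choice)."""
    e = np.exp(v - v.max())
    return e / e.sum()

def robust_aggregate(client_updates, beta=10.0):
    """Weight each client's update by exp(-beta * JS divergence) to a
    coordinate-wise median reference; beta is a hypothetical temperature."""
    U = np.stack(client_updates)
    ref = softmax(np.median(U, axis=0))   # robust reference distribution
    div = np.array([js_divergence(softmax(u), ref) for u in U])
    w = np.exp(-beta * div)               # low divergence -> high attention weight
    w /= w.sum()
    return (w[:, None] * U).sum(axis=0)
```

With three honest clients and one Byzantine client submitting a wildly different update, the divergence-weighted aggregate stays much closer to the honest updates than a plain average would.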
Source Journal

IEEE Transactions on Information Forensics and Security (Engineering: Electrical & Electronic)
CiteScore: 14.40
Self-citation rate: 7.40%
Articles per year: 234
Review time: 6.5 months
Journal description: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.
Latest articles in this journal:
- HINHJ: Hierarchical Attention-Based Heterogeneous Graph Neural Network for DNS Hijacking Detection
- A Distributed Multi-Agent Deep Reinforcement Learning-Based Anti-Jamming Approach for Mega LEO Constellations
- Leveraging Angle of Arrival Estimation against Impersonation Attacks in Physical Layer Authentication
- ModFuzz: Adaptive Module-level Fuzzing of Processors
- FORCE: Byzantine-Resilient Decentralized Federated Learning via Game-Theoretic Contribution Aggregation