FedGhost: Data-Free Model Poisoning Enhancement in Federated Learning

IEEE Transactions on Information Forensics and Security · Impact Factor: 8.0 · CAS Rank 1 (Computer Science) · JCR Q1 (Computer Science, Theory & Methods) · Publication date: 2025-02-07 · DOI: 10.1109/TIFS.2025.3539087
Zhuoran Ma;Xinyi Huang;Zhuzhu Wang;Zhan Qin;Xiangyu Wang;Jianfeng Ma
{"title":"FedGhost: Data-Free Model Poisoning Enhancement in Federated Learning","authors":"Zhuoran Ma;Xinyi Huang;Zhuzhu Wang;Zhan Qin;Xiangyu Wang;Jianfeng Ma","doi":"10.1109/TIFS.2025.3539087","DOIUrl":null,"url":null,"abstract":"FL is vulnerable to model poisoning attacks due to the invisibility of local data and the decentralized nature of FL training. The adversary attempts to maliciously manipulate local model gradients to compromise the global model (i.e., victim model). Commonly-studied model poisoning attacks heavily depend on accessing additional knowledge, such as local data and the aggregation algorithm from the victim model, which easily encounter practical obstacles due to limited adversarial knowledge. In this paper, we first reveal that aggregated gradients in FL can serve as an attack carrier, exposing the latent knowledge of the victim model. In particular, we propose a data-free model poisoning attack named FedGhost, which aims to redirect the training objective of FL towards the adversary’s objective without any auxiliary information. In FedGhost, we design a black-box adaptive optimization algorithm to dynamically adjust the perturbation factor for malicious gradients, maximizing the poisoning impact of FL. Experimental results on five datasets in IID and Non-IID FL settings demonstrate that FedGhost achieves the highest attack success rate, outperforming other state-of-the-art model poisoning attacks by more than <inline-formula> <tex-math>$10\\%-60\\%$ </tex-math></inline-formula>.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"2096-2108"},"PeriodicalIF":8.0000,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10877716/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning (FL) is vulnerable to model poisoning attacks due to the invisibility of local data and the decentralized nature of FL training. The adversary attempts to maliciously manipulate local model gradients to compromise the global model (i.e., the victim model). Commonly studied model poisoning attacks depend heavily on access to additional knowledge, such as local data and the victim model's aggregation algorithm, and therefore encounter practical obstacles when adversarial knowledge is limited. In this paper, we first reveal that aggregated gradients in FL can serve as an attack carrier, exposing the latent knowledge of the victim model. In particular, we propose a data-free model poisoning attack named FedGhost, which aims to redirect the training objective of FL towards the adversary's objective without any auxiliary information. In FedGhost, we design a black-box adaptive optimization algorithm that dynamically adjusts the perturbation factor for malicious gradients, maximizing the poisoning impact on FL. Experimental results on five datasets in IID and non-IID FL settings demonstrate that FedGhost achieves the highest attack success rate, outperforming other state-of-the-art model poisoning attacks by 10%-60%.
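The abstract describes the mechanism only at a high level: the attacker observes the aggregated gradient each round and uses it as black-box feedback to adapt a perturbation factor applied to its malicious update. The sketch below illustrates that general idea under simplifying assumptions; it is not the paper's algorithm, and all names and update rules (`craft_malicious_update`, `adapt_perturbation`, the sign-flip update, the multiplicative adaptation step) are hypothetical.

```python
import numpy as np

def craft_malicious_update(global_grad, gamma):
    """Illustrative malicious update: push against the benign direction,
    scaled by a perturbation factor gamma (hypothetical formulation)."""
    return -gamma * global_grad

def adapt_perturbation(gamma, prev_agg, curr_agg, step=0.1, lo=0.1, hi=10.0):
    """Black-box style adaptation: if the aggregated gradient drifted toward
    the adversarial direction (poisoning took effect), increase gamma;
    otherwise back off. Uses only observable aggregated gradients."""
    drift = np.dot(curr_agg - prev_agg, -prev_agg)
    gamma = gamma * (1 + step) if drift > 0 else gamma * (1 - step)
    return float(np.clip(gamma, lo, hi))

# Toy federated rounds: the attacker needs no local data, only the
# aggregated gradient broadcast by the server each round.
rng = np.random.default_rng(0)
dim, gamma = 8, 1.0
prev_agg = rng.normal(size=dim)
for rnd in range(5):
    benign = [rng.normal(size=dim) for _ in range(4)]     # honest clients
    malicious = craft_malicious_update(prev_agg, gamma)    # attacker's update
    curr_agg = np.mean(benign + [malicious], axis=0)       # server aggregation
    gamma = adapt_perturbation(gamma, prev_agg, curr_agg)
    prev_agg = curr_agg
    print(f"round {rnd}: gamma={gamma:.3f}")
```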
Source journal
IEEE Transactions on Information Forensics and Security (Engineering: Electrical & Electronic)
CiteScore: 14.40
Self-citation rate: 7.40%
Articles published: 234
Average review time: 6.5 months
Journal overview: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.
Latest articles in this journal
HINHJ: Hierarchical Attention-Based Heterogeneous Graph Neural Network for DNS Hijacking Detection
A Distributed Multi-Agent Deep Reinforcement Learning-Based Anti-Jamming Approach for Mega LEO Constellations
Leveraging Angle of Arrival Estimation against Impersonation Attacks in Physical Layer Authentication
ModFuzz: Adaptive Module-level Fuzzing of Processors
FORCE: Byzantine-Resilient Decentralized Federated Learning via Game-Theoretic Contribution Aggregation