Evaluating Security and Robustness for Split Federated Learning Against Poisoning Attacks

IEEE Transactions on Information Forensics and Security · Q1, Computer Science, Theory & Methods (Impact Factor 8.0) · Pub Date: 2024-11-04 · DOI: 10.1109/TIFS.2024.3490861
Xiaodong Wu;Henry Yuan;Xiangman Li;Jianbing Ni;Rongxing Lu
Volume 20, pp. 175-190 · Journal Article · IEEE Xplore: https://ieeexplore.ieee.org/document/10741585/
Citations: 0

Abstract

Split federated learning (SFL) is a recently proposed distributed collaborative learning architecture that integrates federated learning (FL) with split learning (SL), offering an ingenious solution for safeguarding privacy in resource-limited environments. Despite the compelling potential of SFL and its appealing attributes, its robustness remains uncharted territory. In this paper, we investigate the security and robustness of SFL, with a specific focus on its susceptibility to malicious client-driven poisoning attacks. Specifically, we study the weaknesses of SFL against the well-known poisoning attacks designed for FL, such as dataset poisoning, weight poisoning, and label poisoning. We also introduce a novel type of poisoning attack tailored for SFL, named smash poisoning, and evaluate the robustness against smash poisoning attacks and advanced hybrid attacks (DatasetSmash, LabelSmash, and WeightSmash) that amalgamate smash poisoning with the other three methods for FL. By simulating these attacks across diverse domains over four datasets, we find that most of these attacks (including weight, WeightSmash, and LabelSmash poisoning) can disrupt the converged models with straightforward poisoning actions or have a persistent negative influence on model accuracy even after the termination of the attacks. Furthermore, our findings reveal that the robustness of SFL can be augmented by strategically adjusting the system parameters, such as client quantity, bottleneck size, or split type. Finally, we verify the effectiveness of typical defense mechanisms against poisoning attacks intended for FL and design a new defense strategy that filters out malicious smashed data to improve the robustness of SFL. We observe that the adoption of properly chosen defense mechanisms is beneficial in decreasing the security risks of SFL, but entirely eliminating the impacts of poisoning attacks in SFL is still challenging.
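The abstract's central mechanics can be sketched concretely: in SFL each client runs only the layers up to the cut point and sends intermediate "smashed" activations to the server; a smash-poisoning client corrupts those activations before sending, and a defense can filter anomalous smashed data on the server side. The toy NumPy sketch below is an illustration only, not the paper's implementation: the activation-scaling attack, the median-norm filter, and all dimensions and thresholds are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class Client:
    """Client-side half of the split model (layers up to the cut point)."""
    def __init__(self, in_dim, cut_dim, malicious=False, scale=50.0):
        self.W = rng.normal(0.0, 0.1, size=(in_dim, cut_dim))
        self.malicious = malicious
        self.scale = scale  # hypothetical attack strength

    def forward(self, x):
        smashed = relu(x @ self.W)  # "smashed data" sent to the server
        if self.malicious:
            # smash poisoning (illustrative): inflate activations so the
            # server-side update computed from them is corrupted
            smashed = smashed * self.scale
        return smashed

def filter_smashed(batches, factor=3.0):
    """Server-side defense sketch (an assumed, simple filter): drop smashed
    batches whose norm is far from the median norm across clients."""
    norms = np.array([np.linalg.norm(b) for b in batches])
    med = np.median(norms)
    keep = (norms <= factor * med) & (norms >= med / factor)
    return [b for b, ok in zip(batches, keep) if ok], keep

# four honest clients and one smash-poisoning client
clients = [Client(8, 4) for _ in range(4)] + [Client(8, 4, malicious=True)]
data = [rng.normal(size=(16, 8)) for _ in clients]  # each client's local batch
smashed = [c.forward(x) for c, x in zip(clients, data)]
kept, mask = filter_smashed(smashed)
print(mask)  # the last (malicious) client's smashed batch is dropped
```

In this sketch, changing `cut_dim` (the bottleneck size) or the number of clients changes how much a single poisoned contribution sways the server-side update, which loosely mirrors the paper's observation that client quantity, bottleneck size, and split type affect SFL robustness.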
Source journal: IEEE Transactions on Information Forensics and Security (Engineering: Electrical & Electronic)
CiteScore: 14.40 · Self-citation rate: 7.40% · Articles per year: 234 · Review time: 6.5 months
Journal description: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.