Backdoor Attacks and Defenses in Federated Learning: State-of-the-Art, Taxonomy, and Future Directions

Impact Factor: 10.9 | JCR Q1 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) | CAS Tier 1 (Computer Science) | IEEE Wireless Communications | Pub Date: 2023-04-01 | DOI: 10.1109/MWC.017.2100714
Xueluan Gong, Yanjiao Chen, Qian Wang, Weihan Kong
{"title":"Backdoor Attacks and Defenses in Federated Learning: State-of-the-Art, Taxonomy, and Future Directions","authors":"Xueluan Gong, Yanjiao Chen, Qian Wang, Weihan Kong","doi":"10.1109/MWC.017.2100714","DOIUrl":null,"url":null,"abstract":"The federated learning framework is designed for massively distributed training of deep learning models among thousands of participants without compromising the privacy of their training datasets. The training dataset across participants usually has heterogeneous data distributions. Besides, the central server aggregates the updates provided by different parties, but has no visibility into how such updates are created. The inherent characteristics of federated learning may incur a severe security concern. The malicious participants can upload poisoned updates to introduce backdoored functionality into the global model, in which the backdoored global model will misclassify all the malicious images (i.e., attached with the backdoor trigger) into a false label but will behave normally in the absence of the backdoor trigger. In this work, we present a comprehensive review of the state-of-the-art backdoor attacks and defenses in federated learning. We classify the existing backdoor attacks into two categories: data poisoning attacks and model poisoning attacks, and divide the defenses into anomaly updates detection, robust federated training, and backdoored model restoration. We give a detailed comparison of both attacks and defenses through experiments. Lastly, we pinpoint a variety of potential future directions of both backdoor attacks and defenses in the framework of federated learning.","PeriodicalId":13342,"journal":{"name":"IEEE Wireless Communications","volume":"30 1","pages":"114-121"},"PeriodicalIF":10.9000,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Wireless Communications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/MWC.017.2100714","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 15

Abstract

The federated learning framework is designed for massively distributed training of deep learning models among thousands of participants without compromising the privacy of their training datasets. The training data across participants usually follow heterogeneous distributions. Moreover, the central server aggregates the updates provided by different parties but has no visibility into how those updates are created. These inherent characteristics of federated learning raise severe security concerns. Malicious participants can upload poisoned updates that introduce backdoor functionality into the global model: the backdoored global model misclassifies all malicious inputs (i.e., images stamped with the backdoor trigger) into an attacker-chosen false label but behaves normally in the absence of the trigger. In this work, we present a comprehensive review of state-of-the-art backdoor attacks and defenses in federated learning. We classify existing backdoor attacks into two categories, data poisoning attacks and model poisoning attacks, and divide the defenses into anomalous-update detection, robust federated training, and backdoored-model restoration. We compare the attacks and defenses in detail through experiments. Lastly, we pinpoint a variety of potential future directions for both backdoor attacks and defenses in the federated learning framework.
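To make the data-poisoning category concrete, below is a minimal Python/NumPy sketch of a trigger-based poisoning step a malicious participant might apply to its local batch before training. The square-patch trigger, its corner placement, the target label, and the poison fraction are illustrative assumptions, not the specific design of any attack surveyed in the paper.

```python
import numpy as np

def stamp_trigger(image, trigger_value=1.0, size=3):
    # Illustrative trigger: a small bright square in the bottom-right
    # corner (an assumed pattern, not a design from the paper).
    poisoned = image.copy()
    poisoned[-size:, -size:] = trigger_value
    return poisoned

def poison_batch(images, labels, target_label, poison_fraction=0.2, seed=0):
    # Stamp the trigger onto a fraction of the local batch and relabel
    # those samples with the attacker-chosen target label, so the local
    # model learns to associate the trigger pattern with that label.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_fraction * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label
    return images, labels
```

Training on such a batch leaves clean-input accuracy largely intact while binding the trigger pattern to the target label, which is why the resulting poisoned update is hard to spot from validation accuracy alone.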
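On the defense side, the robust-federated-training idea covered by the survey can be sketched as a server-side aggregation rule that limits any single client's influence. The sketch below (helper names and the clipping threshold are assumptions for illustration) clips each client's update to an L2-norm bound and replaces plain averaging with a coordinate-wise median, so a single heavily scaled poisoned update cannot dominate the aggregate.

```python
import numpy as np

def clip_update(update, max_norm=1.0):
    # Scale an update down so its L2 norm is at most max_norm; an
    # honest update within the bound passes through unchanged.
    norm = np.linalg.norm(update)
    return update * min(1.0, max_norm / (norm + 1e-12))

def robust_aggregate(client_updates, max_norm=1.0):
    # Clip every client's (flattened) update, then take the
    # coordinate-wise median instead of the mean, bounding the effect
    # any single poisoned update can have on the global model.
    clipped = np.stack([clip_update(u, max_norm) for u in client_updates])
    return np.median(clipped, axis=0)

# One server round (sketch): global_weights += robust_aggregate(updates)
```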
Source Journal
IEEE Wireless Communications (Engineering & Technology: Telecommunications)

CiteScore: 24.20
Self-citation rate: 1.60%
Articles per year: 183
Review time: 6-12 weeks
Journal Description: IEEE Wireless Communications is tailored for professionals within the communications and networking communities. It addresses technical and policy issues associated with personalized, location-independent communications across various media and protocol layers. Encompassing both wired and wireless communications, the magazine explores the intersection of computing, the mobility of individuals, communicating devices, and personalized services. Every issue of this interdisciplinary publication presents high-quality articles delving into the revolutionary technological advances in personal, location-independent communications, and computing. IEEE Wireless Communications provides an insightful platform for individuals engaged in these dynamic fields, offering in-depth coverage of significant developments in the realm of communication technology.