FedMUA: Exploring the Vulnerabilities of Federated Learning to Malicious Unlearning Attacks

IF 8.0 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Theory & Methods) · IEEE Transactions on Information Forensics and Security · Pub Date: 2025-01-17 · DOI: 10.1109/TIFS.2025.3531141
Jian Chen;Zehui Lin;Wanyu Lin;Wenlong Shi;Xiaoyan Yin;Di Wang
{"title":"FedMUA: Exploring the Vulnerabilities of Federated Learning to Malicious Unlearning Attacks","authors":"Jian Chen;Zehui Lin;Wanyu Lin;Wenlong Shi;Xiaoyan Yin;Di Wang","doi":"10.1109/TIFS.2025.3531141","DOIUrl":null,"url":null,"abstract":"Recently, the practical needs of “the right to be forgotten” in federated learning gave birth to a paradigm known as federated unlearning, which enables the server to forget personal data upon the client’s removal request. Existing studies on federated unlearning have primarily focused on efficiently eliminating the influence of requested data from the client’s model without retraining from scratch, however, they have rarely doubted the reliability of the global model posed by the discrepancy between its prediction performance before and after unlearning. To bridge this gap, we take the first step by introducing a novel malicious unlearning attack dubbed FedMUA, aiming to unveil potential vulnerabilities emerging from federated learning during the unlearning process. Specifically, clients may act as attackers by crafting malicious unlearning requests to manipulate the prediction behavior of the global model. The crux of FedMUA is to mislead the global model into unlearning more information associated with the influential samples for the target sample than anticipated, thus inducing adverse effects on target samples from other clients. To achieve this, we design a novel two-step method, known as Influential Sample Identification and Malicious Unlearning Generation, to identify and subsequently generate malicious feature unlearning requests within the influential samples. By doing so, we can significantly alter the predictions pertaining to the target sample by initiating the malicious feature unlearning requests, leading to the deliberate manipulation for the user adversely. Additionally, we design a new defense mechanism that is highly resilient against malicious unlearning attacks. Extensive experiments on three realistic datasets reveal that FedMUA effectively induces misclassification on target samples and can achieve an 80% attack success rate by triggering only 0.3% malicious unlearning requests.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"1665-1678"},"PeriodicalIF":8.0000,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10844903/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Recently, the practical need for “the right to be forgotten” in federated learning has given rise to a paradigm known as federated unlearning, which enables the server to forget personal data upon a client’s removal request. Existing studies on federated unlearning have primarily focused on efficiently eliminating the influence of requested data from the client’s model without retraining from scratch; however, they have rarely questioned the reliability of the global model, as exposed by the discrepancy between its prediction performance before and after unlearning. To bridge this gap, we take the first step by introducing a novel malicious unlearning attack, dubbed FedMUA, that unveils potential vulnerabilities of federated learning during the unlearning process. Specifically, clients may act as attackers by crafting malicious unlearning requests to manipulate the prediction behavior of the global model. The crux of FedMUA is to mislead the global model into unlearning more information associated with the samples influential for a target sample than anticipated, thereby inducing adverse effects on target samples owned by other clients. To achieve this, we design a novel two-step method, consisting of Influential Sample Identification and Malicious Unlearning Generation, which identifies the influential samples and then generates malicious feature unlearning requests within them. By initiating these malicious feature unlearning requests, the attacker can significantly alter the predictions on the target sample, deliberately manipulating the model to the victim user’s detriment. Additionally, we design a new defense mechanism that is highly resilient against malicious unlearning attacks. Extensive experiments on three realistic datasets reveal that FedMUA effectively induces misclassification on target samples, achieving an 80% attack success rate while triggering only 0.3% malicious unlearning requests.
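To make the two-step pipeline concrete, below is a minimal sketch of how an attacking client might rank its local samples by influence on a victim's target sample and package the top-ranked ones into an unlearning request. It approximates influence by first-order gradient alignment on a toy logistic model; the paper's actual influence estimator and request format are not given in the abstract, and every name in the sketch (grad_logloss, craft_unlearning_request, etc.) is hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logloss(w, x, y):
    """Gradient of the logistic loss for a single sample (x, y)."""
    p = sigmoid(x @ w)
    return (p - y) * x

def influence_scores(w, X_local, y_local, x_target, y_target):
    """Step 1 (Influential Sample Identification), as a heuristic:
    score each local sample by the alignment of its loss gradient with
    the target sample's gradient; higher alignment suggests that
    unlearning it will shift the target prediction more."""
    g_t = grad_logloss(w, x_target, y_target)
    return np.array([grad_logloss(w, x, y) @ g_t
                     for x, y in zip(X_local, y_local)])

def craft_unlearning_request(w, X_local, y_local, x_target, y_target, k=5):
    """Step 2 (Malicious Unlearning Generation), simplified: put the
    indices of the top-k most influential local samples into a single
    unlearning request submitted to the server."""
    scores = influence_scores(w, X_local, y_local, x_target, y_target)
    return np.argsort(scores)[-k:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 10, 200
    w = rng.normal(size=d)                   # stand-in for the global model
    X = rng.normal(size=(n, d))              # attacker's local features
    y = rng.integers(0, 2, size=n)           # attacker's local labels
    x_t, y_t = rng.normal(size=d), 1         # victim's target sample
    print("malicious request:", craft_unlearning_request(w, X, y, x_t, y_t))
```

In the real attack the request would be processed by the server's federated unlearning procedure rather than inspected locally; the sketch only illustrates why removing gradient-aligned samples, rather than random ones, concentrates the unlearning effect on the victim's target sample.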
Source journal
IEEE Transactions on Information Forensics and Security (Engineering: Electrical & Electronic)
CiteScore: 14.40
Self-citation rate: 7.40%
Articles published: 234
Review time: 6.5 months
Journal description: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.
Latest articles in this journal
A Novel Perspective on Gradient Defense: Layer-Specific Protection Against Privacy Leakage
Cert-SSBD: Certified Backdoor Defense with Sample-Specific Smoothing Noises
GUARD: A Unified Open-Set and Closed-Set Gait Recognition Framework via Feature Reconstruction on Wi-Fi CSI
VoIP Call Identification via a Dual-Level 1D-CNN with Frame and Utterance Features
Risk-Aware Privacy Preservation for LLM Inference