AGIC: Approximate Gradient Inversion Attack on Federated Learning

Jin Xu, Chi Hong, Jiyue Huang, L. Chen, Jérémie Decouchant
{"title":"AGIC: Approximate Gradient Inversion Attack on Federated Learning","authors":"Jin Xu, Chi Hong, Jiyue Huang, L. Chen, Jérémie Decouchant","doi":"10.1109/SRDS55811.2022.00012","DOIUrl":null,"url":null,"abstract":"Federated learning is a private-by-design distributed learning paradigm where clients train local models on their own data before a central server aggregates their local updates to compute a global model. Depending on the aggregation method used, the local updates are either the gradients or the weights of local learning models, e.g., FedAvg aggregates model weights. Unfortunately, recent reconstruction attacks apply a gradient inversion optimization on the gradient update of a single mini-batch to reconstruct the private data used by clients during training. As the state-of-the-art reconstruction attacks solely focus on single update, realistic adversarial scenarios are over-looked, such as observation across multiple updates and updates trained from multiple mini-batches. A few studies consider a more challenging adversarial scenario where only model updates based on multiple mini-batches are observable, and resort to computationally expensive simulation to untangle the underlying samples for each local step. In this paper, we propose AGIC, a novel Approximate Gradient Inversion Attack that efficiently and effectively reconstructs images from both model or gradient updates, and across multiple epochs. In a nutshell, AGIC (i) approximates gradient updates of used training samples from model updates to avoid costly simulation procedures, (ii) leverages gradient/model updates collected from multiple epochs, and (iii) assigns increasing weights to layers with respect to the neural network structure for reconstruction quality. We extensively evaluate AGIC on three datasets, namely CIFAR-10, CIFAR-100 and ImageNet. Our results show that AGIC increases the peak signal-to-noise ratio (PSNR) by up to 50% compared to two representative state-of-the-art gradient inversion attacks. Furthermore, AGIC is faster than the state-of-the-art simulation-based attack, e.g., it is 5x faster when attacking FedAvg with 8 local steps in between model updates.","PeriodicalId":143115,"journal":{"name":"2022 41st International Symposium on Reliable Distributed Systems (SRDS)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 41st International Symposium on Reliable Distributed Systems (SRDS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SRDS55811.2022.00012","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Federated learning is a private-by-design distributed learning paradigm where clients train local models on their own data before a central server aggregates their local updates to compute a global model. Depending on the aggregation method used, the local updates are either the gradients or the weights of the local learning models; e.g., FedAvg aggregates model weights. Unfortunately, recent reconstruction attacks apply a gradient inversion optimization to the gradient update of a single mini-batch to reconstruct the private data used by clients during training. As state-of-the-art reconstruction attacks solely focus on a single update, realistic adversarial scenarios are overlooked, such as observations across multiple updates and updates trained from multiple mini-batches. A few studies consider a more challenging adversarial scenario where only model updates based on multiple mini-batches are observable, and resort to computationally expensive simulations to untangle the underlying samples used in each local step. In this paper, we propose AGIC, a novel Approximate Gradient Inversion Attack that efficiently and effectively reconstructs images from either model or gradient updates, and across multiple epochs. In a nutshell, AGIC (i) approximates the gradient updates of the training samples used from model updates, avoiding costly simulation procedures, (ii) leverages gradient/model updates collected over multiple epochs, and (iii) assigns increasing weights to layers according to their position in the neural network to improve reconstruction quality. We extensively evaluate AGIC on three datasets: CIFAR-10, CIFAR-100, and ImageNet. Our results show that AGIC increases the peak signal-to-noise ratio (PSNR) by up to 50% compared to two representative state-of-the-art gradient inversion attacks. Furthermore, AGIC is faster than the state-of-the-art simulation-based attack; e.g., it is 5x faster when attacking FedAvg with 8 local steps in between model updates.
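The abstract only sketches the attack at a high level. To make ideas (i) and (iii) concrete, the following is a minimal PyTorch sketch, not the paper's implementation: it assumes FedAvg clients run plain SGD with a known, fixed learning rate, uses a simple L2 gradient-matching loss, and invents its function and parameter names (`approximate_gradient`, `invert`, `layer_weights`); the paper's actual matching distance, layer-weight schedule, and multi-epoch combination (idea (ii)) are not reproduced here.

```python
# Minimal sketch of a gradient-inversion attack with AGIC-style
# gradient approximation, assuming a PyTorch setting. All names are
# illustrative, not taken from the paper's implementation.
import torch
import torch.nn.functional as F

def approximate_gradient(w_before, w_after, lr, num_local_steps):
    """Idea (i): approximate one averaged gradient from a FedAvg model
    update instead of simulating every local mini-batch step.
    Assumes plain SGD with a fixed learning rate on the client."""
    return [(wb - wa) / (lr * num_local_steps)
            for wb, wa in zip(w_before, w_after)]

def invert(model, target_grads, layer_weights, input_shape, num_classes,
           steps=2000, opt_lr=0.1):
    """Standard gradient-inversion loop: optimize a dummy image and a
    soft dummy label so the gradients they induce match the observed
    (or approximated) ones. Idea (iii): per-layer weights in the
    matching loss, larger for layers deeper in the network."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    optimizer = torch.optim.Adam([dummy_x, dummy_y], lr=opt_lr)
    params = list(model.parameters())
    for _ in range(steps):
        optimizer.zero_grad()
        # Soft-label cross entropy (supported since PyTorch 1.10).
        loss = F.cross_entropy(model(dummy_x), dummy_y.softmax(dim=-1))
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # L2 gradient-matching loss, weighted per layer; the paper's
        # exact distance and weighting schedule may differ.
        match = sum(w * (g - t).pow(2).sum()
                    for w, g, t in zip(layer_weights, grads, target_grads))
        match.backward()
        optimizer.step()
    return dummy_x.detach()
```

The rationale behind the approximation: under plain SGD, the client's weight delta equals the learning rate times the sum of its per-step gradients, so dividing by `lr * num_local_steps` recovers the average gradient along the local trajectory in closed form. This is what lets AGIC skip the per-step simulation that makes prior multi-mini-batch attacks expensive.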