Gradient Inversion Attacks: Impact Factors Analyses and Privacy Enhancement
Zipeng Ye, Wenjian Luo, Qi Zhou, Zhenqian Zhu, Yuhui Shi, Yan Jia
IEEE Transactions on Pattern Analysis and Machine Intelligence, published 2024-07-18
DOI: 10.1109/TPAMI.2024.3430533
Abstract
Gradient inversion attacks (GIAs), which aim to reconstruct the private training data of clients (the participating parties in distributed training) from shared parameters, pose significant challenges to the emerging paradigm of distributed learning. To counteract GIAs, a large number of privacy-preserving methods for distributed learning scenarios have emerged. However, these methods have significant limitations: they either compromise the usability of the global model or consume substantial additional computational resources. Furthermore, despite the extensive efforts dedicated to defense methods, the underlying causes of data leakage in distributed learning have still not been thoroughly investigated. Therefore, this paper tries to reveal the potential reasons behind the successful implementation of existing GIAs, explore variations in the robustness of models against GIAs during the training process, and investigate the impact of different model structures on attack performance. After these explorations and analyses, this paper proposes a plug-and-play GIA defense method, which augments the training data with a designed vicinal distribution. Sufficient empirical experiments demonstrate that this easy-to-implement method can ensure a basic level of privacy without compromising the usability of the global model.
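The abstract does not specify the designed vicinal distribution, so the sketch below is only an illustrative stand-in: a mixup-style augmentation (convex combinations of paired samples), which is a common way to train on a vicinal distribution around the data rather than on the raw records a gradient-matching attacker would try to recover. The function name `vicinal_augment` and the Beta-distributed mixing weight are assumptions, not the paper's method.

```python
import numpy as np

def vicinal_augment(x, y, alpha=0.4, rng=None):
    """Mixup-style vicinal augmentation (illustrative stand-in, NOT the
    paper's designed distribution): each sample is replaced by a convex
    combination of itself and a randomly paired sample, so gradients are
    computed on vicinal points rather than the private records."""
    rng = rng or np.random.default_rng(0)
    n = len(x)
    lam = rng.beta(alpha, alpha, size=n)      # per-sample mixing weights
    perm = rng.permutation(n)                 # random pairing of samples
    lam_x = lam.reshape(-1, *([1] * (x.ndim - 1)))
    x_aug = lam_x * x + (1.0 - lam_x) * x[perm]
    y_aug = lam.reshape(-1, 1) * y + (1.0 - lam.reshape(-1, 1)) * y[perm]
    return x_aug, y_aug
```

Because labels are mixed with the same weights as inputs, the augmented pairs remain consistent training targets, while any data reconstructed from the resulting gradients corresponds to vicinal mixtures rather than individual client records.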