{"title":"关于 ADAM-DPGAN 的局部收敛与同步和交替梯度正交训练方法","authors":"Maryam Azadmanesh, Behrouz Shahgholi Ghahfarokhi , Maede Ashouri Talouki","doi":"10.1016/j.eswa.2024.125646","DOIUrl":null,"url":null,"abstract":"<div><div>Generative Adversarial Networks (GANs) do not ensure the privacy of the training datasets and may memorize sensitive details. To maintain privacy of data during inference, various privacy-preserving GAN mechanisms have been proposed. Despite the different approaches and their characteristics, advantages, and disadvantages, there is a lack of a systematic review on them. This paper first presents a comprehensive survey on privacy-preserving mechanisms and offers a taxonomy based on their characteristics. The survey reveals that many of these mechanisms modify the GAN learning algorithm to enhance privacy, highlighting the need for theoretical and empirical analysis of the impact of these modifications on GAN convergence. Among the surveyed methods, ADAM-DPGAN is a promising approach that ensures differential privacy in GANs for both the discriminator and the generator networks when using the ADAM optimizer, by introducing appropriate noise based on the global sensitivity of discriminator parameters. Therefore, this paper conducts a theoretical and empirical analysis of the convergence of ADAM-DPGAN. In the presented theoretical analysis, assuming that simultaneous/alternating gradient descent method with ADAM optimizer converges locally to a fixed point and its operator is L-Lipschitz with L < 1, the effect of ADAM-DPGAN-based noise disturbance on local convergence is investigated and an upper bound for the convergence rate is provided. The analysis highlights the significant impact of differential privacy parameters, the number of training iterations, the discriminator’s learning rate, and the ADAM hyper-parameters on the convergence rate. The theoretical analysis is further validated through empirical analysis. Both theoretical and empirical analyses reveal that a stronger privacy guarantee leads to a slower convergence, highlighting the trade-off between privacy and performance. The findings also indicate that there exists an optimal value for the number of training iterations regarding the privacy needs. The optimal settings for each parameter are calculated and outlined in the paper.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":null,"pages":null},"PeriodicalIF":7.5000,"publicationDate":"2024-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On the local convergence of ADAM-DPGAN with simultaneous and alternating gradient decent training methods\",\"authors\":\"Maryam Azadmanesh, Behrouz Shahgholi Ghahfarokhi , Maede Ashouri Talouki\",\"doi\":\"10.1016/j.eswa.2024.125646\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Generative Adversarial Networks (GANs) do not ensure the privacy of the training datasets and may memorize sensitive details. To maintain privacy of data during inference, various privacy-preserving GAN mechanisms have been proposed. Despite the different approaches and their characteristics, advantages, and disadvantages, there is a lack of a systematic review on them. This paper first presents a comprehensive survey on privacy-preserving mechanisms and offers a taxonomy based on their characteristics. 
The survey reveals that many of these mechanisms modify the GAN learning algorithm to enhance privacy, highlighting the need for theoretical and empirical analysis of the impact of these modifications on GAN convergence. Among the surveyed methods, ADAM-DPGAN is a promising approach that ensures differential privacy in GANs for both the discriminator and the generator networks when using the ADAM optimizer, by introducing appropriate noise based on the global sensitivity of discriminator parameters. Therefore, this paper conducts a theoretical and empirical analysis of the convergence of ADAM-DPGAN. In the presented theoretical analysis, assuming that simultaneous/alternating gradient descent method with ADAM optimizer converges locally to a fixed point and its operator is L-Lipschitz with L < 1, the effect of ADAM-DPGAN-based noise disturbance on local convergence is investigated and an upper bound for the convergence rate is provided. The analysis highlights the significant impact of differential privacy parameters, the number of training iterations, the discriminator’s learning rate, and the ADAM hyper-parameters on the convergence rate. The theoretical analysis is further validated through empirical analysis. Both theoretical and empirical analyses reveal that a stronger privacy guarantee leads to a slower convergence, highlighting the trade-off between privacy and performance. The findings also indicate that there exists an optimal value for the number of training iterations regarding the privacy needs. The optimal settings for each parameter are calculated and outlined in the paper.</div></div>\",\"PeriodicalId\":50461,\"journal\":{\"name\":\"Expert Systems with Applications\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2024-11-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Expert Systems with Applications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0957417424025132\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems with Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0957417424025132","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Generative Adversarial Networks (GANs) do not guarantee the privacy of their training datasets and may memorize sensitive details. To protect data privacy during inference, various privacy-preserving GAN mechanisms have been proposed. Although these approaches differ in their characteristics, advantages, and disadvantages, a systematic review of them has been lacking. This paper first presents a comprehensive survey of privacy-preserving GAN mechanisms and offers a taxonomy based on their characteristics. The survey reveals that many of these mechanisms modify the GAN learning algorithm to enhance privacy, which highlights the need for theoretical and empirical analysis of how these modifications affect GAN convergence. Among the surveyed methods, ADAM-DPGAN is a promising approach that guarantees differential privacy for both the discriminator and the generator networks when training with the ADAM optimizer, by injecting noise calibrated to the global sensitivity of the discriminator parameters. This paper therefore conducts a theoretical and empirical analysis of the convergence of ADAM-DPGAN. In the theoretical analysis, assuming that simultaneous/alternating gradient descent with the ADAM optimizer converges locally to a fixed point and that its update operator is L-Lipschitz with L < 1, the effect of the ADAM-DPGAN noise perturbation on local convergence is investigated and an upper bound on the convergence rate is derived. The analysis highlights the significant impact of the differential privacy parameters, the number of training iterations, the discriminator's learning rate, and the ADAM hyper-parameters on the convergence rate. The theoretical results are further validated empirically. Both analyses show that a stronger privacy guarantee leads to slower convergence, underscoring the trade-off between privacy and performance. The findings also indicate that, for given privacy requirements, there is an optimal number of training iterations. The optimal settings for each parameter are calculated and reported in the paper.
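To make the L-Lipschitz assumption concrete, a generic perturbed-contraction bound has the following shape (this is a standard fixed-point argument under the stated assumption, not the paper's exact theorem, whose bound additionally tracks the DP parameters and ADAM hyper-parameters): if one training step T is L-Lipschitz around a fixed point θ* with L < 1, and each private step adds a perturbation of norm at most σ, then unrolling ‖θ_t − θ*‖ ≤ L‖θ_{t−1} − θ*‖ + σ gives

```latex
\|\theta_t - \theta^\ast\|
  \;\le\; L^{t}\,\|\theta_0 - \theta^\ast\| + \sigma \sum_{k=0}^{t-1} L^{k}
  \;\le\; L^{t}\,\|\theta_0 - \theta^\ast\| + \frac{\sigma}{1-L}.
```

The geometric first term is the noise-free local convergence; the second term grows with the noise scale, so a stronger privacy guarantee (larger σ) enlarges the neighborhood into which the iterates settle, consistent with the privacy-performance trade-off the abstract reports.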
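For readers who want the mechanism in code, below is a minimal illustrative sketch of a differentially private ADAM step in the style commonly used to privatize GAN discriminators. It is not the authors' implementation: ADAM-DPGAN calibrates its noise from the global sensitivity of the discriminator parameters, whereas this sketch uses the more familiar per-sample gradient clipping to bound sensitivity; the function name, parameters, and default values are assumptions made for illustration.

```python
import numpy as np

def dp_adam_step(param, per_sample_grads, m, v, t,
                 clip_norm=1.0, noise_mult=1.0,
                 lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One differentially private ADAM step (illustrative sketch).

    Per-sample gradients are clipped to bound sensitivity, averaged,
    and perturbed with Gaussian noise before the usual ADAM update.
    noise_mult would be chosen from the (epsilon, delta) privacy budget.
    """
    # Clip each per-sample gradient to L2 norm <= clip_norm (bounds sensitivity).
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    grad = np.mean(clipped, axis=0)

    # Gaussian mechanism: noise scaled to the per-example sensitivity.
    sigma = noise_mult * clip_norm / len(per_sample_grads)
    g_tilde = grad + np.random.normal(0.0, sigma, size=grad.shape)

    # Standard ADAM moment estimates with bias correction (t is 1-indexed).
    m = beta1 * m + (1.0 - beta1) * g_tilde
    v = beta2 * v + (1.0 - beta2) * g_tilde ** 2
    m_hat = m / (1.0 - beta1 ** t)
    v_hat = v / (1.0 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Example: one private update of a toy 3-parameter "discriminator".
rng = np.random.default_rng(0)
theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
per_sample_grads = [rng.normal(size=3) for _ in range(32)]  # stand-ins
theta, m, v = dp_adam_step(theta, per_sample_grads, m, v, t=1)
```

A stronger privacy guarantee means a larger noise_mult, which plays the role of the σ term in the bound above; together they make the privacy-convergence trade-off described in the abstract concrete.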
About the journal:
Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.