On the local convergence of ADAM-DPGAN with simultaneous and alternating gradient descent training methods

Expert Systems with Applications · IF 7.5 · CAS Tier 1 (Computer Science) · Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2024-11-02 · DOI: 10.1016/j.eswa.2024.125646
Maryam Azadmanesh, Behrouz Shahgholi Ghahfarokhi, Maede Ashouri Talouki
{"title":"On the local convergence of ADAM-DPGAN with simultaneous and alternating gradient decent training methods","authors":"Maryam Azadmanesh,&nbsp;Behrouz Shahgholi Ghahfarokhi ,&nbsp;Maede Ashouri Talouki","doi":"10.1016/j.eswa.2024.125646","DOIUrl":null,"url":null,"abstract":"<div><div>Generative Adversarial Networks (GANs) do not ensure the privacy of the training datasets and may memorize sensitive details. To maintain privacy of data during inference, various privacy-preserving GAN mechanisms have been proposed. Despite the different approaches and their characteristics, advantages, and disadvantages, there is a lack of a systematic review on them. This paper first presents a comprehensive survey on privacy-preserving mechanisms and offers a taxonomy based on their characteristics. The survey reveals that many of these mechanisms modify the GAN learning algorithm to enhance privacy, highlighting the need for theoretical and empirical analysis of the impact of these modifications on GAN convergence. Among the surveyed methods, ADAM-DPGAN is a promising approach that ensures differential privacy in GANs for both the discriminator and the generator networks when using the ADAM optimizer, by introducing appropriate noise based on the global sensitivity of discriminator parameters. Therefore, this paper conducts a theoretical and empirical analysis of the convergence of ADAM-DPGAN. In the presented theoretical analysis, assuming that simultaneous/alternating gradient descent method with ADAM optimizer converges locally to a fixed point and its operator is L-Lipschitz with L &lt; 1, the effect of ADAM-DPGAN-based noise disturbance on local convergence is investigated and an upper bound for the convergence rate is provided. The analysis highlights the significant impact of differential privacy parameters, the number of training iterations, the discriminator’s learning rate, and the ADAM hyper-parameters on the convergence rate. The theoretical analysis is further validated through empirical analysis. Both theoretical and empirical analyses reveal that a stronger privacy guarantee leads to a slower convergence, highlighting the trade-off between privacy and performance. The findings also indicate that there exists an optimal value for the number of training iterations regarding the privacy needs. The optimal settings for each parameter are calculated and outlined in the paper.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"262 ","pages":"Article 125646"},"PeriodicalIF":7.5000,"publicationDate":"2024-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems with Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0957417424025132","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Generative Adversarial Networks (GANs) do not ensure the privacy of their training datasets and may memorize sensitive details. To preserve data privacy during inference, various privacy-preserving GAN mechanisms have been proposed. Despite the variety of approaches, each with its own characteristics, advantages, and disadvantages, a systematic review of them has been lacking. This paper first presents a comprehensive survey of privacy-preserving GAN mechanisms and offers a taxonomy based on their characteristics. The survey reveals that many of these mechanisms modify the GAN learning algorithm to enhance privacy, highlighting the need for theoretical and empirical analysis of the impact of these modifications on GAN convergence. Among the surveyed methods, ADAM-DPGAN is a promising approach that ensures differential privacy in GANs for both the discriminator and the generator networks when the ADAM optimizer is used, by introducing appropriate noise based on the global sensitivity of the discriminator parameters. This paper therefore conducts a theoretical and empirical analysis of the convergence of ADAM-DPGAN. In the presented theoretical analysis, assuming that the simultaneous/alternating gradient descent method with the ADAM optimizer converges locally to a fixed point and that its operator is L-Lipschitz with L < 1, the effect of the ADAM-DPGAN noise perturbation on local convergence is investigated and an upper bound on the convergence rate is derived. The analysis highlights the significant impact of the differential privacy parameters, the number of training iterations, the discriminator's learning rate, and the ADAM hyper-parameters on the convergence rate. The theoretical analysis is further validated through empirical analysis. Both analyses reveal that a stronger privacy guarantee leads to slower convergence, highlighting the trade-off between privacy and performance. The findings also indicate that an optimal number of training iterations exists for given privacy requirements. The optimal settings for each parameter are calculated and outlined in the paper.
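To make the analyzed mechanism concrete, below is a minimal sketch of a differentially private ADAM update of the kind the abstract describes: the discriminator gradient is bounded (here by norm clipping) and perturbed with Gaussian noise before the standard ADAM moment updates. The function name, the clipping step, and the hyper-parameter defaults are illustrative assumptions, not the paper's exact algorithm; ADAM-DPGAN itself calibrates the noise to the global sensitivity of the discriminator parameters. The local-convergence assumption can then be read as follows: if the resulting update operator T is L-Lipschitz with L < 1 around a fixed point theta*, the iterates contract geometrically, ||theta_t - theta*|| <= L^t * ||theta_0 - theta*||, and the paper quantifies how the injected noise perturbs this bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_adam_step(param, grad, m, v, t,
                 lr=2e-4, beta1=0.5, beta2=0.999, eps=1e-8,
                 clip_norm=1.0, noise_mult=1.0):
    """One differentially private ADAM step on a discriminator
    parameter vector (illustrative sketch, not the paper's exact
    mechanism; `t` is the 1-based iteration counter)."""
    # Bound each gradient's influence (its sensitivity) by norm clipping.
    grad = grad / max(1.0, np.linalg.norm(grad) / clip_norm)
    # Perturb the clipped gradient with Gaussian noise scaled to that bound.
    grad = grad + rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)
    # Standard ADAM moment estimates with bias correction.
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad ** 2
    m_hat = m / (1.0 - beta1 ** t)
    v_hat = v / (1.0 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```

In this sketch the trade-off the paper analyzes is visible directly: a larger noise_mult (a stronger privacy guarantee) injects more variance into the moment estimates m and v, which is what slows the local convergence that the paper bounds.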
Journal
Expert Systems with Applications (Engineering Technology - Engineering: Electrical & Electronic)
CiteScore: 13.80
Self-citation rate: 10.60%
Publication volume: 2045
Review time: 8.7 months
About the journal: Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.
Latest articles in this journal
- Maximizing failure occurrence in water distribution Systems: A Multi-Objective approach considering Reliability, Economic, and environmental aspects
- Multi-objective grey wolf optimizer based on reinforcement learning for distributed hybrid flowshop scheduling towards mass personalized manufacturing
- Scalable order dispatching through Federated Multi-Agent Deep Reinforcement Learning
- Deep multi-negative supervised hashing for large-scale image retrieval
- Machine-agnostic automated lumbar MRI segmentation using a cascaded model based on generative neurons