Privacy-Preserving Push-Pull Method for Decentralized Optimization via State Decomposition

IF 3.0 | CAS Tier 3 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | IEEE Transactions on Signal and Information Processing over Networks | Pub Date: 2024-03-20 | DOI: 10.1109/TSIPN.2024.3402430
Huqiang Cheng;Xiaofeng Liao;Huaqing Li;Qingguo Lü;You Zhao
Journal Article | Vol. 10, pp. 513-526 | Citations: 0

Abstract

Distributed optimization shows great potential in many fields, such as machine learning, control, and resource allocation. Existing decentralized optimization algorithms require agents to share explicit state information, which raises the risk of private information leakage. A common remedy is to combine information security mechanisms, such as differential privacy or homomorphic encryption, with traditional decentralized optimization algorithms; however, this either sacrifices optimization accuracy or incurs a heavy computational burden. To overcome these shortcomings, we develop a novel privacy-preserving decentralized optimization algorithm, named PPSD, that combines gradient tracking with a state decomposition mechanism. Specifically, each agent decomposes its state associated with the gradient into two substates. One substate is used for interaction with neighboring agents; the other, which contains the private information, acts only on the first substate and is therefore entirely invisible to other agents. When the objective function is smooth and satisfies the Polyak-Łojasiewicz (PL) condition, PPSD attains an $R$-linear convergence rate. Moreover, the algorithm preserves the normal agents' private information from being leaked to honest-but-curious attackers. Simulations further confirm the results.
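The core idea in the abstract, splitting each agent's state into a shared substate and a private substate that only ever talks to its own shared half, can be illustrated on a toy problem. The sketch below applies that state-decomposition mechanism to average consensus on an undirected ring; the graph, weights, and step size are hypothetical choices for illustration, and this is not the full PPSD algorithm (which additionally uses gradient tracking on a push-pull scheme).

```python
import numpy as np

# Minimal sketch of the state-decomposition mechanism, illustrated on
# average consensus over an undirected 5-agent ring (hypothetical graph,
# weights, and step size; not the paper's full PPSD algorithm).
rng = np.random.default_rng(0)
n = 5
x0 = rng.normal(size=n)              # each agent's private initial value

# Symmetric adjacency matrix of a ring graph
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
deg = A.sum(axis=1)

eps = 0.1                            # step size, small enough for stability

# Decompose each state into a shared substate (alpha) and a private one
# (beta). Any split with alpha + beta = 2 * x0 works, so the alpha that an
# eavesdropper observes reveals nothing about x0 on its own.
alpha = x0 + rng.normal(size=n)
beta = 2.0 * x0 - alpha

for _ in range(2000):
    # alpha interacts with neighbours' alphas and with its own beta ...
    alpha_next = alpha + eps * (A @ alpha - deg * alpha) + eps * (beta - alpha)
    # ... while beta only ever sees its own alpha, never other agents.
    beta = beta + eps * (alpha - beta)
    alpha = alpha_next

# With symmetric weights the total mass is conserved, so both substates
# converge to the exact average of the private initial values.
print(np.allclose(alpha, x0.mean()), np.allclose(beta, x0.mean()))
```

Because the internal coupling weight is symmetric, the sum of all substates is invariant across iterations, which is what pins the consensus value to the true average while the individual `x0` values stay hidden.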
Source journal
IEEE Transactions on Signal and Information Processing over Networks (Computer Science: Computer Networks and Communications)
CiteScore: 5.80
Self-citation rate: 12.50%
Annual publications: 56
Journal description: The IEEE Transactions on Signal and Information Processing over Networks publishes high-quality papers that extend the classical notions of processing of signals defined over vector spaces (e.g. time and space) to processing of signals and information (data) defined over networks, potentially dynamically varying. In signal processing over networks, the topology of the network may define structural relationships in the data, or may constrain processing of the data. Topics include distributed algorithms for filtering, detection, estimation, adaptation and learning, model selection, data fusion, and diffusion or evolution of information over such networks, and applications of distributed signal processing.
Latest articles in this journal
- Reinforcement Learning-Based Event-Triggered Constrained Containment Control for Perturbed Multiagent Systems
- Finite-Time Performance Mask Function-Based Distributed Privacy-Preserving Consensus: Case Study on Optimal Dispatch of Energy System
- Discrete-Time Controllability of Cartesian Product Networks
- Generalized Simplicial Attention Neural Networks
- A Continuous-Time Algorithm for Distributed Optimization With Nonuniform Time-Delay Under Switching and Unbalanced Digraphs