Distributed Online Convex Optimization With Statistical Privacy

IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 6, pp. 9919–9932 · Published: 2024-11-19 · DOI: 10.1109/TNNLS.2024.3492144 · Impact Factor: 8.9 · JCR Q1 (Computer Science, Artificial Intelligence) · CAS Tier 1 · Citations: 0

Mingcheng Dai; Daniel W. C. Ho; Baoyong Zhang; Deming Yuan; Shengyuan Xu

Abstract

We focus on the problem of distributed online constrained convex optimization with statistical privacy in multiagent systems. The participating agents aim to collaboratively minimize the cumulative system-wide cost while a passive adversary corrupts some of them. The passive adversary collects information from the corrupted agents and attempts to estimate the private information of the uncorrupted ones. In this scenario, we adopt a correlated perturbation mechanism with a globally balanced property to mask the local information of agents and thereby preserve privacy. This work is the first attempt to integrate such a mechanism into the distributed online (sub)gradient descent algorithm, yielding a new algorithm called privacy-preserving distributed online convex optimization (PP-DOCO). It is proved that the designed algorithm provides a statistical privacy guarantee for uncorrupted agents and achieves an expected regret of $\mathcal {O}(\sqrt {K})$ for convex cost functions, where $K$ denotes the time horizon. Furthermore, an improved expected regret of $\mathcal {O}(\log (K))$ is derived for strongly convex cost functions. These results match the best regret scalings achieved by state-of-the-art algorithms. A privacy bound is established to quantify the level of statistical privacy using the notion of Kullback–Leibler divergence (KLD). In addition, we observe that a tradeoff exists between the algorithm's expected regret and its statistical privacy. Finally, the effectiveness of the algorithm is validated by simulation results.
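To make the abstract's ingredients concrete, the following is a minimal sketch (not the paper's actual PP-DOCO algorithm) of distributed online projected subgradient descent in which each agent's state is masked by correlated noise that sums to zero across agents, i.e., a "globally balanced" perturbation: the network-wide average is unaffected, so accuracy is preserved while individual states are hidden. The function names, the step-size choice $1/\sqrt{k}$, and the ball constraint set are illustrative assumptions, not details from the paper.

```python
import numpy as np

def balanced_noise(n, d, rng, scale=0.1):
    """Correlated perturbations for n agents in R^d whose sum over
    agents is exactly zero (a globally balanced mechanism)."""
    eta = rng.normal(scale=scale, size=(n, d))
    return eta - eta.mean(axis=0)  # each column now sums to zero

def private_online_subgradient(grad, W, X0, K, radius=1.0,
                               rng=None, noise_scale=0.1):
    """Sketch of privacy-masked distributed online subgradient descent.

    grad:  callable (k, i, x) -> subgradient of agent i's round-k cost at x
    W:     doubly stochastic mixing matrix of the communication graph
    X0:    (n, d) array of initial agent states
    K:     time horizon; step size 1/sqrt(k) targets O(sqrt(K)) regret
    """
    rng = np.random.default_rng() if rng is None else rng
    X = X0.copy()
    n, d = X.shape
    for k in range(1, K + 1):
        step = 1.0 / np.sqrt(k)
        eta = balanced_noise(n, d, rng, noise_scale)
        Y = W @ (X + eta)                 # consensus on *perturbed* states
        G = np.stack([grad(k, i, X[i]) for i in range(n)])
        X = Y - step * G                  # local subgradient step
        # Euclidean projection onto the ball (the constraint set here)
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        X = np.where(norms > radius, X * (radius / norms), X)
    return X
```

Because the perturbations cancel in the network average, the averaged dynamics behave as unperturbed online gradient descent on the system-wide cost, while any single transmitted state `X[i] + eta[i]` is noisy; larger `noise_scale` gives stronger masking at the cost of slower per-agent convergence, mirroring the regret-privacy tradeoff the abstract mentions.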
Source journal

IEEE Transactions on Neural Networks and Learning Systems (Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture)

CiteScore: 23.80 · Self-citation rate: 9.60% · Annual publications: 2102 · Review time: 3–8 weeks

About the journal: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.