Distributed Online Convex Optimization With Statistical Privacy
Mingcheng Dai; Daniel W. C. Ho; Baoyong Zhang; Deming Yuan; Shengyuan Xu
IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 6, pp. 9919-9932
Published: 2024-11-19 | DOI: 10.1109/TNNLS.2024.3492144 | https://ieeexplore.ieee.org/document/10758358/
Citations: 0
Abstract
We study distributed online constrained convex optimization with statistical privacy in multiagent systems. The participating agents collaboratively minimize the cumulative system-wide cost while a passive adversary corrupts some of them: the adversary collects information from the corrupted agents and attempts to estimate the private information of the uncorrupted ones. In this setting, we adopt a correlated perturbation mechanism with a globally balanced property to mask agents' local information and thereby preserve privacy. This work is the first to integrate such a mechanism into distributed online (sub)gradient descent, yielding a new algorithm called privacy-preserving distributed online convex optimization (PP-DOCO). We prove that the algorithm provides a statistical privacy guarantee for uncorrupted agents and achieves an expected regret of $\mathcal{O}(\sqrt{K})$ for convex cost functions, where $K$ denotes the time horizon; for strongly convex cost functions, the expected regret improves to $\mathcal{O}(\log(K))$. These results match the best regret scalings achieved by state-of-the-art algorithms. A privacy bound characterizing the level of statistical privacy is established using the notion of Kullback–Leibler divergence (KLD), and we observe a tradeoff between the algorithm's expected regret and its statistical privacy. Finally, the effectiveness of the algorithm is validated by simulation results.
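The core mechanism the abstract describes can be illustrated with a toy sketch: agents run online projected gradient descent, exchange only perturbed states, and the perturbations are correlated so they sum to zero across the network ("globally balanced"), leaving the network-wide average untouched while masking individual trajectories. This is not the paper's exact PP-DOCO algorithm; the quadratic costs $f_{i,k}(x)=(x-a_{i,k})^2$, the ring communication graph, the step size $1/\sqrt{k}$, and the box constraint are all illustrative assumptions of mine.

```python
import numpy as np

# Illustrative sketch (NOT the paper's exact PP-DOCO) of distributed online
# gradient descent with globally balanced correlated perturbations.
rng = np.random.default_rng(0)
n, K, radius = 5, 2000, 5.0          # agents, horizon, box constraint [-r, r]

# Doubly stochastic mixing matrix for a ring graph (assumed topology).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                              # each agent's decision variable
targets = rng.uniform(-1, 1, size=(n, K))    # a_{i,k}: time-varying minimizers
comparator = targets.mean()                  # best fixed decision in hindsight
regret = 0.0

for k in range(1, K + 1):
    a = targets[:, k - 1]
    # Play x, observe this round's costs, accumulate regret vs. the comparator.
    regret += np.sum((x - a) ** 2) - np.sum((comparator - a) ** 2)
    # Globally balanced correlated perturbations: sum_i eta_i = 0 each round,
    # so the network-wide average state is unaffected by the noise.
    eta = rng.normal(0.0, 1.0, size=n)
    eta -= eta.mean()
    shared = x + eta                         # only perturbed states are sent
    mixed = W @ shared                       # consensus averaging step
    grad = 2.0 * (mixed - a)                 # gradient of (x - a_{i,k})^2
    x = np.clip(mixed - grad / np.sqrt(k), -radius, radius)  # projected step

print(f"cumulative regret over K={K} rounds: {regret:.2f}")
```

Increasing the perturbation variance $\sigma^2$ strengthens statistical privacy (for instance, the KL divergence between two equal-variance Gaussian observations, $(\mu_0-\mu_1)^2/(2\sigma^2)$, shrinks as $\sigma$ grows) but injects more consensus error into each trajectory, which is the regret-versus-privacy tradeoff the abstract mentions.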
Journal Description:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.