Locally Differentially Private Distributed Online Learning With Guaranteed Optimality

IEEE Transactions on Automatic Control · Impact Factor: 7.0 · JCR Q1 (Automation & Control Systems) · CAS Region 1 (Computer Science) · Pub Date: 2024-10-17 · DOI: 10.1109/TAC.2024.3482977
Ziqin Chen;Yongqiang Wang
Volume 70, Issue 4, pages 2521–2536 · Journal Article · Citations: 0

Abstract

Distributed online learning is gaining increased traction due to its unique ability to process large-scale datasets and streaming data. To address the growing public awareness and concern about privacy protection, plenty of algorithms have been proposed to enable differential privacy in distributed online optimization and learning. However, these algorithms often face the dilemma of trading learning accuracy for privacy. By exploiting the unique characteristics of online learning, this article proposes an approach that tackles the dilemma and ensures both differential privacy and learning accuracy in distributed online learning. More specifically, while ensuring a diminishing expected instantaneous regret, the approach can simultaneously ensure a finite cumulative privacy budget, even over an infinite time horizon. To cater for the fully distributed setting, we adopt the local differential-privacy framework, which avoids the reliance on a trusted data curator that is required in the classic “centralized” (global) differential-privacy framework. To the best of our knowledge, this is the first algorithm that successfully ensures both rigorous local differential privacy and learning accuracy. The effectiveness of the proposed algorithm is evaluated using machine learning tasks, including logistic regression on the “mushrooms” dataset and convolutional neural network-based image classification on the “MNIST” and “CIFAR-10” datasets.
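The abstract's key ingredients — agents that share only noise-perturbed states with neighbors (local differential privacy), a decaying noise scale so the cumulative privacy budget stays finite, and diminishing stepsizes so the instantaneous regret shrinks — can be illustrated with a toy sketch. This is not the paper's actual algorithm: the ring topology, least-squares losses, and the specific 1/t noise and 0.1/√t stepsize schedules below are illustrative assumptions only.

```python
import numpy as np

def ldp_distributed_online_gd(T=200, n_agents=4, dim=3, seed=0):
    """Toy sketch of locally differentially private distributed
    online gradient descent (illustrative schedules, not the paper's)."""
    rng = np.random.default_rng(seed)
    theta_true = np.ones(dim)                 # common target all agents track
    x = rng.normal(size=(n_agents, dim))      # local model at each agent

    # Ring topology with a doubly stochastic mixing matrix.
    W = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        W[i, i] = 0.5
        W[i, (i + 1) % n_agents] = 0.25
        W[i, (i - 1) % n_agents] = 0.25

    losses = []
    for t in range(1, T + 1):
        gamma = 0.1 / np.sqrt(t)              # diminishing stepsize
        b_t = 1.0 / t                         # decaying Laplace noise scale:
                                              # sum of per-round budgets stays finite
        a = rng.normal(size=(n_agents, dim))  # streaming features, one row per agent
        y = a @ theta_true                    # streaming labels

        # Each agent broadcasts only a Laplace-perturbed copy of its state,
        # so neighbors never see the raw local model (local DP).
        x_noisy = x + rng.laplace(scale=b_t, size=x.shape)
        mixed = W @ x_noisy                   # consensus step on noisy states

        # Local online gradient of the instantaneous least-squares loss.
        grads = (np.sum(a * x, axis=1) - y)[:, None] * a
        x = mixed - gamma * grads

        inst = 0.5 * float(np.mean((np.sum(a * x, axis=1) - y) ** 2))
        losses.append(inst)                   # instantaneous loss per round
    return x, losses

final_models, losses = ldp_distributed_online_gd()
```

Because the injected noise decays while the stepsizes sum to infinity slowly, the late-round instantaneous losses end up well below the early-round ones, mirroring the "diminishing expected instantaneous regret with finite cumulative privacy budget" property the abstract claims.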
Source Journal

IEEE Transactions on Automatic Control (Engineering: Electrical & Electronic)
CiteScore: 11.30 · Self-citation rate: 5.90% · Articles per year: 824 · Review time: 9 months
Journal description: In the IEEE Transactions on Automatic Control, the IEEE Control Systems Society publishes high-quality papers on the theory, design, and applications of control engineering. Two types of contributions are regularly considered: 1) Papers: Presentation of significant research, development, or application of control concepts. 2) Technical Notes and Correspondence: Brief technical notes, comments on published areas or established control topics, corrections to papers and notes published in the Transactions. In addition, special papers (tutorials, surveys, and perspectives on the theory and applications of control systems topics) are solicited.
Latest articles from this journal

- Reaching Resilient Leader-Follower Consensus in Time-Varying Networks via Multi-Hop Relays
- Dynamical System Approach for Optimal Control Problems with Equilibrium Constraints Using Gap-Constraint-Based Reformulation
- Set-Based State Estimation for Discrete-Time Semi-Markov Jump Linear Systems Using Zonotopes
- Safe Event-triggered Gaussian Process Learning for Barrier-Constrained Control
- Energy-Gain Control of Time-Varying Systems: Receding Horizon Approximation