Triggered Gradient Tracking for Asynchronous Distributed Optimization

Guido Carnevale, Ivano Notarnicola, L. Marconi, G. Notarstefano
{"title":"Triggered Gradient Tracking for Asynchronous Distributed Optimization","authors":"Guido Carnevale, Ivano Notarnicola, L. Marconi, G. Notarstefano","doi":"10.48550/arXiv.2203.02210","DOIUrl":null,"url":null,"abstract":"This paper proposes Asynchronous Triggered Gradient Tracking, i.e., a distributed optimization algorithm to solve consensus optimization over networks with asynchronous communication. As a building block, we devise the continuous-time counterpart of the recently proposed (discrete-time) distributed gradient tracking called Continuous Gradient Tracking. By using a Lyapunov approach, we prove exponential stability of the equilibrium corresponding to agents' estimates being consensual to the optimal solution, with arbitrary initialization of the local estimates. Then, we propose two triggered versions of the algorithm. In the first one, the agents continuously integrate their local dynamics and exchange with neighbors their current local variables in a synchronous way. In Asynchronous Triggered Gradient Tracking, we propose a totally asynchronous scheme in which each agent sends to neighbors its current local variables based on a triggering condition that depends on a locally verifiable condition. The triggering protocol preserves the linear convergence of the algorithm and avoids the Zeno behavior, i.e., an infinite number of triggering events over a finite interval of time is excluded. By using the stability analysis of Continuous Gradient Tracking as a preparatory result, we show exponential stability of the equilibrium point holds for both triggered algorithms and any estimate initialization. Finally, the simulations validate the effectiveness of the proposed methods on a data analytics problem, showing also improved performance in terms of inter-agent communication.","PeriodicalId":13196,"journal":{"name":"IEEE Robotics Autom. Mag.","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Robotics Autom. Mag.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2203.02210","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

This paper proposes Asynchronous Triggered Gradient Tracking, a distributed optimization algorithm for solving consensus optimization over networks with asynchronous communication. As a building block, we devise the continuous-time counterpart of the recently proposed (discrete-time) distributed gradient tracking algorithm, called Continuous Gradient Tracking. Using a Lyapunov approach, we prove exponential stability of the equilibrium at which the agents' estimates reach consensus on the optimal solution, for arbitrary initialization of the local estimates. We then propose two triggered versions of the algorithm. In the first one, the agents continuously integrate their local dynamics and synchronously exchange their current local variables with their neighbors. In Asynchronous Triggered Gradient Tracking, we propose a totally asynchronous scheme in which each agent sends its current local variables to its neighbors according to a locally verifiable triggering condition. The triggering protocol preserves the linear convergence of the algorithm and avoids Zeno behavior, i.e., an infinite number of triggering events over a finite time interval is excluded. Using the stability analysis of Continuous Gradient Tracking as a preparatory result, we show that exponential stability of the equilibrium point holds for both triggered algorithms and for any initialization of the estimates. Finally, simulations validate the effectiveness of the proposed methods on a data analytics problem, also showing improved performance in terms of inter-agent communication.
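To make the mechanism concrete, the sketch below pairs a standard discrete-time gradient tracking iteration (the scheme whose continuous-time counterpart the paper develops) with a simple event-triggered broadcast rule in the spirit of the triggered variants. The quadratic local costs, ring graph, weight matrix W, threshold eps, and the specific trigger test are illustrative assumptions, not taken from the paper; the paper's algorithms evolve in continuous time and use a different, carefully designed triggering condition with guaranteed Zeno-free behavior.

```python
# Minimal sketch (NOT the paper's exact algorithm): discrete-time gradient
# tracking with a simple event-triggered broadcast rule. Quadratic costs,
# ring graph, step size, and threshold are illustrative assumptions.
import numpy as np

N = 5          # number of agents
alpha = 0.05   # step size (assumed)
eps = 1e-3     # triggering threshold (hypothetical choice)
T = 500        # number of iterations

rng = np.random.default_rng(0)
b = rng.normal(size=N)      # local cost parameters; optimum is mean(b)

def grad(i, x):
    """Gradient of the local cost f_i(x) = 0.5 * (x - b_i)^2."""
    return x - b[i]

# Doubly stochastic weights for a ring: each agent averages with two neighbors.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

x = rng.normal(size=N)                            # local solution estimates
s = np.array([grad(i, x[i]) for i in range(N)])   # local gradient trackers
x_hat = x.copy()            # last values broadcast by each agent
s_hat = s.copy()
broadcasts = 0

for k in range(T):
    # Event-triggered broadcast: agent i shares its state only when it has
    # drifted from its last broadcast value by more than eps.
    for i in range(N):
        if abs(x[i] - x_hat[i]) > eps or abs(s[i] - s_hat[i]) > eps:
            x_hat[i], s_hat[i] = x[i], s[i]
            broadcasts += 1

    # Gradient tracking update using the (possibly outdated) broadcast values.
    g_old = np.array([grad(i, x[i]) for i in range(N)])
    x_new = W @ x_hat - alpha * s
    s_new = W @ s_hat + np.array([grad(i, x_new[i]) for i in range(N)]) - g_old
    x, s = x_new, s_new

print("consensus estimates:", x)
print("true optimum       :", b.mean())
print("broadcast events   :", broadcasts, "of", T * N, "possible")
```

Running the sketch, the estimates settle close to the average of the b_i while only a fraction of the N*T possible broadcasts actually occur; reducing inter-agent communication in this way is precisely the saving the triggered schemes target.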