Convergence analysis of distributed subgradient methods over random networks

I. Lobel, A. Ozdaglar
{"title":"随机网络上分布亚梯度方法的收敛性分析","authors":"I. Lobel, A. Ozdaglar","doi":"10.1109/ALLERTON.2008.4797579","DOIUrl":null,"url":null,"abstract":"We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about his local function, and communicate with the other agents over a time-varying network topology. For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals between nodes), we assume that links fail according to a given stochastic process. Under the assumption that the link failures are independent and identically distributed over time (possibly correlated across links), we provide convergence results and convergence rate estimates for our subgradient algorithm.","PeriodicalId":120561,"journal":{"name":"2008 46th Annual Allerton Conference on Communication, Control, and Computing","volume":"79 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"57","resultStr":"{\"title\":\"Convergence analysis of distributed subgradient methods over random networks\",\"authors\":\"I. Lobel, A. Ozdaglar\",\"doi\":\"10.1109/ALLERTON.2008.4797579\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about his local function, and communicate with the other agents over a time-varying network topology. For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals between nodes), we assume that links fail according to a given stochastic process. 
Under the assumption that the link failures are independent and identically distributed over time (possibly correlated across links), we provide convergence results and convergence rate estimates for our subgradient algorithm.\",\"PeriodicalId\":120561,\"journal\":{\"name\":\"2008 46th Annual Allerton Conference on Communication, Control, and Computing\",\"volume\":\"79 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2008-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"57\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2008 46th Annual Allerton Conference on Communication, Control, and Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ALLERTON.2008.4797579\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 46th Annual Allerton Conference on Communication, Control, and Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ALLERTON.2008.4797579","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 57

Abstract

We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about his local function, and communicates with the other agents over a time-varying network topology. For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals between nodes), we assume that links fail according to a given stochastic process. Under the assumption that the link failures are independent and identically distributed over time (possibly correlated across links), we provide convergence results and convergence rate estimates for our subgradient algorithm.
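To make the method concrete, below is a minimal Python sketch of one plausible instantiation of such a consensus-based subgradient scheme. It assumes the standard update x_i(k+1) = sum_j W_ij(k) x_j(k) - alpha_k g_i(k), where g_i(k) is a subgradient of agent i's local objective at x_i(k). The lazy-Metropolis weights, the toy objectives f_i(x) = |x - c_i|, the complete base graph, and the 1/sqrt(k) step size are illustrative assumptions, not the paper's exact construction.

import numpy as np

rng = np.random.default_rng(0)

n = 5                        # number of agents
c = rng.normal(size=n)       # f_i(x) = |x - c_i|; the sum is minimized at a median of c
x = rng.normal(size=n)       # initial iterate of each agent (scalar for simplicity)
p_fail = 0.3                 # each link fails independently per round (i.i.d. over time)

def subgrad(xi, ci):
    # A subgradient of f_i(x) = |x - c_i| at xi (0 at the kink).
    return np.sign(xi - ci)

for k in range(1, 2001):
    # Random communication graph: complete graph with i.i.d. link failures.
    up = np.triu(rng.random((n, n)) > p_fail, 1)
    A = up | up.T                        # symmetric adjacency, no self-loops

    # Doubly stochastic averaging weights (lazy Metropolis; an assumption here).
    W = np.zeros((n, n))
    deg = A.sum(axis=1)
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()

    alpha = 1.0 / np.sqrt(k)             # diminishing step size
    g = subgrad(x, c)                    # elementwise: one subgradient per agent
    x = W @ x - alpha * g                # average over neighbors, then subgradient step

print("agent estimates:", np.round(x, 3))
print("median of c:    ", float(np.round(np.median(c), 3)))

Each round draws a fresh random graph by dropping links independently with probability p_fail, matching the paper's assumption that link failures are i.i.d. over time; with the diminishing step size, the agents' estimates cluster around a common minimizer of the sum (here, a median of the c_i).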