Stochastic Variance Reduction for DR-Submodular Maximization

IF 0.9 · Q4 (Computer Science: Software Engineering) · Algorithmica · Pub Date: 2023-12-19 · DOI: 10.1007/s00453-023-01195-z
Yuefang Lian, Donglei Du, Xiao Wang, Dachuan Xu, Yang Zhou
{"title":"DR 次模态最大化的随机方差降低","authors":"Yuefang Lian,&nbsp;Donglei Du,&nbsp;Xiao Wang,&nbsp;Dachuan Xu,&nbsp;Yang Zhou","doi":"10.1007/s00453-023-01195-z","DOIUrl":null,"url":null,"abstract":"<div><p>Stochastic optimization has experienced significant growth in recent decades, with the increasing prevalence of variance reduction techniques in stochastic optimization algorithms to enhance computational efficiency. In this paper, we introduce two projection-free stochastic approximation algorithms for maximizing diminishing return (DR) submodular functions over convex constraints, building upon the Stochastic Path Integrated Differential EstimatoR (SPIDER) and its variants. Firstly, we present a SPIDER Continuous Greedy (SPIDER-CG) algorithm for the monotone case that guarantees a <span>\\((1-e^{-1})\\text {OPT}-\\varepsilon \\)</span> approximation after <span>\\(\\mathcal {O}(\\varepsilon ^{-1})\\)</span> iterations and <span>\\(\\mathcal {O}(\\varepsilon ^{-2})\\)</span> stochastic gradient computations under the mean-squared smoothness assumption. For the non-monotone case, we develop a SPIDER Frank–Wolfe (SPIDER-FW) algorithm that guarantees a <span>\\(\\frac{1}{4}(1-\\min _{x\\in \\mathcal {C}}{\\Vert x\\Vert _{\\infty }})\\text {OPT}-\\varepsilon \\)</span> approximation with <span>\\(\\mathcal {O}(\\varepsilon ^{-1})\\)</span> iterations and <span>\\(\\mathcal {O}(\\varepsilon ^{-2})\\)</span> stochastic gradient estimates. To address the practical challenge associated with a large number of samples per iteration, we introduce a modified gradient estimator based on SPIDER, leading to a Hybrid SPIDER-FW (Hybrid SPIDER-CG) algorithm, which achieves the same approximation guarantee as SPIDER-FW (SPIDER-CG) algorithm with only <span>\\(\\mathcal {O}(1)\\)</span> samples per iteration. Numerical experiments on both simulated and real data demonstrate the efficiency of the proposed methods.\n</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 5","pages":"1335 - 1364"},"PeriodicalIF":0.9000,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-023-01195-z.pdf","citationCount":"0","resultStr":"{\"title\":\"Stochastic Variance Reduction for DR-Submodular Maximization\",\"authors\":\"Yuefang Lian,&nbsp;Donglei Du,&nbsp;Xiao Wang,&nbsp;Dachuan Xu,&nbsp;Yang Zhou\",\"doi\":\"10.1007/s00453-023-01195-z\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Stochastic optimization has experienced significant growth in recent decades, with the increasing prevalence of variance reduction techniques in stochastic optimization algorithms to enhance computational efficiency. In this paper, we introduce two projection-free stochastic approximation algorithms for maximizing diminishing return (DR) submodular functions over convex constraints, building upon the Stochastic Path Integrated Differential EstimatoR (SPIDER) and its variants. Firstly, we present a SPIDER Continuous Greedy (SPIDER-CG) algorithm for the monotone case that guarantees a <span>\\\\((1-e^{-1})\\\\text {OPT}-\\\\varepsilon \\\\)</span> approximation after <span>\\\\(\\\\mathcal {O}(\\\\varepsilon ^{-1})\\\\)</span> iterations and <span>\\\\(\\\\mathcal {O}(\\\\varepsilon ^{-2})\\\\)</span> stochastic gradient computations under the mean-squared smoothness assumption. 
For the non-monotone case, we develop a SPIDER Frank–Wolfe (SPIDER-FW) algorithm that guarantees a <span>\\\\(\\\\frac{1}{4}(1-\\\\min _{x\\\\in \\\\mathcal {C}}{\\\\Vert x\\\\Vert _{\\\\infty }})\\\\text {OPT}-\\\\varepsilon \\\\)</span> approximation with <span>\\\\(\\\\mathcal {O}(\\\\varepsilon ^{-1})\\\\)</span> iterations and <span>\\\\(\\\\mathcal {O}(\\\\varepsilon ^{-2})\\\\)</span> stochastic gradient estimates. To address the practical challenge associated with a large number of samples per iteration, we introduce a modified gradient estimator based on SPIDER, leading to a Hybrid SPIDER-FW (Hybrid SPIDER-CG) algorithm, which achieves the same approximation guarantee as SPIDER-FW (SPIDER-CG) algorithm with only <span>\\\\(\\\\mathcal {O}(1)\\\\)</span> samples per iteration. Numerical experiments on both simulated and real data demonstrate the efficiency of the proposed methods.\\n</p></div>\",\"PeriodicalId\":50824,\"journal\":{\"name\":\"Algorithmica\",\"volume\":\"86 5\",\"pages\":\"1335 - 1364\"},\"PeriodicalIF\":0.9000,\"publicationDate\":\"2023-12-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s00453-023-01195-z.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Algorithmica\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s00453-023-01195-z\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Algorithmica","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s00453-023-01195-z","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

Stochastic optimization has experienced significant growth in recent decades, with the increasing prevalence of variance reduction techniques in stochastic optimization algorithms to enhance computational efficiency. In this paper, we introduce two projection-free stochastic approximation algorithms for maximizing diminishing return (DR) submodular functions over convex constraints, building upon the Stochastic Path Integrated Differential EstimatoR (SPIDER) and its variants. Firstly, we present a SPIDER Continuous Greedy (SPIDER-CG) algorithm for the monotone case that guarantees a \((1-e^{-1})\text{OPT}-\varepsilon\) approximation after \(\mathcal{O}(\varepsilon^{-1})\) iterations and \(\mathcal{O}(\varepsilon^{-2})\) stochastic gradient computations under the mean-squared smoothness assumption. For the non-monotone case, we develop a SPIDER Frank–Wolfe (SPIDER-FW) algorithm that guarantees a \(\frac{1}{4}(1-\min_{x\in \mathcal{C}}{\Vert x\Vert_{\infty}})\text{OPT}-\varepsilon\) approximation with \(\mathcal{O}(\varepsilon^{-1})\) iterations and \(\mathcal{O}(\varepsilon^{-2})\) stochastic gradient estimates. To address the practical challenge associated with a large number of samples per iteration, we introduce a modified gradient estimator based on SPIDER, leading to a Hybrid SPIDER-FW (Hybrid SPIDER-CG) algorithm, which achieves the same approximation guarantee as the SPIDER-FW (SPIDER-CG) algorithm with only \(\mathcal{O}(1)\) samples per iteration. Numerical experiments on both simulated and real data demonstrate the efficiency of the proposed methods.
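
To make the variance-reduction mechanism concrete, the sketch below shows how a SPIDER-style recursive gradient estimate can drive continuous-greedy (Frank–Wolfe-type) updates on a toy monotone DR-submodular problem. It is a minimal illustration, not the paper's SPIDER-CG pseudocode: the separable objective \(f(x; w)=\sum_i w_i\log(1+x_i)\), the box constraint, the helper names (sample_scenarios, minibatch_grad, lmo_box, spider_cg), and all batch sizes and refresh schedules are assumptions made for illustration.

```python
# Minimal sketch of a SPIDER-style continuous-greedy loop for monotone
# DR-submodular maximization. NOT the paper's SPIDER-CG pseudocode: the toy
# objective, box constraint, and batch-size schedule are placeholders.
import numpy as np

rng = np.random.default_rng(0)
d = 20  # dimension of the decision variable

def sample_scenarios(batch_size):
    """Draw random scenario weights w > 0 for the toy stochastic objective."""
    return rng.uniform(0.5, 1.5, size=(batch_size, d))

def minibatch_grad(x, w):
    """Gradient of f(x; w) = sum_i w_i * log(1 + x_i), averaged over the minibatch.
    This separable concave function is monotone and DR-submodular."""
    return (w / (1.0 + x)).mean(axis=0)

def lmo_box(g):
    """Linear maximization oracle over the box {0 <= v <= 1}: argmax_v <g, v>."""
    return (g > 0).astype(float)

def spider_cg(T=100, big_batch=512, small_batch=8, refresh_every=10):
    """Continuous-greedy updates driven by a SPIDER-style recursive gradient estimate."""
    x = np.zeros(d)
    x_prev = None
    v_est = None
    for t in range(T):
        if t % refresh_every == 0:
            # Periodically restart the estimator with a large fresh batch.
            v_est = minibatch_grad(x, sample_scenarios(big_batch))
        else:
            # SPIDER recursion: reuse the SAME small batch at the current and
            # previous iterates, so the correction term has small variance
            # whenever consecutive iterates are close (mean-squared smoothness).
            w = sample_scenarios(small_batch)
            v_est = v_est + minibatch_grad(x, w) - minibatch_grad(x_prev, w)
        x_prev = x.copy()
        # Continuous-greedy step: move 1/T of the way toward the LMO vertex,
        # so x stays inside the box after T iterations.
        x = x + (1.0 / T) * lmo_box(v_est)
    return x

x_hat = spider_cg()
print("objective estimate at the final point:", np.log1p(x_hat).sum())
```

The step that carries the variance reduction is the recursion v_t = v_{t-1} + grad f(x_t; w_t) - grad f(x_{t-1}; w_t) with the same minibatch w_t evaluated at both iterates; this is exactly where the mean-squared smoothness assumption makes the correction term small when consecutive iterates are close. The hybrid \(\mathcal{O}(1)\)-sample estimators mentioned in the abstract modify this recursion so that no periodic large-batch restart is needed, but their precise form should be taken from the full paper.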

Source Journal
Algorithmica · Engineering & Technology, Computer Science: Software Engineering
CiteScore: 2.80
Self-citation rate: 9.10%
Articles published: 158
Review time: 12 months
Journal description: Algorithmica is an international journal which publishes theoretical papers on algorithms that address problems arising in practical areas, and experimental papers of general appeal for practical importance or techniques. The development of algorithms is an integral part of computer science. The increasing complexity and scope of computer applications makes the design of efficient algorithms essential. Algorithmica covers algorithms in applied areas such as: VLSI, distributed computing, parallel processing, automated design, robotics, graphics, database design, software tools, as well as algorithms in fundamental areas such as sorting, searching, data structures, computational geometry, and linear programming. In addition, the journal features two special sections: Application Experience, presenting findings obtained from applications of theoretical results to practical situations, and Problems, offering short papers presenting problems on selected topics of computer science.
Latest articles in this journal
Energy Constrained Depth First Search
Recovering the Original Simplicity: Succinct and Exact Quantum Algorithm for the Welded Tree Problem
Permutation-constrained Common String Partitions with Applications
Reachability of Fair Allocations via Sequential Exchanges
On Flipping the Fréchet Distance