Stochastic Variance Reduction for DR-Submodular Maximization
Yuefang Lian, Donglei Du, Xiao Wang, Dachuan Xu, Yang Zhou
Algorithmica 86(5), 1335–1364 (2023). DOI: 10.1007/s00453-023-01195-z
https://link.springer.com/article/10.1007/s00453-023-01195-z
Abstract
Stochastic optimization has grown substantially in recent decades, with variance reduction techniques becoming increasingly prevalent in stochastic optimization algorithms as a way to improve computational efficiency. In this paper, we introduce two projection-free stochastic approximation algorithms for maximizing diminishing-return (DR) submodular functions over convex constraints, building upon the Stochastic Path Integrated Differential EstimatoR (SPIDER) and its variants. First, we present a SPIDER Continuous Greedy (SPIDER-CG) algorithm for the monotone case that guarantees a \((1-e^{-1})\text {OPT}-\varepsilon \) approximation after \(\mathcal {O}(\varepsilon ^{-1})\) iterations and \(\mathcal {O}(\varepsilon ^{-2})\) stochastic gradient computations under the mean-squared smoothness assumption. For the non-monotone case, we develop a SPIDER Frank–Wolfe (SPIDER-FW) algorithm that guarantees a \(\frac{1}{4}(1-\min _{x\in \mathcal {C}}{\Vert x\Vert _{\infty }})\text {OPT}-\varepsilon \) approximation with \(\mathcal {O}(\varepsilon ^{-1})\) iterations and \(\mathcal {O}(\varepsilon ^{-2})\) stochastic gradient estimates. To address the practical challenge of requiring a large number of samples per iteration, we introduce a modified gradient estimator based on SPIDER, leading to a Hybrid SPIDER-FW (Hybrid SPIDER-CG) algorithm that achieves the same approximation guarantee as the SPIDER-FW (SPIDER-CG) algorithm with only \(\mathcal {O}(1)\) samples per iteration. Numerical experiments on both simulated and real data demonstrate the efficiency of the proposed methods.
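The abstract stays at a high level; as a rough illustration of how a SPIDER-type recursive gradient estimator plugs into a continuous-greedy loop, here is a minimal Python sketch. Everything in it (the names grad_oracle, lmo, and sample, the epoch length, and the batch sizes) is an illustrative assumption, not the paper's exact SPIDER-CG algorithm, whose step sizes, batch schedule, and analysis appear in the full text.

```python
import numpy as np

def spider_cg_sketch(grad_oracle, lmo, sample, n, T=100, epoch_len=10,
                     n_big=1000, n_small=10):
    """Illustrative SPIDER-style continuous greedy loop (monotone case).

    grad_oracle(x, xi): stochastic gradient of F at x for one sample xi.
    lmo(g): solves max_{v in C} <v, g> over the convex constraint set C.
    sample(m): draws m i.i.d. samples xi.
    All names and the epoch/batch schedule are illustrative placeholders.
    """
    x = np.zeros(n)  # continuous greedy starts at the origin
    # Checkpoint: estimate the gradient with one large batch.
    v = np.mean([grad_oracle(x, xi) for xi in sample(n_big)], axis=0)
    for t in range(1, T + 1):
        x_new = x + lmo(v) / T  # continuous-greedy step of size 1/T
        if t % epoch_len == 0:
            # Periodic large-batch refresh, as in SPIDER.
            v = np.mean([grad_oracle(x_new, xi) for xi in sample(n_big)],
                        axis=0)
        else:
            # SPIDER recursion: correct the running estimate with gradient
            # *differences* evaluated on the same small batch at both
            # iterates -- this reuse is where the variance reduction
            # comes from.
            batch = sample(n_small)
            v = v + np.mean([grad_oracle(x_new, xi) - grad_oracle(x, xi)
                             for xi in batch], axis=0)
        x = x_new
    return x
```

As one concrete instance of the linear maximization oracle: over the hypothetical constraint set \(\{x \in [0,1]^n : \sum_i x_i \le k\}\), lmo(g) simply sets the k coordinates with the largest positive entries of g to 1 and the rest to 0.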
Journal Introduction
Algorithmica is an international journal that publishes theoretical papers on algorithms addressing problems arising in practical areas, as well as experimental papers of general appeal for their practical importance or techniques. The development of algorithms is an integral part of computer science, and the increasing complexity and scope of computer applications make the design of efficient algorithms essential.
Algorithmica covers algorithms in applied areas such as VLSI, distributed computing, parallel processing, automated design, robotics, graphics, database design, and software tools, as well as algorithms in fundamental areas such as sorting, searching, data structures, computational geometry, and linear programming.
In addition, the journal features two special sections: Application Experience, presenting findings obtained from applications of theoretical results to practical situations, and Problems, offering short papers presenting problems on selected topics of computer science.