New Aspects of Black Box Conditional Gradient: Variance Reduction and One Point Feedback

Andrey Veprikov, Alexander Bogdanov, Vladislav Minashkin, Alexander Beznosikov

arXiv - MATH - Optimization and Control, published 2024-09-16. DOI: https://doi.org/arxiv-2409.10442

Citations: 0
Abstract
This paper deals with black-box optimization, a setting in which the gradient of the objective function is not available and must therefore be estimated. We propose a new type of gradient approximation, JAGUAR, which memorizes information from previous iterations and requires only $\mathcal{O}(1)$ oracle calls per iteration. We implement this approximation in the Frank-Wolfe and Gradient Descent algorithms and prove convergence of these methods with different types of zero-order oracle. Our theoretical analysis covers the non-convex, convex, and PL-condition cases. We also consider the stochastic minimization problem on a set $Q$ with noise in the zero-order oracle; this setting has received little attention in the literature, but we prove that the JAGUAR approximation is robust not only for deterministic minimization problems but also in the stochastic case. We perform experiments comparing our gradient estimator with those already known in the literature, and the results confirm that our methods dominate.
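The abstract does not spell out the JAGUAR update rule, but the two ingredients it names, a memorized gradient estimate refreshed with $\mathcal{O}(1)$ zero-order oracle calls per iteration, and its use inside Frank-Wolfe, can be illustrated with a hedged sketch. In the sketch below, one randomly chosen coordinate of a stored gradient vector is re-estimated each step via a central finite difference (two function evaluations), and the resulting estimate drives a standard Frank-Wolfe step over the probability simplex. The function names (`jaguar_step`, `frank_wolfe_simplex`), the random-coordinate rule, and the smoothing parameter `tau` are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def jaguar_step(f, x, g_mem, rng, tau=1e-4):
    """One memorized zero-order gradient update (a sketch, not the paper's rule).

    Picks a random coordinate i, estimates the partial derivative with a
    central finite difference (2 zero-order oracle calls), and overwrites
    only coordinate i of the memory vector; all other coordinates are reused.
    """
    d = x.size
    i = rng.integers(d)
    e = np.zeros(d)
    e[i] = 1.0
    g_new = g_mem.copy()
    g_new[i] = (f(x + tau * e) - f(x - tau * e)) / (2 * tau)
    return g_new

def frank_wolfe_simplex(f, x0, steps=2000, tau=1e-4, seed=0):
    """Frank-Wolfe over the probability simplex driven by the memorized estimator."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    g = np.zeros_like(x)            # memorized gradient estimate
    for k in range(steps):
        g = jaguar_step(f, x, g, rng, tau)
        # Linear minimization oracle on the simplex: the vertex e_{argmin g}
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        gamma = 2.0 / (k + 2)       # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * s
    return x

# Toy usage: minimize a quadratic whose minimizer is the uniform distribution
d = 5
target = np.full(d, 1.0 / d)
x_hat = frank_wolfe_simplex(lambda z: np.sum((z - target) ** 2),
                            np.eye(d)[0])
```

Because each iteration touches only one coordinate, the per-step oracle cost stays constant in the dimension; the stale coordinates are exactly the "memory" that distinguishes this family of estimators from full finite-difference schemes, which would need $\mathcal{O}(d)$ evaluations per step.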