Ekaterina Borodich, Vladislav Tominin, Yaroslav Tominin, Dmitry Kovalev, Alexander Gasnikov, Pavel Dvurechensky
{"title":"鞍点问题的加速方差缩减方法","authors":"Ekaterina Borodich , Vladislav Tominin , Yaroslav Tominin , Dmitry Kovalev , Alexander Gasnikov , Pavel Dvurechensky","doi":"10.1016/j.ejco.2022.100048","DOIUrl":null,"url":null,"abstract":"<div><p>We consider composite minimax optimization problems where the goal is to find a saddle-point of a large sum of non-bilinear objective functions augmented by simple composite regularizers for the primal and dual variables. For such problems, under the average-smoothness assumption, we propose accelerated stochastic variance-reduced algorithms with optimal up to logarithmic factors complexity bounds. In particular, we consider strongly-convex-strongly-concave, convex-strongly-concave, and convex-concave objectives. To the best of our knowledge, these are the first nearly-optimal algorithms for this setting.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"10 ","pages":"Article 100048"},"PeriodicalIF":2.6000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2192440622000247/pdfft?md5=41248ad222d5ad361783568adf860824&pid=1-s2.0-S2192440622000247-main.pdf","citationCount":"1","resultStr":"{\"title\":\"Accelerated variance-reduced methods for saddle-point problems\",\"authors\":\"Ekaterina Borodich , Vladislav Tominin , Yaroslav Tominin , Dmitry Kovalev , Alexander Gasnikov , Pavel Dvurechensky\",\"doi\":\"10.1016/j.ejco.2022.100048\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>We consider composite minimax optimization problems where the goal is to find a saddle-point of a large sum of non-bilinear objective functions augmented by simple composite regularizers for the primal and dual variables. For such problems, under the average-smoothness assumption, we propose accelerated stochastic variance-reduced algorithms with optimal up to logarithmic factors complexity bounds. In particular, we consider strongly-convex-strongly-concave, convex-strongly-concave, and convex-concave objectives. 
To the best of our knowledge, these are the first nearly-optimal algorithms for this setting.</p></div>\",\"PeriodicalId\":51880,\"journal\":{\"name\":\"EURO Journal on Computational Optimization\",\"volume\":\"10 \",\"pages\":\"Article 100048\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2192440622000247/pdfft?md5=41248ad222d5ad361783568adf860824&pid=1-s2.0-S2192440622000247-main.pdf\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"EURO Journal on Computational Optimization\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2192440622000247\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"OPERATIONS RESEARCH & MANAGEMENT SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"EURO Journal on Computational Optimization","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2192440622000247","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPERATIONS RESEARCH & MANAGEMENT SCIENCE","Score":null,"Total":0}
Accelerated variance-reduced methods for saddle-point problems
We consider composite minimax optimization problems where the goal is to find a saddle point of a large sum of non-bilinear objective functions augmented by simple composite regularizers for the primal and dual variables. For such problems, under the average-smoothness assumption, we propose accelerated stochastic variance-reduced algorithms with complexity bounds that are optimal up to logarithmic factors. In particular, we consider strongly-convex-strongly-concave, convex-strongly-concave, and convex-concave objectives. To the best of our knowledge, these are the first nearly-optimal algorithms for this setting.
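In the notation commonly used for this setting (a sketch of the problem formulation; the symbols $m$, $f_i$, $g$, and $h$ below are illustrative and not taken verbatim from the paper), the composite saddle-point problem reads

\[
\min_{x \in \mathbb{R}^{d_x}} \; \max_{y \in \mathbb{R}^{d_y}} \;\; \frac{1}{m}\sum_{i=1}^{m} f_i(x, y) \; + \; g(x) \; - \; h(y),
\]

where each $f_i$ is a smooth, possibly non-bilinear term, and $g$ and $h$ are simple composite regularizers for the primal variable $x$ and the dual variable $y$, respectively. Stochastic variance-reduced methods exploit this finite-sum structure by sampling individual terms $f_i$ rather than evaluating the full sum of gradients at every iteration, which keeps the per-iteration cost low when $m$ is large.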
Journal introduction:
The aim of this journal is to contribute to the many areas in which Operations Research and Computer Science are tightly connected with each other. More precisely, the common element in all contributions to this journal is the use of computers for the solution of optimization problems. Both methodological contributions and innovative applications are considered, but validation through convincing computational experiments is desirable. The journal publishes three types of articles: (i) research articles, (ii) tutorials, and (iii) surveys. A research article presents original methodological contributions. A tutorial provides an introduction to an advanced topic designed to ease the use of the relevant methodology. A survey provides a wide overview of a given subject by summarizing and organizing research results.