{"title":"A simplified convergence theory for Byzantine resilient stochastic gradient descent","authors":"Lindon Roberts , Edward Smyth","doi":"10.1016/j.ejco.2022.100038","DOIUrl":null,"url":null,"abstract":"<div><p>In distributed learning, a central server trains a model according to updates provided by nodes holding local data samples. In the presence of one or more malicious servers sending incorrect information (a Byzantine adversary), standard algorithms for model training such as stochastic gradient descent (SGD) fail to converge. In this paper, we present a simplified convergence theory for the generic Byzantine Resilient SGD method originally proposed by Blanchard et al. (2017) <span>[3]</span>. Compared to the existing analysis, we shown convergence to a stationary point in expectation under standard assumptions on the (possibly nonconvex) objective function and flexible assumptions on the stochastic gradients.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"10 ","pages":"Article 100038"},"PeriodicalIF":2.6000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2192440622000144/pdfft?md5=bbd4aa4ea37b8349470f121ce86051dd&pid=1-s2.0-S2192440622000144-main.pdf","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"EURO Journal on Computational Optimization","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2192440622000144","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPERATIONS RESEARCH & MANAGEMENT SCIENCE","Score":null,"Total":0}
Citations: 1
Abstract
In distributed learning, a central server trains a model according to updates provided by nodes holding local data samples. In the presence of one or more malicious servers sending incorrect information (a Byzantine adversary), standard algorithms for model training such as stochastic gradient descent (SGD) fail to converge. In this paper, we present a simplified convergence theory for the generic Byzantine Resilient SGD method originally proposed by Blanchard et al. (2017) [3]. Compared to the existing analysis, we show convergence to a stationary point in expectation under standard assumptions on the (possibly nonconvex) objective function and flexible assumptions on the stochastic gradients.
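To illustrate the setting the abstract describes, the following is a minimal sketch, not the paper's exact algorithm: a server runs SGD on a toy quadratic, aggregating worker gradients with a coordinate-wise median rather than a plain mean so that a minority of Byzantine workers cannot derail the update. The aggregator choice, worker counts, and all names (`stochastic_grad`, `byzantine_grad`, `n_byzantine`, step size, etc.) are illustrative assumptions; Blanchard et al. (2017) use the Krum aggregation rule instead.

```python
# Hedged sketch of generic Byzantine resilient SGD on a toy quadratic.
# Assumptions: coordinate-wise median stands in for a robust aggregator
# (the cited paper analyses Krum); all constants below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy objective f(x) = 0.5 * ||x - x_star||^2, minimised at x_star.
x_star = np.array([1.0, -2.0, 3.0])

def stochastic_grad(x):
    # Honest worker: true gradient plus zero-mean noise (a mini-batch proxy).
    return (x - x_star) + 0.1 * rng.standard_normal(x.shape)

def byzantine_grad(x):
    # Byzantine worker: an arbitrary, here adversarially flipped and scaled, vector.
    return -10.0 * (x - x_star) + 5.0 * rng.standard_normal(x.shape)

def aggregate_median(grads):
    # Coordinate-wise median: one simple robust aggregation rule; with a
    # plain mean, a single Byzantine worker could move the update arbitrarily.
    return np.median(np.stack(grads), axis=0)

n_workers, n_byzantine = 10, 3  # assumed: 3 of 10 workers are malicious
x = np.zeros(3)
for t in range(200):
    grads = [stochastic_grad(x) for _ in range(n_workers - n_byzantine)]
    grads += [byzantine_grad(x) for _ in range(n_byzantine)]
    x -= 0.1 * aggregate_median(grads)  # robust SGD step

print("final iterate:", x)  # should be close to x_star despite the adversaries
print("distance to optimum:", np.linalg.norm(x - x_star))
```

Running this, the iterate approaches `x_star` even though nearly a third of the reported gradients are adversarial, which is the qualitative behaviour the paper's convergence theory makes precise.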
About the journal:
The aim of this journal is to contribute to the many areas in which Operations Research and Computer Science are tightly connected with each other. More precisely, the common element in all contributions to this journal is the use of computers for the solution of optimization problems. Both methodological contributions and innovative applications are considered, but validation through convincing computational experiments is desirable. The journal publishes three types of articles: (i) research articles, (ii) tutorials, and (iii) surveys. A research article presents original methodological contributions. A tutorial provides an introduction to an advanced topic designed to ease the use of the relevant methodology. A survey provides a wide overview of a given subject by summarizing and organizing research results.