{"title":"An Optimization Framework for Federated Edge Learning","authors":"Yangchen Li, Ying Cui, Vincent K. N. Lau","doi":"10.1109/spawc51304.2022.9834013","DOIUrl":null,"url":null,"abstract":"This paper intends to optimize the overall implementing process of federated learning (FL) in practical edge computing systems. First, we present a general FL algorithm, namely GenQSGD+, whose parameters include the numbers of global and local iterations, mini-batch size, and step size sequence. Then, we analyze the convergence of GenQSGD+ with arbitrary algorithm parameters. Next, we optimize all the algorithm parameters of GenQSGD+ to minimize the energy cost under the constraints on the time cost, convergence error, and step size sequence. The resulting optimization problem is challenging due to its non-convexity and the presence of a dimension-varying vector variable and non-differentiable constraint functions. We transform the complicated problem into a more tractable nonconvex problem using the structural properties of the original problem and propose an iterative algorithm using general inner approximation (GIA) and complementary geometric programming (CGP) to obtain a KKT point. Finally, we numerically demonstrate remarkable gains of optimization-based GenQSGD+ over typical FL algorithms and the advancement of the proposed optimization framework for federated edge learning.","PeriodicalId":423807,"journal":{"name":"2022 IEEE 23rd International Workshop on Signal Processing Advances in Wireless Communication (SPAWC)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 23rd International Workshop on Signal Processing Advances in Wireless Communication (SPAWC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/spawc51304.2022.9834013","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This paper optimizes the overall implementation process of federated learning (FL) in practical edge computing systems. First, we present a general FL algorithm, namely GenQSGD+, whose parameters include the numbers of global and local iterations, the mini-batch size, and the step size sequence. Then, we analyze the convergence of GenQSGD+ with arbitrary algorithm parameters. Next, we optimize all the algorithm parameters of GenQSGD+ to minimize the energy cost under constraints on the time cost, convergence error, and step size sequence. The resulting optimization problem is challenging due to its non-convexity and the presence of a dimension-varying vector variable and non-differentiable constraint functions. We transform this complicated problem into a more tractable non-convex problem using the structural properties of the original problem and propose an iterative algorithm based on general inner approximation (GIA) and complementary geometric programming (CGP) to obtain a KKT point. Finally, we numerically demonstrate the remarkable gains of the optimization-based GenQSGD+ over typical FL algorithms and the advantages of the proposed optimization framework for federated edge learning.
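To make the role of the algorithm parameters concrete, the following is a minimal, illustrative sketch of a quantized-SGD-style federated loop parameterized by the quantities named in the abstract (number of global iterations, number of local iterations, mini-batch size, and step size sequence). It is not the paper's GenQSGD+ algorithm; the quantizer, the toy least-squares model, and all variable names (K0, K_local, B, gamma) are assumptions introduced only for illustration.

```python
# Hypothetical sketch of a generic quantized-SGD federated loop (not GenQSGD+).
# Assumed parameters: K0 (global iterations), K_local (local iterations),
# B (mini-batch size), gamma (step size sequence).
import numpy as np

rng = np.random.default_rng(0)

def quantize(v, s=16):
    """Placeholder stochastic uniform quantizer with s levels."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    scaled = np.abs(v) / norm * s
    lower = np.floor(scaled)
    levels = lower + (rng.random(v.shape) < scaled - lower)
    return np.sign(v) * norm * levels / s

def local_grad(theta, data, B):
    """Mini-batch gradient of a toy least-squares loss (placeholder model)."""
    X, y = data
    idx = rng.choice(len(y), size=B, replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ theta - yb) / B

def quantized_fed_sgd(datasets, d, K0, K_local, B, gamma):
    """Run K0 global rounds; each worker does K_local local SGD steps,
    then sends a quantized model update to the server for averaging."""
    theta = np.zeros(d)
    for k in range(K0):
        updates = []
        for data in datasets:                          # each edge worker
            theta_n = theta.copy()
            for _ in range(K_local):                   # local iterations
                theta_n -= gamma[k] * local_grad(theta_n, data, B)
            updates.append(quantize(theta_n - theta))  # quantized update
        theta += np.mean(updates, axis=0)              # server aggregation
    return theta

# Toy usage: 3 workers, dimension 5, decaying step size sequence.
datasets = [(rng.normal(size=(100, 5)), rng.normal(size=100)) for _ in range(3)]
gamma = [0.1 / (1 + k) for k in range(20)]
theta = quantized_fed_sgd(datasets, d=5, K0=20, K_local=5, B=10, gamma=gamma)
```

In the paper's framework, these parameters (K0, K_local, B, gamma) are the decision variables chosen by the proposed optimization, rather than fixed by hand as in this toy usage.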