{"title":"异构量化的联邦学习","authors":"Cong Shen, Shengbo Chen","doi":"10.1109/SEC50012.2020.00060","DOIUrl":null,"url":null,"abstract":"Quantization of local model updates before uploading to the parameter server is a primary solution to reduce the communication overhead in federated learning. However, prior literature always assumes homogeneous quantization for all clients, while in reality devices are heterogeneous and they support different levels of quantization precision. This heterogeneity of quantization poses a new challenge: fine-quantized model updates are more accurate than coarse-quantized ones, and how to optimally aggregate them at the server is an unsolved problem. In this paper, we propose FEDHQ: Federated Learning with Heterogeneous Quantization. In particular, FEDHQ allocates different weights to clients by minimizing the convergence rate upper bound, which is a function of quantization errors of all clients. We derive the convergence rate of FEDHQ under strongly convex loss functions. To further accelerate the convergence, the instantaneous quantization error is computed and piggybacked when each client uploads the local model update, and the server dynamically calculates the weight accordingly for the current round. Numerical experiments demonstrate the performance advantages of FEDHQ+ over conventional FEDAVG with standard equal weights and a heuristic scheme which assigns weights linearly proportional to the clients’ quantization precision.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Federated Learning with Heterogeneous Quantization\",\"authors\":\"Cong Shen, Shengbo Chen\",\"doi\":\"10.1109/SEC50012.2020.00060\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Quantization of local model updates before uploading to the parameter server is a primary solution to reduce the communication overhead in federated learning. However, prior literature always assumes homogeneous quantization for all clients, while in reality devices are heterogeneous and they support different levels of quantization precision. This heterogeneity of quantization poses a new challenge: fine-quantized model updates are more accurate than coarse-quantized ones, and how to optimally aggregate them at the server is an unsolved problem. In this paper, we propose FEDHQ: Federated Learning with Heterogeneous Quantization. In particular, FEDHQ allocates different weights to clients by minimizing the convergence rate upper bound, which is a function of quantization errors of all clients. We derive the convergence rate of FEDHQ under strongly convex loss functions. To further accelerate the convergence, the instantaneous quantization error is computed and piggybacked when each client uploads the local model update, and the server dynamically calculates the weight accordingly for the current round. 
Numerical experiments demonstrate the performance advantages of FEDHQ+ over conventional FEDAVG with standard equal weights and a heuristic scheme which assigns weights linearly proportional to the clients’ quantization precision.\",\"PeriodicalId\":375577,\"journal\":{\"name\":\"2020 IEEE/ACM Symposium on Edge Computing (SEC)\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE/ACM Symposium on Edge Computing (SEC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SEC50012.2020.00060\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SEC50012.2020.00060","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Federated Learning with Heterogeneous Quantization
Abstract: Quantizing local model updates before uploading them to the parameter server is a primary approach to reducing the communication overhead in federated learning. However, prior work typically assumes homogeneous quantization across all clients, whereas real devices are heterogeneous and support different levels of quantization precision. This heterogeneity poses a new challenge: finely quantized model updates are more accurate than coarsely quantized ones, and how to optimally aggregate them at the server is an open problem. In this paper, we propose FEDHQ: Federated Learning with Heterogeneous Quantization. In particular, FEDHQ assigns different weights to clients by minimizing an upper bound on the convergence rate, which is a function of the quantization errors of all clients. We derive the convergence rate of FEDHQ under strongly convex loss functions. To further accelerate convergence, each client computes its instantaneous quantization error and piggybacks it on the uploaded local model update, and the server dynamically calculates the corresponding weight for the current round; we call this enhanced scheme FEDHQ+. Numerical experiments demonstrate the performance advantages of FEDHQ+ over conventional FEDAVG with standard equal weights, and over a heuristic scheme that assigns weights linearly proportional to the clients' quantization precision.
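To make the mechanism described in the abstract concrete, the sketch below quantizes a model update at heterogeneous bit widths and aggregates the results with error-dependent weights. This is a minimal sketch, not the paper's algorithm: the function names, the stochastic uniform quantizer, and the inverse-error weight rule are assumptions made for illustration; FEDHQ derives its weights by minimizing the convergence-rate upper bound, whose exact form is not given in the abstract.

```python
import numpy as np

def stochastic_quantize(x, bits):
    """Stochastic uniform quantization of a vector x to the given bit width.

    Returns the quantized vector and its instantaneous quantization error
    (mean squared deviation from x), which a client would piggyback on its
    upload in the FEDHQ+ setting described in the abstract.
    """
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    # Map onto the quantization grid and round stochastically (unbiased).
    t = (x - lo) / scale
    floor = np.floor(t)
    q = floor + (np.random.rand(*x.shape) < (t - floor))
    x_q = lo + q * scale
    err = float(np.mean((x_q - x) ** 2))
    return x_q, err

def aggregate(updates, errors, eps=1e-12):
    """Weighted aggregation of quantized client updates.

    Hypothetical weight rule: weights inversely proportional to each client's
    reported quantization error, so finer-quantized (more accurate) updates
    count more. The actual FEDHQ weights come from minimizing a
    convergence-rate upper bound and may differ from this rule.
    """
    w = 1.0 / (np.array(errors) + eps)
    w /= w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))

# Toy round: three clients with heterogeneous precision (2, 4 and 8 bits)
# quantizing the same update, to illustrate the differing error levels.
true_update = np.random.randn(1000)
bits_per_client = [2, 4, 8]
updates, errors = zip(*(stochastic_quantize(true_update, b) for b in bits_per_client))
print("per-client MSE:", [round(e, 5) for e in errors])
agg = aggregate(updates, errors)
print("aggregate MSE :", round(float(np.mean((agg - true_update) ** 2)), 5))
```

Under this toy weighting, the aggregate typically tracks the unquantized update more closely than an equal-weight average would, which is the intuition behind favoring finely quantized clients.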