{"title":"Distribution-Aware Weight Compression for Federated Averaging Learning Over Wireless Edge Networks","authors":"Shuheng Lv, Shuaishuai Guo, Haixia Zhang","doi":"10.1109/iccc52777.2021.9580436","DOIUrl":null,"url":null,"abstract":"Recently, federated learning (FL) over wireless edge networks has aroused much research interest due to its merits in mitigating the privacy risks. On the basis of the standard FL, a federated averaging (FedAvg) learning algorithm emerges to reduce the communication rounds between the edge nodes and the central server. Even though the number of communication rounds of FedAvg learning is significantly reduced, exchanging all model parameters is still of heavy communication cost. To reduce the communication cost, this paper proposes a model compression method for FedAvg learning that adapts to the model weights distribution, namely distribution-aware weight compression (DAWC). In the proposed DAWC, we propose a parameter-oriented quantization algorithm (POQA) according to the distribution properties of different parameters of the model weights to iterate out the optimal quantization intervals, with the target of minimizing the mean square quantization errors. When the quantization is finished, Huffman coding is used to minimize the average code length. It is analyzed that FedAvg using the proposed DAWC converges at a fast speed. Experiment results show that DAWC exhibits the optimal performance in comparison with existing benchmarks.","PeriodicalId":425118,"journal":{"name":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iccc52777.2021.9580436","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Recently, federated learning (FL) over wireless edge networks has attracted considerable research interest owing to its ability to mitigate privacy risks. Building on standard FL, the federated averaging (FedAvg) algorithm reduces the number of communication rounds between the edge nodes and the central server. However, even with fewer rounds, exchanging all model parameters in each round still incurs a heavy communication cost. To reduce this cost, this paper proposes a model compression method for FedAvg learning that adapts to the distribution of the model weights, namely distribution-aware weight compression (DAWC). Within DAWC, a parameter-oriented quantization algorithm (POQA) exploits the distribution properties of the different parameter groups of the model weights and iteratively finds the optimal quantization intervals, with the goal of minimizing the mean square quantization error. After quantization, Huffman coding is applied to minimize the average code length. Analysis shows that FedAvg with the proposed DAWC converges quickly, and experimental results show that DAWC outperforms existing benchmarks.
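
The two stages described in the abstract, distribution-adaptive quantization that minimizes mean square quantization error followed by Huffman coding of the quantized weights, can be illustrated with a minimal sketch. The sketch below is not the authors' POQA; it uses a generic Lloyd-Max-style iteration as a stand-in for the interval-refinement step, and all function names, parameter choices (e.g. 16 quantization levels), and the synthetic weight data are illustrative assumptions.

```python
# Sketch of quantize-then-entropy-code weight compression (assumptions, not the paper's code):
# (1) a Lloyd-Max-style iteration that refines quantization levels to reduce
#     mean square quantization error, standing in for POQA, and
# (2) Huffman coding of the resulting quantization indices.

import heapq
from collections import Counter
import numpy as np


def lloyd_max_quantize(weights, num_levels=16, num_iters=50):
    """Iteratively refine quantization levels to reduce mean square error."""
    w = np.asarray(weights, dtype=np.float64).ravel()
    # Initialize representation levels uniformly over the weight range.
    levels = np.linspace(w.min(), w.max(), num_levels)
    for _ in range(num_iters):
        # Decision boundaries are the midpoints between adjacent levels.
        boundaries = (levels[:-1] + levels[1:]) / 2.0
        idx = np.digitize(w, boundaries)
        # Move each level to the centroid (mean) of the weights assigned to it.
        for k in range(num_levels):
            members = w[idx == k]
            if members.size > 0:
                levels[k] = members.mean()
    boundaries = (levels[:-1] + levels[1:]) / 2.0
    idx = np.digitize(w, boundaries)
    return idx, levels


def huffman_code(symbols):
    """Build a Huffman code book (symbol -> bit string) from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single symbol gets code "0"
        return {next(iter(freq)): "0"}
    heap = [[count, i, {sym: ""}] for i, (sym, count) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        lo_codes = {s: "0" + c for s, c in lo[2].items()}
        hi_codes = {s: "1" + c for s, c in hi[2].items()}
        heapq.heappush(heap, [lo[0] + hi[0], next_id, {**lo_codes, **hi_codes}])
        next_id += 1
    return heap[0][2]


# Example: compress one layer's weights before uploading in a FedAvg round.
weights = np.random.randn(10_000) * 0.05          # stand-in for trained weights
indices, levels = lloyd_max_quantize(weights, num_levels=16)
codebook = huffman_code(indices.tolist())
total_bits = sum(len(codebook[i]) for i in indices.tolist())
print(f"average code length: {total_bits / indices.size:.2f} bits/weight")
```

Because the refined levels concentrate where the weight distribution has most of its mass, the index distribution is non-uniform, which is exactly the situation in which Huffman coding shortens the average code length relative to fixed-length indices.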