{"title":"Communication Reducing Quantization for Federated Learning with Local Differential Privacy Mechanism","authors":"Huixuan Zong, Qing Wang, Xiaofeng Liu, Yinchuan Li, Yunfeng Shao","doi":"10.1109/iccc52777.2021.9580315","DOIUrl":null,"url":null,"abstract":"As an emerging framework of distributed learning, federated learning (FL) has been a research focus since it enables clients to train deep learning models collaboratively without exposing their original data. Nevertheless, private information can still be inferred from the communicated model parameters by adversaries. In addition, due to the limited channel bandwidth, the model communication between clients and the server has become a serious bottleneck. In this paper, we consider an FL framework that utilizes local differential privacy, where the client adds artificial Gaussian noise to the local model update before aggregation. To reduce the communication overhead of the differential privacy-protected model, we propose the universal vector quantization for FL with local differential privacy mechanism, which quantizes the model parameters in a universal vector quantization approach. Furthermore, we analyze the privacy performance of the proposed approach and track the privacy loss by accounting the log moments. Experiments show that even if the quantization bit is relatively small, our method can achieve model compression without reducing the accuracy of the global model.","PeriodicalId":425118,"journal":{"name":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iccc52777.2021.9580315","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
As an emerging distributed learning framework, federated learning (FL) has become a research focus because it enables clients to train deep learning models collaboratively without exposing their original data. Nevertheless, adversaries can still infer private information from the communicated model parameters. In addition, because of limited channel bandwidth, the model communication between clients and the server has become a serious bottleneck. In this paper, we consider an FL framework that utilizes local differential privacy, in which each client adds artificial Gaussian noise to its local model update before aggregation. To reduce the communication overhead of the differentially private model, we propose universal vector quantization for FL with a local differential privacy mechanism, which compresses the model parameters with a universal vector quantizer. Furthermore, we analyze the privacy performance of the proposed approach and track the privacy loss by accounting for the log moments. Experiments show that even when the number of quantization bits is relatively small, our method achieves model compression without reducing the accuracy of the global model.
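To make the mechanism described in the abstract concrete, the following is a minimal sketch, not the authors' implementation, of one client-side step: the local model update is clipped and perturbed with Gaussian noise for local differential privacy, and then compressed with subtractively dithered uniform quantization, used here as a simple stand-in for the paper's universal vector quantizer. All function names and parameter values (clip_norm, sigma, num_bits) are illustrative assumptions.

```python
import numpy as np

def ldp_gaussian_noise(update, clip_norm, sigma, rng):
    """Clip the local update to L2 norm `clip_norm` and add Gaussian noise
    with standard deviation sigma * clip_norm (Gaussian-mechanism style LDP;
    the parameters here are illustrative, not the paper's settings)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

def dithered_uniform_quantize(update, num_bits, rng):
    """Subtractively dithered uniform quantization -- a simple stand-in for the
    universal vector quantizer in the paper; `num_bits` controls the rate."""
    levels = 2 ** num_bits
    lo, hi = update.min(), update.max()
    step = max((hi - lo) / (levels - 1), 1e-12)
    dither = rng.uniform(-0.5, 0.5, size=update.shape)  # shared via a common seed in practice
    q = np.round((update - lo) / step + dither)
    return q, lo, step, dither  # client transmits the integer codes q plus the scalars lo, step

def dequantize(q, lo, step, dither):
    """Server-side reconstruction: subtract the shared dither and rescale."""
    return (q - dither) * step + lo

# Toy round: one client produces a noisy, quantized update and the server reconstructs it.
rng = np.random.default_rng(0)
local_update = rng.standard_normal(1000).astype(np.float32)
noisy = ldp_gaussian_noise(local_update, clip_norm=1.0, sigma=1.2, rng=rng)
q, lo, step, dither = dithered_uniform_quantize(noisy, num_bits=4, rng=rng)
recovered = dequantize(q, lo, step, dither)
print("relative reconstruction error:", np.linalg.norm(recovered - noisy) / np.linalg.norm(noisy))
```

In a practical deployment the client and server would derive the dither from a shared pseudo-random seed, so only the integer codes and two scalars per tensor need to be transmitted, which is what yields the communication savings the abstract refers to.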