Distributionally Robust Federated Learning for Differentially Private Data
Siping Shi, Chuang Hu, Dan Wang, Yifei Zhu, Zhu Han
2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS), July 2022
DOI: 10.1109/ICDCS54860.2022.00086 (https://doi.org/10.1109/ICDCS54860.2022.00086)
Abstract
Local differential privacy (LDP) is a prominent approach, widely adopted in federated learning (FL), for preserving the privacy of local training data. In theory, it provides a rigorous privacy guarantee at low computational cost. However, a strong privacy guarantee under local differential privacy can degrade the adversarial robustness of the learned global model, and to date very few studies have examined the interplay between LDP and the adversarial robustness of federated learning. In this paper, we observe that LDP adds random noise to the data to guarantee the privacy of local data, and thereby introduces uncertainty into the training dataset of federated learning; this uncertainty decreases robustness. To address the robustness problem caused by this uncertainty, we leverage the promising distributionally robust optimization (DRO) modeling approach. Specifically, we first formulate a distributionally robust and private federated learning problem (DRPri). While this formulation successfully captures the uncertainty introduced by LDP, we show that it is not easily tractable. We therefore transform the DRPri problem into an equivalent problem under a Wasserstein distance-based uncertainty set, which we name the DRPri-W problem. We then design a robust and private federated learning algorithm, RPFL, to solve the DRPri-W problem. We analyze RPFL and show theoretically that it satisfies differential privacy while providing a robustness guarantee. We evaluate RPFL by training classifiers on real-world datasets under a set of well-known attacks. Our experimental results show that RPFL can significantly improve the robustness of the global model trained on differentially private data, by up to 4.33 times.
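To make the modeling step concrete, a generic Wasserstein-DRO objective of the kind the DRPri-W formulation builds on can be written as below. This is the standard form from the DRO literature, not a formula quoted from the paper; the symbols (model parameters w, client weight p_i, empirical distribution P̂_i over client i's LDP-perturbed samples, Wasserstein radius ρ, and loss ℓ) are assumed notation.

    \min_{w} \; \sum_{i=1}^{N} p_i \,
      \sup_{Q_i \,:\, W(Q_i, \hat{P}_i) \le \rho}
      \mathbb{E}_{\xi \sim Q_i}\!\left[\, \ell(w; \xi) \,\right]

The inner supremum ranges over all distributions within Wasserstein distance ρ of each client's noisy empirical distribution, which is how the formulation hedges against the uncertainty that LDP noise injects into the data.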
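The abstract does not reproduce the RPFL algorithm itself, so the following is only a minimal sketch of the two ingredients it combines: Gaussian-mechanism LDP noise applied to local data, and a distributionally robust local update using the well-known Lagrangian relaxation of the Wasserstein inner maximization (as in principled adversarial training), wrapped in a FedAvg-style round. All function names, the sigma calibration, hyperparameters, and the averaging step are illustrative assumptions, not the paper's implementation.

    import copy
    import torch

    def ldp_gaussian(x, epsilon, delta, sensitivity=1.0):
        # Gaussian mechanism: noise scale calibrated to (epsilon, delta)-DP.
        # The sigma formula is the standard analytic bound (an assumption here).
        sigma = sensitivity * (2 * torch.log(torch.tensor(1.25 / delta))).sqrt() / epsilon
        return x + sigma * torch.randn_like(x)

    def robust_local_update(model, loss_fn, x, y,
                            gamma=1.0, inner_steps=5, inner_lr=0.1, lr=0.01):
        # Wasserstein-DRO step via the Lagrangian relaxation:
        # inner maximization over perturbed inputs x' of
        #   loss(w; x', y) - gamma * ||x' - x||^2,
        # then an outer SGD step on the resulting robust loss.
        x_adv = x.clone().detach().requires_grad_(True)
        for _ in range(inner_steps):
            obj = loss_fn(model(x_adv), y) - gamma * ((x_adv - x) ** 2).sum()
            grad, = torch.autograd.grad(obj, x_adv)
            with torch.no_grad():
                x_adv += inner_lr * grad       # ascend the penalized objective
            x_adv.requires_grad_(True)
        robust_loss = loss_fn(model(x_adv.detach()), y)
        model.zero_grad()
        robust_loss.backward()
        with torch.no_grad():
            for p in model.parameters():
                p -= lr * p.grad               # outer minimization step
        return model

    def federated_round(global_model, client_data, epsilon, delta, loss_fn):
        # One FedAvg-style round over LDP-perturbed client data.
        updated = []
        for x, y in client_data:
            local = copy.deepcopy(global_model)
            x_priv = ldp_gaussian(x, epsilon, delta)   # clients perturb data locally
            updated.append(robust_local_update(local, loss_fn, x_priv, y))
        with torch.no_grad():                          # server averages parameters
            for name, p in global_model.named_parameters():
                p.copy_(torch.stack(
                    [dict(m.named_parameters())[name] for m in updated]).mean(0))
        return global_model

Under this reading, privacy comes from the noise each client adds before training ever sees the data, while robustness comes from training against the worst-case distribution near the noisy one; the paper's contribution is showing both guarantees hold simultaneously for its specific algorithm.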