{"title":"FedGR:安全联邦学习的无损混淆方法","authors":"Wenjing Qin, Li Yang, Jianfeng Ma","doi":"10.1109/GLOBECOM46510.2021.9686029","DOIUrl":null,"url":null,"abstract":"Federated learning is a promising new technology in the field of artificial intelligence. However, the unprotected model gradient parameters in federated learning may reveal sensitive participants information. To address this problem, we present a secure federated learning framework called FedGR. We use Paillier homomorphic encryption to design a new gradient security replacement algorithm, which eliminates the connections between gradient parameters and user sensitive data. In addition, we revisit the previous work by Aono and Hayashi(IEEE TIFS 2017) and show that, with their method, the user's local computing burden is too heavy. We then proved FedGR has the following characteristics to solve this problem: 1) The system does not leak any information to the server. 2) Compared with that of ordinary deep learning systems, the accuracy of federated training results yielded by our system remains unchanged. 3)The proposed approach greatly reduces the user's local computing overhead.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"FedGR: A Lossless-Obfuscation Approach for Secure Federated Learning\",\"authors\":\"Wenjing Qin, Li Yang, Jianfeng Ma\",\"doi\":\"10.1109/GLOBECOM46510.2021.9686029\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning is a promising new technology in the field of artificial intelligence. However, the unprotected model gradient parameters in federated learning may reveal sensitive participants information. To address this problem, we present a secure federated learning framework called FedGR. We use Paillier homomorphic encryption to design a new gradient security replacement algorithm, which eliminates the connections between gradient parameters and user sensitive data. In addition, we revisit the previous work by Aono and Hayashi(IEEE TIFS 2017) and show that, with their method, the user's local computing burden is too heavy. We then proved FedGR has the following characteristics to solve this problem: 1) The system does not leak any information to the server. 2) Compared with that of ordinary deep learning systems, the accuracy of federated training results yielded by our system remains unchanged. 
3)The proposed approach greatly reduces the user's local computing overhead.\",\"PeriodicalId\":200641,\"journal\":{\"name\":\"2021 IEEE Global Communications Conference (GLOBECOM)\",\"volume\":\"2 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Global Communications Conference (GLOBECOM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/GLOBECOM46510.2021.9686029\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Global Communications Conference (GLOBECOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GLOBECOM46510.2021.9686029","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
FedGR: A Lossless-Obfuscation Approach for Secure Federated Learning
Federated learning is a promising new technology in the field of artificial intelligence. However, unprotected model gradient parameters in federated learning may reveal sensitive participant information. To address this problem, we present a secure federated learning framework called FedGR. We use Paillier homomorphic encryption to design a new gradient security replacement algorithm, which eliminates the connection between gradient parameters and user-sensitive data. In addition, we revisit the previous work by Aono and Hayashi (IEEE TIFS 2017) and show that their method imposes too heavy a local computing burden on users. We then show that FedGR has the following properties that address this problem: 1) the system does not leak any information to the server; 2) the accuracy of the federated training results matches that of an ordinary deep learning system; and 3) the proposed approach greatly reduces the user's local computing overhead.
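The abstract names Paillier homomorphic encryption as the cryptographic building block. The sketch below is a minimal, deliberately insecure Python toy that only illustrates the additive homomorphism that lets a server aggregate encrypted gradients without decrypting them; it is not the FedGR gradient-replacement algorithm (which the abstract does not detail), and the demo primes, the SCALE constant, and the helper names are illustrative assumptions only.

```python
# Toy illustration of Paillier additive homomorphism for gradient aggregation.
# NOT the FedGR algorithm from the paper; the key size here is insecure and
# only serves to show why encrypted gradients can be summed without decryption.
import math
import random

# Tiny demo primes -- real deployments use moduli of 2048 bits or more.
P, Q = 104_729, 104_723                      # both prime (demo only)
N = P * Q
N_SQ = N * N
G = N + 1                                    # standard simplified generator
LAM = math.lcm(P - 1, Q - 1)                 # Carmichael's lambda(N)
MU = pow(LAM, -1, N)                         # modular inverse of lambda mod N

SCALE = 10_000                               # fixed-point scale for float gradients


def encrypt(m: int) -> int:
    """Paillier encryption of an integer plaintext m in Z_N."""
    r = random.randrange(1, N)               # assume gcd(r, N) == 1 for the demo
    return (pow(G, m, N_SQ) * pow(r, N, N_SQ)) % N_SQ


def decrypt(c: int) -> int:
    """Paillier decryption back to an integer in Z_N."""
    l = (pow(c, LAM, N_SQ) - 1) // N         # L(x) = (x - 1) / N
    return (l * MU) % N


def encode(x: float) -> int:
    """Fixed-point encode a (possibly negative) gradient value."""
    return int(round(x * SCALE)) % N


def decode(m: int) -> float:
    """Map the modular result back to a signed float."""
    if m > N // 2:
        m -= N
    return m / SCALE


if __name__ == "__main__":
    # Two clients encrypt their local gradients; the server adds ciphertexts.
    g_client1, g_client2 = 0.1234, -0.0567
    c1 = encrypt(encode(g_client1))
    c2 = encrypt(encode(g_client2))

    c_sum = (c1 * c2) % N_SQ                 # homomorphic addition on the server
    print(decode(decrypt(c_sum)))            # ~0.0667 == g_client1 + g_client2
```

The key property shown is that multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so an aggregator can combine client updates while seeing only ciphertexts; any scheme built on Paillier, including the one the abstract describes, relies on this behavior.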