{"title":"Toward Secure Weighted Aggregation for Privacy-Preserving Federated Learning","authors":"Yunlong He;Jia Yu","doi":"10.1109/TIFS.2025.3550787","DOIUrl":null,"url":null,"abstract":"Privacy-preserving federated learning can protect the privacy of model gradients/parameters in the model aggregation phase. Most existing schemes only consider the scenario where user models have the same weight in model aggregation. However, users often hold different numbers of training samples in practice. This makes the model convergence speed of existing schemes very slow. To solve this problem, we propose a privacy-preserving federated learning scheme with secure weighted aggregation. It is able to allocate appropriate user weights based on the user’s local data size with privacy protection. In addition, it is impossible for the cloud server to obtain the user’s original model parameters and local data size in the proposed scheme. Specifically, we use Lagrange interpolation to combine the model parameters and local data size into a set of ciphertexts. The cloud server can smoothly perform weighted aggregation based on these ciphertexts. Leveraging the Chinese Remainder Theorem, we convert the local data size into a series of verification values. This enables the user to verify the correctness of results returned from the server. We provide a theoretical analysis for the proposed scheme, demonstrating its effectiveness, privacy, and verifiability. We perform extensive experiments on the MNIST dataset. 
Experimental results demonstrate its model performance, computation overhead, and communication overhead.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3475-3488"},"PeriodicalIF":8.0000,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10924274/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0
Abstract
Privacy-preserving federated learning protects the privacy of model gradients/parameters during the model aggregation phase. Most existing schemes only consider the scenario where all user models carry the same weight in aggregation. In practice, however, users often hold different numbers of training samples, which makes existing schemes converge slowly. To solve this problem, we propose a privacy-preserving federated learning scheme with secure weighted aggregation. It allocates appropriate weights to users based on their local data sizes while preserving privacy. Moreover, the cloud server cannot obtain a user's original model parameters or local data size in the proposed scheme. Specifically, we use Lagrange interpolation to combine the model parameters and local data size into a set of ciphertexts, from which the cloud server can directly perform weighted aggregation. Leveraging the Chinese Remainder Theorem, we convert the local data size into a series of verification values, enabling each user to verify the correctness of the results returned by the server. We provide a theoretical analysis of the proposed scheme, demonstrating its effectiveness, privacy, and verifiability. We perform extensive experiments on the MNIST dataset, and the results demonstrate its practicality in terms of model performance, computation overhead, and communication overhead.
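The core ideas in the abstract can be illustrated with a minimal sketch: Shamir secret sharing (which reconstructs secrets via Lagrange interpolation) lets a server add ciphertexts so that only the weighted sum sum(n_i * w_i) and the total data size sum(n_i) are revealed, never individual values; residues modulo coprime integers give a CRT-style consistency check on the aggregate size. This is an illustrative stand-in under simplifying assumptions (scalar parameters, honest-but-curious server), not the paper's actual protocol; all names, the field prime, and the moduli are hypothetical.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime field for the sketch


def share(secret, t, parties):
    """Shamir sharing: evaluate a random degree-(t-1) polynomial with
    constant term `secret` at x = 1..parties."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for x in range(1, parties + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total


# Hypothetical users, each with one scalar parameter and a local data size.
data_sizes = [100, 300, 600]
params = [4, 2, 5]
weighted = [n * w for n, w in zip(data_sizes, params)]  # n_i * w_i

# Each user secret-shares its weighted parameter and its data size; the
# server adds shares component-wise (Shamir sharing is additively homomorphic).
t, parties = 3, 3
param_shares = [share(v, t, parties) for v in weighted]
size_shares = [share(n, t, parties) for n in data_sizes]
agg_param = [(x, sum(s[k][1] for s in param_shares) % PRIME)
             for k, (x, _) in enumerate(param_shares[0])]
agg_size = [(x, sum(s[k][1] for s in size_shares) % PRIME)
            for k, (x, _) in enumerate(size_shares[0])]

total_weighted = reconstruct(agg_param)     # sum of n_i * w_i
total_size = reconstruct(agg_size)          # sum of n_i
global_param = total_weighted / total_size  # FedAvg-style weighted mean

# CRT-style verification sketch: users publish residues of their data sizes
# modulo pairwise-coprime moduli; the reconstructed total must agree with
# the summed residues modulo every modulus.
moduli = [251, 257, 263]
residues = [sum(n % m for n in data_sizes) % m for m in moduli]
assert all(total_size % m == r for m, r in zip(moduli, residues))
print(global_param)
```

The server here only ever sees sums of random-looking shares; any t = 3 aggregated shares reconstruct the totals, but no subset reveals an individual user's n_i or w_i, which mirrors the privacy goal the abstract states.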
Journal Introduction:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.