{"title":"利用基于相似性的模型聚合实现安全、准确的个性化联合学习","authors":"Zhouyong Tan;Junqing Le;Fan Yang;Min Huang;Tao Xiang;Xiaofeng Liao","doi":"10.1109/TSUSC.2024.3403427","DOIUrl":null,"url":null,"abstract":"Personalized federated learning (PFL) combines client needs and data characteristics to train personalized models for local clients. However, the most of previous PFL schemes encountered challenges such as low model prediction accuracy and privacy leakage when applied to practical datasets. Besides, the existing privacy protection methods fail to achieve satisfactory results in terms of model prediction accuracy and security simultaneously. In this paper, we propose a <u>P</u>rivacy-preserving <u>P</u>ersonalized <u>F</u>ederated <u>L</u>earning under <u>S</u>ecure <u>M</u>ulti-party <u>C</u>omputation (SMC-PPFL), which can preserve privacy while obtaining a local personalized model with high prediction accuracy. In SMC-PPFL, noise perturbation is utilized to protect similarity computation, and secure multi-party computation is employed for model sub-aggregations. This combination ensures that clients’ privacy is preserved, and the computed values remain unbiased without compromising security. Then, we propose a weighted sub-aggregation strategy based on the similarity of clients and introduce a regularization term in the local training to improve prediction accuracy. Finally, we evaluate the performance of SMC-PPFL on three common datasets. The experimental results show that SMC-PPFL achieves <inline-formula><tex-math>$2\\%\\!\\sim\\! 15\\%$</tex-math></inline-formula> higher prediction accuracy compared to the previous PFL schemes. Besides, the security analysis also verifies that SMC-PPFL can resist model inversion attacks and membership inference attacks.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"10 1","pages":"132-145"},"PeriodicalIF":3.0000,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Secure and Accurate Personalized Federated Learning With Similarity-Based Model Aggregation\",\"authors\":\"Zhouyong Tan;Junqing Le;Fan Yang;Min Huang;Tao Xiang;Xiaofeng Liao\",\"doi\":\"10.1109/TSUSC.2024.3403427\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Personalized federated learning (PFL) combines client needs and data characteristics to train personalized models for local clients. However, the most of previous PFL schemes encountered challenges such as low model prediction accuracy and privacy leakage when applied to practical datasets. Besides, the existing privacy protection methods fail to achieve satisfactory results in terms of model prediction accuracy and security simultaneously. In this paper, we propose a <u>P</u>rivacy-preserving <u>P</u>ersonalized <u>F</u>ederated <u>L</u>earning under <u>S</u>ecure <u>M</u>ulti-party <u>C</u>omputation (SMC-PPFL), which can preserve privacy while obtaining a local personalized model with high prediction accuracy. In SMC-PPFL, noise perturbation is utilized to protect similarity computation, and secure multi-party computation is employed for model sub-aggregations. This combination ensures that clients’ privacy is preserved, and the computed values remain unbiased without compromising security. Then, we propose a weighted sub-aggregation strategy based on the similarity of clients and introduce a regularization term in the local training to improve prediction accuracy. 
Finally, we evaluate the performance of SMC-PPFL on three common datasets. The experimental results show that SMC-PPFL achieves <inline-formula><tex-math>$2\\\\%\\\\!\\\\sim\\\\! 15\\\\%$</tex-math></inline-formula> higher prediction accuracy compared to the previous PFL schemes. Besides, the security analysis also verifies that SMC-PPFL can resist model inversion attacks and membership inference attacks.\",\"PeriodicalId\":13268,\"journal\":{\"name\":\"IEEE Transactions on Sustainable Computing\",\"volume\":\"10 1\",\"pages\":\"132-145\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-03-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Sustainable Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10535193/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Sustainable Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10535193/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Secure and Accurate Personalized Federated Learning With Similarity-Based Model Aggregation
Personalized federated learning (PFL) combines client needs and data characteristics to train personalized models for local clients. However, most previous PFL schemes encounter challenges such as low model prediction accuracy and privacy leakage when applied to practical datasets. Moreover, existing privacy-protection methods fail to achieve satisfactory model prediction accuracy and security at the same time. In this paper, we propose Privacy-Preserving Personalized Federated Learning under Secure Multi-Party Computation (SMC-PPFL), which preserves privacy while obtaining a local personalized model with high prediction accuracy. In SMC-PPFL, noise perturbation is used to protect similarity computation, and secure multi-party computation is employed for model sub-aggregation. This combination preserves clients' privacy and keeps the computed values unbiased without compromising security. We then propose a weighted sub-aggregation strategy based on client similarity and introduce a regularization term into local training to improve prediction accuracy. Finally, we evaluate the performance of SMC-PPFL on three common datasets. The experimental results show that SMC-PPFL achieves 2% to 15% higher prediction accuracy than previous PFL schemes. In addition, the security analysis verifies that SMC-PPFL can resist model inversion attacks and membership inference attacks.
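To make the aggregation idea in the abstract concrete, the following is a minimal illustrative sketch of similarity-weighted model aggregation with noise-perturbed similarities and a proximal-style regularized local objective. It is not the paper's protocol: the secure multi-party computation step is omitted, and all names and values (cosine similarity, Gaussian noise_scale, softmax temperature, prox_coeff) are assumptions chosen for illustration only.

```python
# Illustrative sketch only: similarity-weighted aggregation for one client,
# with noise-perturbed similarities and a regularized local objective.
# All hyperparameters and function names are hypothetical, not from the paper.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened model-parameter vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def noisy_similarities(local: np.ndarray, others: list[np.ndarray],
                       noise_scale: float = 0.05,
                       rng: np.random.Generator | None = None) -> np.ndarray:
    """Similarity of the local model to every other client's model, perturbed
    with zero-mean Gaussian noise (a stand-in for the paper's noise-perturbation
    step protecting similarity computation)."""
    rng = rng or np.random.default_rng(0)
    sims = np.array([cosine_similarity(local, w) for w in others])
    return sims + rng.normal(0.0, noise_scale, size=sims.shape)


def similarity_weighted_aggregate(local: np.ndarray, others: list[np.ndarray],
                                  temperature: float = 0.5) -> np.ndarray:
    """Weighted sub-aggregation: clients whose models are more similar to the
    local model contribute more. A softmax over (noisy) similarities is one
    plausible weighting rule; the paper's exact rule may differ."""
    sims = noisy_similarities(local, others)
    weights = np.exp(sims / temperature)
    weights /= weights.sum()
    stacked = np.stack(others)          # shape: (num_clients, dim)
    return weights @ stacked            # personalized aggregate for this client


def regularized_local_loss(task_loss: float, local: np.ndarray,
                           aggregate: np.ndarray, prox_coeff: float = 0.1) -> float:
    """Local objective with a proximal-style regularization term keeping the
    personalized model close to its similarity-weighted aggregate."""
    return task_loss + 0.5 * prox_coeff * float(np.sum((local - aggregate) ** 2))


if __name__ == "__main__":
    # Toy usage: three neighboring clients with 4-dimensional "models".
    rng = np.random.default_rng(42)
    local_model = rng.normal(size=4)
    other_models = [rng.normal(size=4) for _ in range(3)]
    agg = similarity_weighted_aggregate(local_model, other_models)
    print("personalized aggregate:", agg)
    print("regularized local loss:", regularized_local_loss(1.0, local_model, agg))
```

In the actual scheme described by the abstract, the sub-aggregation over these weights would be carried out under secure multi-party computation rather than in the clear, so no single party sees the other clients' raw model parameters.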