Privacy-preserving and communication-efficient stochastic alternating direction method of multipliers for federated learning

Yi Zhang, Yunfan Lu, Fengxia Liu, Cheng Li, Zixian Gong, Zhe Hu, Qun Xu

Information Sciences, Volume 691, Article 121641. DOI: 10.1016/j.ins.2024.121641. Published 2024-11-14.
Federated learning constitutes a paradigm in distributed machine learning, wherein model training unfolds through the exchange of intermediate results between a central server and federated clients. Given its decentralized nature, conventional machine learning algorithms have limited applicability to federated learning models. Hence, the alternating direction method of multipliers (ADMM), tailored for distributed optimization, is leveraged for this purpose. However, despite its considerable promise in federated learning, the ADMM algorithm faces challenges related to computational efficiency, communication efficiency, and data security. In response to these challenges, this study proposes the privacy-preserving and communication-efficient stochastic ADMM (PPCESADMM) algorithm, which enhances computational efficiency through stochastic optimization, reduces communication costs through sparse communication, and secures federated clients' data via homomorphic encryption. Theoretical analyses confirm the convergence of the PPCESADMM algorithm under mild conditions and establish its convergence rate as O(1/√T). Experiments illustrate the superior performance of our algorithm in communication cost compared to the ADMM and CEADMM algorithms, achieving reductions of 65.10% and 44.32%, respectively. Furthermore, our method surpasses classical federated learning algorithms such as FedAvg, FedAvgM, and SCAFFOLD in terms of convergence, achieving superior convergence precision within predefined training epochs. Finally, our algorithm converges to the same results as those obtained without homomorphic encryption, albeit at the cost of increased computation time.
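The abstract names the algorithm's three ingredients (stochastic optimization, sparse communication, homomorphic encryption) but not its update rules. As a rough, non-authoritative illustration only, the Python sketch below shows what one stochastic consensus-ADMM client step with top-k sparsified uploads might look like; every name here (client_step, topk_sparsify, rho, lr, k) is hypothetical rather than taken from the paper, and the encryption layer is deliberately omitted.

import numpy as np

def topk_sparsify(v, k):
    # Zero all but the k largest-magnitude entries, so only k values
    # (plus their indices) need to be transmitted to the server.
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def client_step(x, y, z, stoch_grad, rho=1.0, lr=0.1, k=10):
    # One stochastic consensus-ADMM step for a single federated client.
    # x: local primal variable, y: dual variable, z: current global model,
    # stoch_grad: minibatch gradient of the local loss evaluated at x.
    x = x - lr * (stoch_grad + y + rho * (x - z))  # primal: one SGD step on the augmented Lagrangian
    y = y + rho * (x - z)                          # dual ascent on the consensus constraint x = z
    return x, y, topk_sparsify(x + y / rho, k)     # sparsified message sent to the server

def server_aggregate(msgs):
    # Average the clients' sparse messages into the new global model z.
    return np.mean(msgs, axis=0)

In the paper's setting, these sparse messages would additionally be encrypted under an additively homomorphic scheme before upload, so the server can aggregate ciphertexts without seeing any client's plaintext update; that encryption step is what the authors report as trading extra computation time for data security.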
Journal introduction:
Information Sciences (Informatics and Computer Science, Intelligent Systems, Applications) is an esteemed international journal that focuses on publishing original and creative research findings in the field of information sciences. We also feature a limited number of timely tutorial and survey contributions.
Our journal aims to cater to a diverse audience, including researchers, developers, managers, strategic planners, graduate students, and anyone interested in staying up-to-date with cutting-edge research in information science, knowledge engineering, and intelligent systems. While readers are expected to share a common interest in information science, they come from varying backgrounds such as engineering, mathematics, statistics, physics, computer science, cell biology, molecular biology, management science, cognitive science, neurobiology, behavioral sciences, and biochemistry.