Privacy-preserving and communication-efficient stochastic alternating direction method of multipliers for federated learning

IF 8.1 · CAS Tier 1, Computer Science · JCR category: COMPUTER SCIENCE, INFORMATION SYSTEMS · Information Sciences · Pub Date: 2024-11-14 · DOI: 10.1016/j.ins.2024.121641
Yi Zhang, Yunfan Lu, Fengxia Liu, Cheng Li, Zixian Gong, Zhe Hu, Qun Xu
{"title":"用于联合学习的隐私保护和通信效率高的随机交替方向乘法","authors":"Yi Zhang ,&nbsp;Yunfan Lu ,&nbsp;Fengxia Liu ,&nbsp;Cheng Li ,&nbsp;Zixian Gong ,&nbsp;Zhe Hu ,&nbsp;Qun Xu","doi":"10.1016/j.ins.2024.121641","DOIUrl":null,"url":null,"abstract":"<div><div>Federated learning constitutes a paradigm in distributed machine learning, wherein model training unfolds through the exchange of intermediary results between a central server and federated clients. Given its decentralized nature, conventional machine learning algorithms find limited applicability in the context of federated learning models. Hence, the alternating direction method of multipliers (ADMM), tailored for distributed optimization, is leveraged for this purpose. However, despite the considerable promise of the ADMM algorithm in federated learning, it faces challenges related to computational efficiency, communication efficiency, and data security. In response to these challenges, this study proposes the privacy-preserving and communication-efficient stochastic ADMM (PPCESADMM) algorithm that enhances the computational efficiency through the stochastic optimization method, reduces communication costs through sparse communication method, and ensures the security of federated clients' data via the homomorphic encryption method. Theoretical analyses confirm the convergence of the PPCESADMM algorithm under mild conditions and establish its convergence rate as <span><math><mi>O</mi><mo>(</mo><mn>1</mn><mo>/</mo><msqrt><mrow><mi>T</mi></mrow></msqrt><mo>)</mo></math></span>. Experiments illustrate the superior performance of our algorithm in communication cost compared to ADMM and CEADMM algorithms, achieving reductions of 65.10% and 44.32%, respectively. Furthermore, our method surpasses classical federated learning algorithms such as FedAvg, FedAvgM, and SCAFFOLD in terms of algorithmic convergence, achieving superior convergence precision within predefined training epochs. Finally, our algorithm converges to the same results as those obtained without using homomorphic encryption, albeit at the cost of increased computation time.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"691 ","pages":"Article 121641"},"PeriodicalIF":8.1000,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Privacy-preserving and communication-efficient stochastic alternating direction method of multipliers for federated learning\",\"authors\":\"Yi Zhang ,&nbsp;Yunfan Lu ,&nbsp;Fengxia Liu ,&nbsp;Cheng Li ,&nbsp;Zixian Gong ,&nbsp;Zhe Hu ,&nbsp;Qun Xu\",\"doi\":\"10.1016/j.ins.2024.121641\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Federated learning constitutes a paradigm in distributed machine learning, wherein model training unfolds through the exchange of intermediary results between a central server and federated clients. Given its decentralized nature, conventional machine learning algorithms find limited applicability in the context of federated learning models. Hence, the alternating direction method of multipliers (ADMM), tailored for distributed optimization, is leveraged for this purpose. However, despite the considerable promise of the ADMM algorithm in federated learning, it faces challenges related to computational efficiency, communication efficiency, and data security. 
In response to these challenges, this study proposes the privacy-preserving and communication-efficient stochastic ADMM (PPCESADMM) algorithm that enhances the computational efficiency through the stochastic optimization method, reduces communication costs through sparse communication method, and ensures the security of federated clients' data via the homomorphic encryption method. Theoretical analyses confirm the convergence of the PPCESADMM algorithm under mild conditions and establish its convergence rate as <span><math><mi>O</mi><mo>(</mo><mn>1</mn><mo>/</mo><msqrt><mrow><mi>T</mi></mrow></msqrt><mo>)</mo></math></span>. Experiments illustrate the superior performance of our algorithm in communication cost compared to ADMM and CEADMM algorithms, achieving reductions of 65.10% and 44.32%, respectively. Furthermore, our method surpasses classical federated learning algorithms such as FedAvg, FedAvgM, and SCAFFOLD in terms of algorithmic convergence, achieving superior convergence precision within predefined training epochs. Finally, our algorithm converges to the same results as those obtained without using homomorphic encryption, albeit at the cost of increased computation time.</div></div>\",\"PeriodicalId\":51063,\"journal\":{\"name\":\"Information Sciences\",\"volume\":\"691 \",\"pages\":\"Article 121641\"},\"PeriodicalIF\":8.1000,\"publicationDate\":\"2024-11-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Sciences\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S002002552401555X\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Sciences","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S002002552401555X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning constitutes a paradigm in distributed machine learning, wherein model training unfolds through the exchange of intermediate results between a central server and federated clients. Given its decentralized nature, conventional machine learning algorithms find limited applicability in federated learning models. Hence, the alternating direction method of multipliers (ADMM), tailored for distributed optimization, is leveraged for this purpose. However, despite the considerable promise of the ADMM algorithm in federated learning, it faces challenges related to computational efficiency, communication efficiency, and data security. In response to these challenges, this study proposes the privacy-preserving and communication-efficient stochastic ADMM (PPCESADMM) algorithm, which enhances computational efficiency through a stochastic optimization method, reduces communication costs through a sparse communication method, and ensures the security of federated clients' data via a homomorphic encryption method. Theoretical analyses confirm the convergence of the PPCESADMM algorithm under mild conditions and establish its convergence rate as O(1/√T). Experiments illustrate the superior performance of our algorithm in communication cost compared to the ADMM and CEADMM algorithms, achieving reductions of 65.10% and 44.32%, respectively. Furthermore, our method surpasses classical federated learning algorithms such as FedAvg, FedAvgM, and SCAFFOLD in terms of algorithmic convergence, achieving superior convergence precision within predefined training epochs. Finally, our algorithm converges to the same results as those obtained without homomorphic encryption, albeit at the cost of increased computation time.
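
For context on the ADMM foundation the abstract refers to, the following is a minimal sketch of the standard global-consensus ADMM updates on which federated (and stochastic) ADMM variants are built. The notation (local objectives f_i, local copies x_i, global variable z, duals λ_i, penalty ρ, N clients) is ours and is not taken from the paper.

% Consensus reformulation of min_x \sum_i f_i(x):
%   min_{x_1,...,x_N, z} \sum_i f_i(x_i)   s.t.   x_i = z  for all i.
\begin{align*}
x_i^{k+1} &= \arg\min_{x_i}\; f_i(x_i) + \langle \lambda_i^{k},\, x_i - z^{k} \rangle + \frac{\rho}{2}\,\lVert x_i - z^{k} \rVert_2^2 && \text{(local step, client } i\text{)}\\
z^{k+1} &= \frac{1}{N}\sum_{i=1}^{N}\Bigl(x_i^{k+1} + \frac{1}{\rho}\,\lambda_i^{k}\Bigr) && \text{(server aggregation)}\\
\lambda_i^{k+1} &= \lambda_i^{k} + \rho\,\bigl(x_i^{k+1} - z^{k+1}\bigr) && \text{(dual update, client } i\text{)}
\end{align*}

A stochastic variant replaces the exact local minimization with one or a few minibatch gradient steps on the same subproblem, which is what keeps per-round client computation cheap.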
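To make two of the three ingredients concrete, below is a self-contained toy sketch in Python (NumPy only) of a federated training loop that combines a stochastic-gradient client step on the ADMM subproblem with top-k sparsified uploads. It is an illustration under our own assumptions, not the paper's PPCESADMM: the homomorphic-encryption layer is omitted, and every name here (topk_sparsify, local_admm_step, rho, lr, k) is hypothetical.

# Illustrative sketch only: a toy federated round combining a stochastic
# ADMM client step with top-k sparsified uploads. This is NOT the paper's
# PPCESADMM algorithm; encryption of the uploads is omitted.
import numpy as np

def topk_sparsify(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def local_admm_step(x, lam, z, X, y, rho, lr, batch):
    """One minibatch gradient step on the client subproblem
    f_i(x) + <lam, x - z> + (rho/2)||x - z||^2, with least-squares f_i."""
    j = np.random.choice(len(y), size=batch, replace=False)
    grad_f = X[j].T @ (X[j] @ x - y[j]) / batch   # minibatch grad of f_i
    grad = grad_f + lam + rho * (x - z)           # full subproblem gradient
    return x - lr * grad

rng = np.random.default_rng(0)
d, n, N, rho, lr = 20, 200, 5, 1.0, 0.05
k = 10  # aggressive sparsification (small k) trades accuracy per round
w_true = rng.normal(size=d)
clients = [rng.normal(size=(n, d)) for _ in range(N)]
clients = [(X, X @ w_true + 0.1 * rng.normal(size=n)) for X in clients]

z = np.zeros(d)
xs = [np.zeros(d) for _ in range(N)]
lams = [np.zeros(d) for _ in range(N)]

for t in range(300):
    uploads = []
    for i, (X, y) in enumerate(clients):
        xs[i] = local_admm_step(xs[i], lams[i], z, X, y, rho, lr, batch=32)
        # Clients upload a sparsified message instead of the dense vector.
        uploads.append(topk_sparsify(xs[i] + lams[i] / rho, k))
    z = np.mean(uploads, axis=0)          # server aggregation (z-update)
    for i in range(N):
        lams[i] += rho * (xs[i] - z)      # dual update stays on the client

print("||z - w_true|| =", np.linalg.norm(z - w_true))

In the paper's setting the uploads would additionally be encrypted under an additively homomorphic scheme so that the server can aggregate without seeing individual updates; per the abstract, this leaves the converged result unchanged while increasing computation time.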
Source journal: Information Sciences (Engineering & Technology - Computer Science: Information Systems)
CiteScore: 14.00
Self-citation rate: 17.30%
Articles per year: 1322
Review time: 10.4 months
Journal description: Informatics and Computer Science Intelligent Systems Applications is an esteemed international journal that focuses on publishing original and creative research findings in the field of information sciences. We also feature a limited number of timely tutorial and surveying contributions. Our journal aims to cater to a diverse audience, including researchers, developers, managers, strategic planners, graduate students, and anyone interested in staying up-to-date with cutting-edge research in information science, knowledge engineering, and intelligent systems. While readers are expected to share a common interest in information science, they come from varying backgrounds such as engineering, mathematics, statistics, physics, computer science, cell biology, molecular biology, management science, cognitive science, neurobiology, behavioral sciences, and biochemistry.
Latest articles in this journal:
- Editorial Board
- Community structure testing by counting frequent common neighbor sets
- Finite-time secure synchronization for stochastic complex networks with delayed coupling under deception attacks: A two-step switching control scheme
- Adaptive granular data compression and interval granulation for efficient classification
- Introducing fairness in network visualization