Title: Accelerating Federated Learning for Edge Intelligence Using Conjugated Central Acceleration With Inexact Global Line Search
Authors: Lei Zhao; Lin Cai; Wu-Sheng Lu
DOI: 10.1109/TCCN.2024.3454273
Journal: IEEE Transactions on Cognitive Communications and Networking, vol. 11, no. 2, pp. 1244-1257 (JCR Q1, Telecommunications; Impact Factor 7.0)
Published: 2024-09-04 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10664433/
Citations: 0
Abstract
Driven by the increasing demand for real-time, low-latency learning and the growing emphasis on data privacy, Federated Learning (FL)-enabled edge intelligence has emerged as a promising decentralized learning paradigm at the network edge. It empowers collaborative model training on edge agents, allowing them to make intelligent decisions locally without relying solely on centralized cloud servers. To enhance the training efficiency of edge agents and alleviate communication burdens, we propose a novel technique called Conjugated Central Acceleration with Inexact Line Search enabled Federated Stochastic Variance Reduced Gradient (CLSFSVRG). Conjugated Central Acceleration leverages the conjugate gradient technique to efficiently exploit the training information from multiple edge agents through additional updates at the central server, thereby accelerating the convergence of the global model and reducing the local training burden. Inexact Line Search optimizes the step size for model updates, striking a balance between precision and computational efficiency. Simulation results demonstrate that the proposed scheme outperforms state-of-the-art FL algorithms, achieving higher test accuracy and faster convergence. Remarkably, our approach reduces communication costs by 82.4% while still achieving a test accuracy of 96.5%. By allowing only a small fraction of edge agents to participate, CLSFSVRG exhibits higher robustness without compromising test accuracy. Moreover, the fast convergence achieved with a limited number of participating edge agents yields significant reductions in edge computing cost during training.
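The two ingredients named in the abstract, a conjugate-gradient-style update at the central server and an inexact line search for the step size, can be illustrated with a minimal sketch. The Python toy below is not the paper's CLSFSVRG algorithm: it replaces the federated SVRG gradient estimator with a full gradient on a synthetic least-squares objective, uses the Fletcher-Reeves conjugate direction formula, and adopts Armijo backtracking as one common form of inexact line search. All function names, parameters, and the problem setup are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Toy quadratic objective standing in for the global FL loss (assumption,
# not the paper's model): least squares on synthetic data.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
b = rng.normal(size=50)

def loss(w):
    r = A @ w - b
    return 0.5 * (r @ r) / len(b)

def grad(w):
    # Full gradient; CLSFSVRG would use a variance-reduced stochastic
    # estimate aggregated from the edge agents instead.
    return A.T @ (A @ w - b) / len(b)

def armijo_step(w, d, g, alpha=1.0, shrink=0.5, c=1e-4, max_iter=30):
    """Inexact line search: backtrack until the Armijo sufficient-decrease
    condition holds along direction d (g is the gradient at w)."""
    f0 = loss(w)
    for _ in range(max_iter):
        if loss(w + alpha * d) <= f0 + c * alpha * (g @ d):
            return alpha
        alpha *= shrink
    return alpha

# Server loop: conjugate-gradient-style direction with Fletcher-Reeves beta.
w = np.zeros(10)
g = grad(w)
d = -g
for _ in range(40):
    alpha = armijo_step(w, d, g)
    w = w + alpha * d
    g_new = grad(w)
    beta_fr = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
    d = -g_new + beta_fr * d
    g = g_new
    if g @ d >= 0:
        d = -g  # restart with steepest descent if d is not a descent direction

print(loss(w))
```

The conjugate direction reuses the previous search direction, which is the sense in which the server "accumulates" information across rounds, while the backtracking loop trades a little step-size precision for far fewer objective evaluations than an exact line search would need.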
About the Journal
The IEEE Transactions on Cognitive Communications and Networking (TCCN) aims to publish high-quality manuscripts that push the boundaries of cognitive communications and networking research. Cognitive, in this context, refers to the application of perception, learning, reasoning, memory, and adaptive approaches in communication system design. The Transactions welcomes submissions that explore various aspects of cognitive communications and networks, focusing on innovative and holistic approaches to complex system design. Key topics covered include architecture, protocols, cross-layer design, and cognition cycle design for cognitive networks. Additionally, research on machine learning, artificial intelligence, end-to-end and distributed intelligence, software-defined networking, cognitive radios, spectrum sharing, and security and privacy issues in cognitive networks is of interest. The publication also encourages papers addressing novel services and applications enabled by these cognitive concepts.