Heterogeneity-Aware Cooperative Federated Edge Learning with Adaptive Computation and Communication Compression

Zhenxiao Zhang, Zhidong Gao, Yuanxiong Guo, Yanmin Gong
{"title":"具有自适应计算和通信压缩功能的异构感知合作式联盟边缘学习","authors":"Zhenxiao Zhang, Zhidong Gao, Yuanxiong Guo, Yanmin Gong","doi":"arxiv-2409.04022","DOIUrl":null,"url":null,"abstract":"Motivated by the drawbacks of cloud-based federated learning (FL),\ncooperative federated edge learning (CFEL) has been proposed to improve\nefficiency for FL over mobile edge networks, where multiple edge servers\ncollaboratively coordinate the distributed model training across a large number\nof edge devices. However, CFEL faces critical challenges arising from dynamic\nand heterogeneous device properties, which slow down the convergence and\nincrease resource consumption. This paper proposes a heterogeneity-aware CFEL\nscheme called \\textit{Heterogeneity-Aware Cooperative Edge-based Federated\nAveraging} (HCEF) that aims to maximize the model accuracy while minimizing the\ntraining time and energy consumption via adaptive computation and communication\ncompression in CFEL. By theoretically analyzing how local update frequency and\ngradient compression affect the convergence error bound in CFEL, we develop an\nefficient online control algorithm for HCEF to dynamically determine local\nupdate frequencies and compression ratios for heterogeneous devices.\nExperimental results show that compared with prior schemes, the proposed HCEF\nscheme can maintain higher model accuracy while reducing training latency and\nimproving energy efficiency simultaneously.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"19 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Heterogeneity-Aware Cooperative Federated Edge Learning with Adaptive Computation and Communication Compression\",\"authors\":\"Zhenxiao Zhang, Zhidong Gao, Yuanxiong Guo, Yanmin Gong\",\"doi\":\"arxiv-2409.04022\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Motivated by the drawbacks of cloud-based federated learning (FL),\\ncooperative federated edge learning (CFEL) has been proposed to improve\\nefficiency for FL over mobile edge networks, where multiple edge servers\\ncollaboratively coordinate the distributed model training across a large number\\nof edge devices. However, CFEL faces critical challenges arising from dynamic\\nand heterogeneous device properties, which slow down the convergence and\\nincrease resource consumption. This paper proposes a heterogeneity-aware CFEL\\nscheme called \\\\textit{Heterogeneity-Aware Cooperative Edge-based Federated\\nAveraging} (HCEF) that aims to maximize the model accuracy while minimizing the\\ntraining time and energy consumption via adaptive computation and communication\\ncompression in CFEL. 
By theoretically analyzing how local update frequency and\\ngradient compression affect the convergence error bound in CFEL, we develop an\\nefficient online control algorithm for HCEF to dynamically determine local\\nupdate frequencies and compression ratios for heterogeneous devices.\\nExperimental results show that compared with prior schemes, the proposed HCEF\\nscheme can maintain higher model accuracy while reducing training latency and\\nimproving energy efficiency simultaneously.\",\"PeriodicalId\":501422,\"journal\":{\"name\":\"arXiv - CS - Distributed, Parallel, and Cluster Computing\",\"volume\":\"19 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Distributed, Parallel, and Cluster Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.04022\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.04022","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Motivated by the drawbacks of cloud-based federated learning (FL), cooperative federated edge learning (CFEL) has been proposed to improve efficiency for FL over mobile edge networks, where multiple edge servers collaboratively coordinate the distributed model training across a large number of edge devices. However, CFEL faces critical challenges arising from dynamic and heterogeneous device properties, which slow down convergence and increase resource consumption. This paper proposes a heterogeneity-aware CFEL scheme called Heterogeneity-Aware Cooperative Edge-based Federated Averaging (HCEF) that aims to maximize model accuracy while minimizing training time and energy consumption via adaptive computation and communication compression in CFEL. By theoretically analyzing how the local update frequency and gradient compression affect the convergence error bound in CFEL, we develop an efficient online control algorithm for HCEF to dynamically determine local update frequencies and compression ratios for heterogeneous devices. Experimental results show that, compared with prior schemes, the proposed HCEF scheme maintains higher model accuracy while simultaneously reducing training latency and improving energy efficiency.
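To make the two control knobs concrete, below is a minimal, self-contained Python sketch of a CFEL-style round in which each device i runs tau_i local SGD steps (its local update frequency) and uploads a sparsified model delta with compression ratio gamma_i, after which edge servers average within their device clusters and then with each other. The toy quadratic objective, the top-k compressor, and the specific tau/gamma values are illustrative assumptions of this sketch, not the paper's actual algorithm or its online control policy.

```python
# Illustrative sketch (not the authors' code) of the per-device control knobs HCEF adapts:
# a local update count tau_i and a compression ratio gamma_i.
# Top-k sparsification is used here purely as an example compressor.
import numpy as np

rng = np.random.default_rng(0)
DIM = 50                              # toy model dimension
targets = rng.normal(size=(6, DIM))   # one synthetic local optimum per device

def local_grad(w, t):
    # gradient of 0.5 * ||w - t||^2, a stand-in for a device's local loss
    return w - t

def compress_topk(g, ratio):
    # keep only the largest-magnitude coordinates; 'ratio' is the fraction sent
    k = max(1, int(ratio * g.size))
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

def cfel_round(w_servers, clusters, taus, gammas, lr=0.1):
    """One round: local updates -> compressed upload -> intra-cluster -> inter-server averaging."""
    new_servers = []
    for s, devices in enumerate(clusters):
        updates = []
        for i in devices:
            w = w_servers[s].copy()
            for _ in range(taus[i]):                    # heterogeneous local update frequency
                w -= lr * local_grad(w, targets[i])
            updates.append(compress_topk(w - w_servers[s], gammas[i]))  # heterogeneous compression
        new_servers.append(w_servers[s] + np.mean(updates, axis=0))
    w_global = np.mean(new_servers, axis=0)             # cooperative averaging across edge servers
    return [w_global.copy() for _ in clusters]

# Two edge servers with three devices each; faster devices get more local steps,
# bandwidth-limited devices get smaller compression ratios (all values hand-picked here).
clusters = [[0, 1, 2], [3, 4, 5]]
taus = [5, 3, 1, 4, 2, 1]
gammas = [0.5, 0.3, 0.1, 0.4, 0.2, 0.1]
w_servers = [np.zeros(DIM) for _ in clusters]
for _ in range(30):
    w_servers = cfel_round(w_servers, clusters, taus, gammas)
print("distance to global optimum:", np.linalg.norm(w_servers[0] - targets.mean(axis=0)))
```

In HCEF, the per-device tau_i and gamma_i would instead be chosen each round by the online control algorithm based on device compute and communication capabilities; here they are fixed by hand for illustration.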