Coding-Aware Rate Splitting for Distributed Coded Edge Learning

Tianheng Li, Jingzhe Zhang, Xiaofan He
IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), published 2023-05-20. DOI: 10.1109/INFOCOMWKSHPS57453.2023.10226011

Abstract

Driven by the explosive growth of machine learning applications, considerable effort has been devoted to distributed edge learning. To alleviate the so-called straggler issue, coded computing, which injects elaborate redundancy into the computation, has emerged as a promising solution and has in turn sparked recent research interest in distributed coded edge learning. Although it effectively mitigates straggling, coded edge learning brings new challenges in communications. In particular, existing transmission schemes are mainly designed for conventional distributed edge learning, where the data offloaded to different edge nodes (ENs) are non-overlapping. When applied directly to distributed coded edge learning, they cannot achieve the best performance, due to the redundancy among the data destined for different ENs in the coded setting. To the best of our knowledge, a tailor-made transmission scheme for distributed coded edge learning remains open. With this consideration, a novel coding-aware rate splitting scheme is proposed in this work, which splits the data for different ENs in a coding-aware way to avoid transmission redundancy and enables multiple simultaneous multicasts to the ENs. To minimize the overall processing latency, an iterative optimization algorithm is developed based on the concave-convex procedure (CCCP) framework. Simulations demonstrate that the proposed scheme can substantially reduce the overall latency of distributed coded edge learning compared to the baselines.
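To make the "elaborate redundancy" idea concrete, here is a minimal, self-contained sketch of coded computing for straggler mitigation: a systematic (k+1, k) parity code over partial matrix-vector products, so the master can tolerate one straggling EN. This toy construction is only illustrative; the paper's actual coding scheme and task split are not specified in the abstract.

```python
def matvec(rows, x):
    """Plain matrix-vector product over row lists."""
    return [sum(a * b for a, b in zip(row, x)) for row in rows]

def encode(A, k):
    """Split the rows of A into k sub-tasks and append one parity
    sub-task (the elementwise sum of the k parts), yielding k+1
    coded sub-matrices -- a systematic (k+1, k) code."""
    m = len(A) // k
    parts = [A[i * m:(i + 1) * m] for i in range(k)]
    parity = [[sum(vals) for vals in zip(*rows)] for rows in zip(*parts)]
    return parts + [parity]

def decode(results, k):
    """Recover the full product from the k+1 partial results, at most
    one of which (a straggler) may be None."""
    missing = [i for i in range(k) if results[i] is None]
    if missing:
        i = missing[0]
        others = [r for idx, r in enumerate(results[:k]) if idx != i]
        # linearity: parity_result - sum(other results) = missing result
        recovered = [p - sum(r[j] for r in others)
                     for j, p in enumerate(results[k])]
    out = []
    for idx in range(k):
        out.extend(recovered if missing and idx == missing[0] else results[idx])
    return out

# Demo: one of two systematic ENs straggles; the parity EN covers for it.
A = [[1, 2], [3, 4], [5, 6], [7, 8]]
x = [1, 1]
tasks = encode(A, 2)
results = [matvec(t, x) for t in tasks]
results[1] = None  # simulate a straggler
assert decode(results, 2) == matvec(A, x)
```

Note how the coded sub-tasks overlap in information content: the parity task is a linear combination of the others. This is exactly the redundancy across ENs that, per the abstract, a conventional (non-overlapping) transmission scheme fails to exploit.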
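The CCCP framework the abstract invokes solves a difference-of-convex problem by repeatedly linearizing the concave part and minimizing the resulting convex surrogate, which guarantees a monotonically non-increasing objective. The sketch below is a toy one-dimensional instance (a double-well objective with a closed-form inner minimizer), not the paper's latency-minimization problem, whose structure the abstract does not detail.

```python
import math

def cccp_double_well(x0, iters=60):
    """CCCP on f(x) = x**4 - 2*x**2, decomposed as the convex part
    u(x) = x**4 plus the concave part v(x) = -2*x**2.

    Each iteration linearizes v at x_t and minimizes the convex
    surrogate  x**4 + v'(x_t) * x  in closed form:
        4*x**3 + v'(x_t) = 0,  v'(x_t) = -4*x_t
        =>  x_{t+1} = sign(x_t) * |x_t| ** (1/3)
    """
    f = lambda x: x**4 - 2 * x**2
    x = x0
    history = [f(x)]            # objective values; CCCP keeps these non-increasing
    for _ in range(iters):
        x = math.copysign(abs(x) ** (1.0 / 3.0), x)
        history.append(f(x))
    return x, history

x_star, hist = cccp_double_well(0.5)
# converges to the local minimizer x = 1 (f = -1) nearest the start
assert abs(x_star - 1.0) < 1e-9
# the defining CCCP property: the objective never increases
assert all(b <= a + 1e-12 for a, b in zip(hist, hist[1:]))
```

In the paper's setting the same template presumably applies with the overall processing latency as the objective and the rate-splitting variables as the iterate; each CCCP step then reduces to a convex program.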