FedACT: An adaptive chained training approach for federated learning in computing power networks

IF 7.5 · CAS Region 2, Computer Science · Q1 TELECOMMUNICATIONS · Digital Communications and Networks · Pub Date: 2024-12-01 · DOI: 10.1016/j.dcan.2023.12.007
Min Wei, Qianying Zhao, Bo Lei, Yizhuo Cai, Yushun Zhang, Xing Zhang, Wenbo Wang
{"title":"FedACT: An adaptive chained training approach for federated learning in computing power networks","authors":"Min Wei ,&nbsp;Qianying Zhao ,&nbsp;Bo Lei ,&nbsp;Yizhuo Cai ,&nbsp;Yushun Zhang ,&nbsp;Xing Zhang ,&nbsp;Wenbo Wang","doi":"10.1016/j.dcan.2023.12.007","DOIUrl":null,"url":null,"abstract":"<div><div>Federated Learning (FL) is a novel distributed machine learning methodology that addresses large-scale parallel computing challenges while safeguarding data security. However, the traditional FL model in communication scenarios, whether for uplink or downlink communications, may give rise to several network problems, such as bandwidth occupation, additional network latency, and bandwidth fragmentation. In this paper, we propose an adaptive chained training approach (FedACT) for FL in computing power networks. First, a Computation-driven Clustering Strategy (CCS) is designed. The server clusters clients by task processing delays to minimize waiting delays at the central server. Second, we propose a Genetic-Algorithm-based Sorting (GAS) method to optimize the order of clients participating in training. Finally, based on the table lookup and forwarding rules of the Segment Routing over IPv6 (SRv6) protocol, the sorting results of GAS are written into the SRv6 packet header, to control the order in which clients participate in model training. We conduct extensive experiments on two datasets of CIFAR-10 and MNIST, and the results demonstrate that the proposed algorithm offers improved accuracy, diminished communication costs, and reduced network delays.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 6","pages":"Pages 1576-1589"},"PeriodicalIF":7.5000,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Communications and Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2352864823001839","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Federated Learning (FL) is a novel distributed machine learning methodology that addresses large-scale parallel computing challenges while safeguarding data security. However, the traditional FL model in communication scenarios, whether for uplink or downlink communications, may give rise to several network problems, such as bandwidth occupation, additional network latency, and bandwidth fragmentation. In this paper, we propose an adaptive chained training approach (FedACT) for FL in computing power networks. First, a Computation-driven Clustering Strategy (CCS) is designed: the server clusters clients by their task-processing delays to minimize waiting delays at the central server. Second, we propose a Genetic-Algorithm-based Sorting (GAS) method to optimize the order in which clients participate in training. Finally, based on the table-lookup and forwarding rules of the Segment Routing over IPv6 (SRv6) protocol, the sorting results of GAS are written into the SRv6 packet header to control the order in which clients participate in model training. We conduct extensive experiments on two datasets, CIFAR-10 and MNIST, and the results demonstrate that the proposed algorithm offers improved accuracy, diminished communication costs, and reduced network delays.
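The abstract only sketches how CCS and GAS work. As a rough, non-authoritative illustration of the two ideas (grouping clients by measured task-processing delay, then searching for a low-latency client ordering with a genetic algorithm), the Python sketch below uses a simple sorted split and a toy permutation GA. The function names (cluster_by_delay, chain_time, ga_sort), the fitness function, and the GA operators are assumptions made for exposition, not the paper's implementation; in FedACT the chosen ordering would additionally be encoded into the SRv6 segment list.

```python
# Illustrative sketch only: the paper publishes no code, so the clustering rule,
# the fitness function and the GA operators below are assumptions for exposition,
# not the authors' exact CCS/GAS implementations.
import random


def cluster_by_delay(delays, num_clusters=3):
    """CCS-like step: group client indices into clusters of similar
    task-processing delay by sorting and splitting into equal-size bins."""
    order = sorted(range(len(delays)), key=lambda i: delays[i])
    size = max(1, len(order) // num_clusters)
    return [order[i:i + size] for i in range(0, len(order), size)]


def chain_time(order, delays, link_cost=1.0):
    """Hypothetical fitness: time to pass the model along the chain, i.e. the
    sum of per-client processing delays plus a fixed cost per forwarding hop."""
    return sum(delays[i] for i in order) + link_cost * (len(order) - 1)


def ga_sort(clients, delays, pop_size=20, generations=50, mutation_rate=0.2):
    """GAS-like step: a toy genetic algorithm over client permutations
    (truncation selection, order crossover, swap mutation)."""
    def crossover(a, b):
        if len(a) < 2:
            return a[:]
        cut = random.randrange(1, len(a))
        head = a[:cut]
        return head + [c for c in b if c not in head]

    def mutate(order):
        if len(order) > 1 and random.random() < mutation_rate:
            i, j = random.sample(range(len(order)), 2)
            order[i], order[j] = order[j], order[i]
        return order

    population = [random.sample(clients, len(clients)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda o: chain_time(o, delays))
        survivors = population[: pop_size // 2]
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return min(population, key=lambda o: chain_time(o, delays))


if __name__ == "__main__":
    random.seed(0)
    delays = [random.uniform(0.5, 5.0) for _ in range(12)]  # measured per-client delays
    for cluster in cluster_by_delay(delays):
        # In FedACT the chosen ordering would be carried in the SRv6 packet header;
        # here we simply print it.
        print(ga_sort(cluster, delays))
```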
Source journal
Digital Communications and Networks (Computer Science - Hardware and Architecture)
CiteScore: 12.80 · Self-citation rate: 5.10% · Articles per year: 915 · Review time: 30 weeks
Journal description: Digital Communications and Networks is a prestigious journal that emphasizes communication systems and networks. We publish only top-notch original articles and authoritative reviews, which undergo rigorous peer review. We are proud to announce that all our articles are fully Open Access and can be accessed on ScienceDirect. Our journal is recognized and indexed by eminent databases such as the Science Citation Index Expanded (SCIE) and Scopus. In addition to regular articles, we may also consider exceptional conference papers that have been significantly expanded. Furthermore, we periodically release special issues that focus on specific aspects of the field. In conclusion, Digital Communications and Networks is a leading journal that guarantees exceptional quality and accessibility for researchers and scholars in the field of communication systems and networks.