Hierarchically Federated Learning in Wireless Networks: D2D Consensus and Inter-Cell Aggregation

Jie Zhang, Li Chen, Yunfei Chen, Xiaohui Chen, Guo Wei

IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 442-456
DOI: 10.1109/TMLCN.2024.3385355
Published: 2024-04-04
URL: https://ieeexplore.ieee.org/document/10491307/

Abstract

The decentralized federated learning (DFL) architecture enables clients to collaboratively train a shared machine learning model without a central parameter server. However, it is difficult to apply DFL to a multi-cell scenario due to inadequate model averaging and cross-cell device-to-device (D2D) communications. In this paper, we propose a hierarchically decentralized federated learning (HDFL) framework that combines intra-cell D2D links between devices with backhaul communications between base stations. In HDFL, devices from different cells collaboratively train a global model using periodic intra-cell D2D consensus and inter-cell aggregation. A strong convergence guarantee for the proposed HDFL algorithm is established even for non-convex objectives. Based on the convergence analysis, we characterize the impact of each cell's network topology and of the communication intervals of intra-cell consensus and inter-cell aggregation on the training performance. To further improve the performance of HDFL, we optimize the computation capacity selection and bandwidth allocation to minimize the training latency and energy overhead. Numerical results based on the MNIST and CIFAR-10 datasets validate the superiority of HDFL over traditional DFL methods in the multi-cell scenario.
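The communication pattern the abstract describes can be illustrated with a minimal sketch: each device runs local gradient steps, devices within a cell periodically average over D2D links using a mixing matrix, and cells are periodically synchronized over the backhaul. The interval names (`tau_c`, `tau_a`), the quadratic toy objectives, and the ring D2D topology below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_dev, dim = 2, 4, 5      # cells, devices per cell, model size
tau_c, tau_a = 2, 6                # consensus / aggregation intervals (assumed)
lr = 0.05

# Toy local objectives f_i(x) = 0.5 * ||x - b_i||^2, so grad f_i(x) = x - b_i.
targets = rng.normal(size=(n_cells, n_dev, dim))
models = rng.normal(size=(n_cells, n_dev, dim))

# Doubly stochastic mixing matrix for a ring D2D topology inside each cell.
W = np.zeros((n_dev, n_dev))
for i in range(n_dev):
    W[i, i] = 0.5
    W[i, (i - 1) % n_dev] = 0.25
    W[i, (i + 1) % n_dev] = 0.25

for t in range(1, 61):
    # Local gradient step on every device.
    models -= lr * (models - targets)
    if t % tau_c == 0:
        # Intra-cell D2D consensus: mix neighboring models within each cell.
        for c in range(n_cells):
            models[c] = W @ models[c]
    if t % tau_a == 0:
        # Inter-cell aggregation over the backhaul: synchronize all cells.
        models[:] = models.mean(axis=(0, 1))

print("max disagreement:", np.max(np.abs(models - models.mean(axis=(0, 1)))))
```

Choosing `tau_a > tau_c` mirrors the hierarchy: cheap, frequent D2D mixing inside a cell and rarer, more expensive backhaul aggregation across cells.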