HSFL: An Efficient Split Federated Learning Framework via Hierarchical Organization

Tengxi Xia, Yongheng Deng, Sheng Yue, Junyi He, Ju Ren, Yaoxue Zhang
{"title":"HSFL:基于层级组织的高效分离联邦学习框架","authors":"Tengxi Xia, Yongheng Deng, Sheng Yue, Junyi He, Ju Ren, Yaoxue Zhang","doi":"10.23919/CNSM55787.2022.9964646","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) has emerged as a popular paradigm for distributed machine learning among vast clients. Unfortunately, resource-constrained clients often fail to participate in FL because they cannot pay for the memory resources required for model training due to their limited memory or bandwidth. Split federated learning (SFL) is a novel FL framework in which clients commit intermediate results of model training to a cloud server for client-server collaborative training of models, making resource-constrained clients also eligible for FL. However, existing SFL frameworks mostly require frequent communication with the cloud server to exchange intermediate results and model parameters, which results in significant communication overhead and elongated training time. In particular, this can be exacerbated by the imbalanced data distributions of clients. To tackle this issue, we propose HSFL, a hierarchical split federated learning framework that efficiently trains SFL model through hierarchical organization participants. Under the HSFL framework, we formulate a Cloud Aggregation Time Minimization (CATM) problem to minimize the global training time and design a light-weight client assignment algorithm based on dynamic programming to solve it. Moreover, we develop a self-adaption approach to cope with the dynamic computational resources of clients. Finally, we implement and evaluate HSFL on various real-world training tasks, elaborating on its effectiveness and superiority in terms of efficiency and accuracy compared to baselines.","PeriodicalId":232521,"journal":{"name":"2022 18th International Conference on Network and Service Management (CNSM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"HSFL: An Efficient Split Federated Learning Framework via Hierarchical Organization\",\"authors\":\"Tengxi Xia, Yongheng Deng, Sheng Yue, Junyi He, Ju Ren, Yaoxue Zhang\",\"doi\":\"10.23919/CNSM55787.2022.9964646\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning (FL) has emerged as a popular paradigm for distributed machine learning among vast clients. Unfortunately, resource-constrained clients often fail to participate in FL because they cannot pay for the memory resources required for model training due to their limited memory or bandwidth. Split federated learning (SFL) is a novel FL framework in which clients commit intermediate results of model training to a cloud server for client-server collaborative training of models, making resource-constrained clients also eligible for FL. However, existing SFL frameworks mostly require frequent communication with the cloud server to exchange intermediate results and model parameters, which results in significant communication overhead and elongated training time. In particular, this can be exacerbated by the imbalanced data distributions of clients. To tackle this issue, we propose HSFL, a hierarchical split federated learning framework that efficiently trains SFL model through hierarchical organization participants. 
Under the HSFL framework, we formulate a Cloud Aggregation Time Minimization (CATM) problem to minimize the global training time and design a light-weight client assignment algorithm based on dynamic programming to solve it. Moreover, we develop a self-adaption approach to cope with the dynamic computational resources of clients. Finally, we implement and evaluate HSFL on various real-world training tasks, elaborating on its effectiveness and superiority in terms of efficiency and accuracy compared to baselines.\",\"PeriodicalId\":232521,\"journal\":{\"name\":\"2022 18th International Conference on Network and Service Management (CNSM)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 18th International Conference on Network and Service Management (CNSM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/CNSM55787.2022.9964646\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 18th International Conference on Network and Service Management (CNSM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/CNSM55787.2022.9964646","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Federated learning (FL) has emerged as a popular paradigm for distributed machine learning across a vast number of clients. Unfortunately, resource-constrained clients often cannot participate in FL because their limited memory or bandwidth cannot cover the resources required for model training. Split federated learning (SFL) is a novel FL framework in which clients offload intermediate results of model training to a cloud server for client-server collaborative training, making resource-constrained clients eligible for FL. However, existing SFL frameworks mostly require frequent communication with the cloud server to exchange intermediate results and model parameters, which results in significant communication overhead and prolonged training time; this is further exacerbated by imbalanced data distributions across clients. To tackle this issue, we propose HSFL, a hierarchical split federated learning framework that efficiently trains the SFL model through a hierarchical organization of participants. Under the HSFL framework, we formulate a Cloud Aggregation Time Minimization (CATM) problem to minimize the global training time and design a lightweight client assignment algorithm based on dynamic programming to solve it. Moreover, we develop a self-adaptation approach to cope with the dynamic computational resources of clients. Finally, we implement and evaluate HSFL on various real-world training tasks, demonstrating its effectiveness and superiority in efficiency and accuracy compared to baselines.
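To make the split-training idea referenced in the abstract concrete, the sketch below shows one client-server training step in the style SFL frameworks rely on: the client runs the front portion of the model, sends the intermediate activations to the server, and the server finishes the forward and backward passes and returns the activation gradients. This is a minimal illustrative sketch only; the MLP architecture, cut layer, optimizers, and variable names are assumptions for demonstration, not the paper's implementation, and the "send/receive" steps are emulated in-process.

```python
# Minimal sketch of one split-training step (client/server side-by-side).
# Assumptions: a toy MLP on 28x28 inputs with an arbitrary cut layer; in a
# real SFL deployment the activations and gradients would travel over the
# network instead of staying in one process.
import torch
import torch.nn as nn

# Client holds the layers before the cut; the server holds the rest.
client_model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
server_model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))
client_opt = torch.optim.SGD(client_model.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def split_training_step(x, y):
    """Run one forward/backward pass split across client and server."""
    client_opt.zero_grad()
    server_opt.zero_grad()

    # Client: forward up to the cut layer, then "send" the activations.
    smashed = client_model(x)
    smashed_received = smashed.detach().requires_grad_()  # what the server sees

    # Server: finish the forward pass, compute the loss, backpropagate.
    logits = server_model(smashed_received)
    loss = loss_fn(logits, y)
    loss.backward()
    server_opt.step()

    # Server "returns" the gradient w.r.t. the activations; the client then
    # completes backpropagation through its own layers.
    smashed.backward(smashed_received.grad)
    client_opt.step()
    return loss.item()

# Example usage with random data standing in for one client's local batch.
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(split_training_step(x, y))
```

The detach-and-requires_grad_ pattern stands in for serializing the activations across the network: the server treats the received tensor as a fresh leaf, and the gradient it produces for that leaf is exactly what the client needs to continue backpropagation locally.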