{"title":"A Hybrid Semi-Asynchronous Federated Learning and Split Learning Strategy in Edge Networks","authors":"Neha Singh;Mainak Adhikari","doi":"10.1109/TNSE.2025.3530999","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) is an emerging technique that involves training Machine/Deep Learning models over distributed Edge Devices (EDs) while facing three challenges: device heterogeneity, resource-constraint devices, and Non-IID (Non-Identically Independently Distributed). In the standard FL, the centralized server has to wait for the model parameters from the slowest participating EDs for global training, which leads to increased waiting time due to device heterogeneity. Asynchronous FL resolves the issue of device heterogeneity, however, it requires frequent model parameter transfer, resulting in a straggler effect. Further, frequent asynchronous updates over Non-IID in participating EDs can affect training accuracy. To overcome the challenges, in this paper, we present a new Federated Semi-Asynchronous Split Learning (Fed-SASL) strategy. Fed-SASL utilizes semi-asynchronous aggregation, where model parameters are aggregated in a centralized cloud server, and received from participating EDs without waiting for all devices. This strategy significantly reduces training time and communication overhead. Additionally, split learning is employed to handle slow EDs by dividing the neural network model based on the computational loads of devices, thereby reducing the burden on stragglers. Extensive simulation results over real-time testbed and one benchmark dataset demonstrate the effectiveness of the proposed strategy over existing ones.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 2","pages":"1429-1439"},"PeriodicalIF":6.7000,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Network Science and Engineering","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10849944/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
引用次数: 0
Abstract
Federated Learning (FL) is an emerging technique for training Machine/Deep Learning models over distributed Edge Devices (EDs) while facing three challenges: device heterogeneity, resource-constrained devices, and Non-IID (non-independently and identically distributed) data. In standard FL, the centralized server must wait for model parameters from the slowest participating EDs before performing global training, which increases waiting time due to device heterogeneity. Asynchronous FL resolves the issue of device heterogeneity; however, it requires frequent model-parameter transfers, resulting in a straggler effect. Further, frequent asynchronous updates over Non-IID data in participating EDs can degrade training accuracy. To overcome these challenges, this paper presents a new Federated Semi-Asynchronous Split Learning (Fed-SASL) strategy. Fed-SASL uses semi-asynchronous aggregation, in which model parameters received from participating EDs are aggregated at a centralized cloud server without waiting for all devices. This strategy significantly reduces training time and communication overhead. Additionally, split learning is employed to handle slow EDs by dividing the neural network model according to the computational loads of the devices, thereby reducing the burden on stragglers. Extensive simulation results over a real-time testbed and a benchmark dataset demonstrate the effectiveness of the proposed strategy over existing ones.
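The abstract outlines two mechanisms (semi-asynchronous aggregation and a load-aware model split) but gives no implementation detail, so the following is a minimal sketch, not the authors' code. It assumes NumPy arrays as model parameters, a buffer size k, a polynomial staleness weight, and a 0.5 mixing coefficient; all of these names and hyper-parameters are illustrative assumptions. `semi_async_aggregate` shows buffered, staleness-weighted aggregation that proceeds without waiting for every ED, and `split_model` shows a split-learning style partition at a device-dependent cut point.

```python
import queue
import numpy as np

def staleness_weight(current_round: int, update_round: int) -> float:
    """Down-weight updates trained on an older global model (assumed polynomial decay)."""
    return 1.0 / (1.0 + current_round - update_round)

def semi_async_aggregate(global_params: np.ndarray,
                         buffered_updates: list[tuple[np.ndarray, int]],
                         current_round: int) -> np.ndarray:
    """Staleness-weighted average of buffered ED parameters, blended with the current global model."""
    weights = np.array([staleness_weight(current_round, r) for _, r in buffered_updates])
    weights /= weights.sum()
    mixed = sum(w * p for w, (p, _) in zip(weights, buffered_updates))
    alpha = 0.5  # mixing coefficient (assumed hyper-parameter)
    return (1.0 - alpha) * global_params + alpha * mixed

def server_round(global_params: np.ndarray, incoming: "queue.Queue",
                 current_round: int, k: int = 3) -> np.ndarray:
    """Aggregate as soon as k updates have arrived, instead of waiting for all N devices."""
    buffer = []
    while len(buffer) < k:
        buffer.append(incoming.get())  # each item: (parameters, round_trained_on)
    return semi_async_aggregate(global_params, buffer, current_round)

def split_model(layers: list, cut_layer: int) -> tuple[list, list]:
    """Split-learning partition: the ED runs layers[:cut_layer], the server runs the rest.
    In the paper the cut point depends on the device's computational load (assumption here)."""
    return layers[:cut_layer], layers[cut_layer:]

# Example: three EDs push their locally trained parameters (tagged with the round
# they trained on) into a shared queue; the server aggregates after k = 3 arrivals.
q = queue.Queue()
for params, trained_round in [(np.ones(4), 9), (np.full(4, 2.0), 8), (np.zeros(4), 7)]:
    q.put((params, trained_round))
new_global = server_round(np.zeros(4), q, current_round=10, k=3)
print(new_global)
```

The sketch only illustrates the control flow the abstract describes: stragglers that have not reported by the time k updates arrive do not block the round, and their eventual updates are simply discounted by the staleness weight.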
About the journal:
The IEEE Transactions on Network Science and Engineering (TNSE) is committed to the timely publication of peer-reviewed technical articles that deal with the theory and applications of network science and the interconnections among the elements in a system that form a network. In particular, TNSE publishes articles on the understanding, prediction, and control of the structures and behaviors of networks at the fundamental level. The types of networks covered include physical or engineered networks, information networks, biological networks, semantic networks, economic networks, social networks, and ecological networks. The journal aims to discover common principles that govern network structures, functionalities, and behaviors. Another trans-disciplinary focus of TNSE is the interactions between and co-evolution of different genres of networks.