DFL: Dynamic Federated Split Learning in Heterogeneous IoT
Eric Samikwa; Antonio Di Maio; Torsten Braun
IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 733-752, published 2024-06-04
DOI: 10.1109/TMLCN.2024.3409205
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10547401
Abstract
Federated Learning (FL) in edge Internet of Things (IoT) environments is challenging due to the heterogeneous nature of the learning environment, embodied in two main aspects. Firstly, statistically heterogeneous data, usually non-independent and identically distributed (non-IID), from geographically distributed clients can degrade FL training accuracy. Secondly, the heterogeneous computing and communication resources of IoT devices often result in unstable training processes that slow down the training of a global model and affect energy consumption. Most existing solutions address only one side of the heterogeneity issue and neglect the joint problem of resource and data heterogeneity in resource-constrained IoT. In this article, we propose Dynamic Federated split Learning (DFL) to address the joint problem of data and resource heterogeneity for distributed training in IoT. DFL enhances training efficiency in heterogeneous, dynamic IoT through resource-aware split computing of deep neural networks and dynamic clustering of training participants based on the similarity of their sub-model layers. We evaluate DFL on a real testbed comprising heterogeneous IoT devices, using two widely adopted datasets in various non-IID settings. Results show that DFL improves training performance in terms of training time by up to 48%, accuracy by up to 32%, and energy consumption by up to 62.8% compared to classic FL and Federated Split Learning in scenarios with both data and resource heterogeneity.
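The abstract does not detail how participants are grouped, but the general idea of clustering clients by the similarity of their sub-model layers can be sketched as follows. This is an illustrative assumption, not the paper's algorithm: each client's client-side layer weights are flattened into a vector, and clients are greedily grouped when the cosine similarity to a cluster representative exceeds a threshold (the function names, the threshold value, and the greedy strategy are all hypothetical choices for illustration).

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two flattened weight vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_clients(layer_weights, threshold=0.9):
    """Greedily group clients whose flattened sub-model layer weights are
    similar. `layer_weights` maps client id -> 1-D weight vector.
    Hypothetical sketch: DFL's actual clustering criterion may differ."""
    clusters = []
    for cid, w in layer_weights.items():
        placed = False
        for cluster in clusters:
            # Compare against the first member as the cluster representative.
            if cosine_similarity(w, layer_weights[cluster[0]]) >= threshold:
                cluster.append(cid)
                placed = True
                break
        if not placed:
            clusters.append([cid])
    return clusters

# Toy example: two devices share nearly identical layer weights (similar
# local data), a third has unrelated weights and lands in its own cluster.
rng = np.random.default_rng(0)
base = rng.normal(size=128)
weights = {
    "dev-a": base + 0.01 * rng.normal(size=128),
    "dev-b": base + 0.01 * rng.normal(size=128),
    "dev-c": rng.normal(size=128),
}
print(cluster_clients(weights))
```

Clients within a cluster could then share one cluster-level aggregation round, which is one plausible way such grouping mitigates non-IID drift.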