Pub Date: 2024-10-18 | DOI: 10.1109/TMC.2024.3476340
Jialiang Zhu;Hao Zheng;Wenchao Xu;Haozhao Wang;Zhiming He;Yuxuan Liu;Shuang Wang;Qi Sun
Federated Learning (FL) collaboratively trains a global model among distributed clients by iteratively aggregating their local updates without sharing their raw data, whereby the global model can approximately converge to that of centralized training over a global dataset composed of all local datasets (i.e., the union of all users’ local data). However, in real-world scenarios, the distributions of the data classes are often imbalanced not only locally but also in the global dataset, which severely deteriorates FL performance due to conflicting knowledge aggregation. Existing solutions for FL class imbalance either focus on the local data to regulate the training process or purely target the global dataset, and thus often fail to alleviate the class imbalance problem when there is a mismatch between the local and global imbalance. Considering these limitations, this paper proposes a Global-Local Joint Learning method, namely GLJL, which simultaneously harmonizes the global and local class imbalance issues by jointly embedding local and global factors into each client’s loss function. Through extensive experiments on popular datasets with various class imbalance settings, we show that the proposed method significantly improves model accuracy on minority classes without sacrificing the accuracy of other classes.
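The abstract does not give GLJL's exact weighting formula, but the general idea of embedding both local and global class-frequency factors into a client's loss can be sketched as follows. This is a hypothetical illustration, not the paper's method: it combines an effective-number-style reweighting (Cui et al.) computed from each client's local class counts with the same reweighting computed from the global class counts, and uses the product as per-class weights in a cross-entropy loss. The function names and the choice of `beta` are assumptions.

```python
import numpy as np

def class_weights(local_counts, global_counts, beta=0.999):
    """Combine local and global class frequencies into one weight per class.

    Each factor uses the "effective number of samples" reweighting
    (1 - beta) / (1 - beta**n); minority classes get larger weights.
    """
    local_counts = np.maximum(np.asarray(local_counts, dtype=float), 1.0)
    global_counts = np.maximum(np.asarray(global_counts, dtype=float), 1.0)
    local_w = (1.0 - beta) / (1.0 - beta ** local_counts)
    global_w = (1.0 - beta) / (1.0 - beta ** global_counts)
    w = local_w * global_w
    return w * len(w) / w.sum()  # normalize so the weights average to 1

def weighted_cross_entropy(logits, targets, weights):
    """Cross-entropy where each sample is scaled by its class weight."""
    logits = np.asarray(logits, dtype=float)
    targets = np.asarray(targets)
    z = logits - logits.max(axis=1, keepdims=True)        # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_sample = -log_probs[np.arange(len(targets)), targets]
    return float(np.mean(weights[targets] * per_sample))
```

A class that is rare both locally and globally receives a large weight from both factors, while a class that is only locally rare is moderated by its global factor, which is one plausible way to reconcile a local/global imbalance mismatch.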
Title: “Harmonizing Global and Local Class Imbalance for Federated Learning” — IEEE Transactions on Mobile Computing, vol. 24, no. 2, pp. 1120–1131.
Pub Date: 2024-10-17 | DOI: 10.1109/TMC.2024.3476338
Jiongyu Dai;Lianjun Li;Ramin Safavinejad;Shadab Mahboob;Hao Chen;Vishnu V Ratnam;Haining Wang;Jianzhong Zhang;Lingjia Liu
Network slicing plays a critical role in enabling multiple virtualized and independent network services to be created on top of a common physical network infrastructure. In this paper, we introduce a deep reinforcement learning (DRL)-based radio resource management (RRM) solution for radio access network (RAN) slicing under service-level agreement (SLA) guarantees; the objective of this solution is to minimize SLA violations. Our method is designed with a two-level scheduling structure that works seamlessly under the Open Radio Access Network (O-RAN) architecture. Specifically, at the upper level, a DRL-based inter-slice scheduler operates at a coarse time granularity to allocate resources to network slices, while at the lower level, an existing intra-slice scheduler such as proportional fair (PF) operates at a fine time granularity to allocate slice-dedicated resources to slice users. This setting makes our solution O-RAN compliant and ready to be deployed as an ‘xApp’ on the RAN Intelligent Controller (RIC). For performance evaluation and proof-of-concept purposes, we develop two platforms, an industry-level simulator and an O-RAN-compliant testbed; evaluation on both platforms demonstrates our solution’s superior performance over conventional methods.
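The intra-slice proportional fair (PF) scheduler mentioned above is a standard algorithm; a minimal textbook sketch (not the paper's implementation, and with an assumed EWMA smoothing factor `alpha`) allocates each resource block within a slice to the user with the highest ratio of instantaneous rate to long-term average rate, then updates the averages:

```python
def pf_schedule(inst_rates, avg_rates, num_rbs, alpha=0.1):
    """Proportional-fair allocation of `num_rbs` resource blocks.

    inst_rates: per-user instantaneous achievable rate this TTI.
    avg_rates:  per-user long-term average served rate.
    Returns (blocks allocated per user, updated average rates).
    """
    avg = list(avg_rates)
    alloc = [0] * len(avg)
    for _ in range(num_rbs):
        # PF metric: instantaneous rate over smoothed average rate
        metric = [r / max(a, 1e-9) for r, a in zip(inst_rates, avg)]
        u = max(range(len(metric)), key=metric.__getitem__)
        alloc[u] += 1
        # EWMA update: only the scheduled user is served this step
        for i in range(len(avg)):
            served = inst_rates[i] if i == u else 0.0
            avg[i] = (1.0 - alpha) * avg[i] + alpha * served
    return alloc, avg
```

In the two-level structure described above, the DRL inter-slice scheduler would first decide `num_rbs` for each slice at coarse granularity, and a loop like this would then distribute those blocks among the slice's users every TTI.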
Title: “O-RAN-Enabled Intelligent Network Slicing to Meet Service-Level Agreement (SLA)” — IEEE Transactions on Mobile Computing, vol. 24, no. 2, pp. 890–906.
Pub Date: 2024-10-15 | DOI: 10.1109/TMC.2024.3478048
Jakub Žádník;Michel Kieffer;Anthony Trioux;Markku Mäkitalo;Pekka Jääskeläinen
Remote inference allows lightweight edge devices, such as autonomous drones, to perform vision tasks exceeding their computational, energy, or processing-delay budget. In such applications, reliable transmission of information is challenging due to high variations in channel quality. Traditional approaches involving spatio-temporal transforms, quantization, and entropy coding followed by digital transmission may be affected by a sudden decrease in quality (the digital cliff