Pub Date: 2021-12-01 | DOI: 10.1109/GLOBECOM46510.2021.9685514
Yunfeng Zhao, Zhicheng Liu, Chao Qiu, Xiaofei Wang, F. Yu, Victor C. M. Leung
As a compelling collaborative machine learning framework in the big data era, federated learning allows multiple participants to jointly train a model without revealing their private data. To further leverage the ubiquitous resources in end-edge-cloud systems, hierarchical federated learning (HFL) exploits the layered architecture to relieve excessive communication overhead and the risk of data leakage. Because end devices are often self-interested and reluctant to join model training, encouraging them to participate is an emerging and challenging issue that deeply impacts training performance and has not yet been well studied. This paper proposes an incentive mechanism for HFL in end-edge-cloud systems that motivates end devices to contribute data for model training. The hierarchical training process in end-edge-cloud systems is modeled as a multi-layer Stackelberg game whose sub-games are interconnected through their utility functions. We derive the Nash equilibrium strategies and closed-form solutions to guide players. By fully grasping the interest relationships among players, the proposed mechanism trades low costs for high model performance. Simulations demonstrate the effectiveness of the proposed mechanism and reveal stakeholders' dependencies on the allocation of data resources.
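The backward-induction reasoning behind a Stackelberg incentive game can be illustrated with a toy single-leader, multi-follower example. This is a sketch only: the paper's multi-layer utilities are more involved, and the quadratic cost model and every parameter below are assumptions.

```python
import numpy as np

def follower_best_response(p, c):
    # Each device maximizes u_i = p*x_i - c_i*x_i**2, giving x_i* = p/(2*c_i)
    return p / (2.0 * c)

def leader_utility(p, c, a=1.0):
    # Leader values total contributed data at rate `a` and pays unit reward p
    x = follower_best_response(p, c)
    total = x.sum()
    return a * total - p * total

c = np.array([0.5, 1.0, 2.0])          # heterogeneous marginal data costs (assumed)
prices = np.linspace(0.01, 1.0, 1000)  # leader searches over reward prices
p_star = prices[np.argmax([leader_utility(p, c) for p in prices])]
# With quadratic follower costs the equilibrium price works out to a/2:
print(p_star)  # ~0.5
```

The leader anticipates the followers' best responses before committing to a price, which is exactly the structure that makes the closed-form equilibrium derivable.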
Title: An Incentive Mechanism for Big Data Trading in End-Edge-Cloud Hierarchical Federated Learning
Published in: 2021 IEEE Global Communications Conference (GLOBECOM)
Pub Date: 2021-12-01 | DOI: 10.1109/GLOBECOM46510.2021.9685768
Chuan-Zheng Lee, L. P. Barnes, Wenhao Zhan, Ayfer Özgür
We propose schemes for minimax statistical estimation of sparse parameter or observation vectors over a Gaussian multiple-access channel (MAC) under squared error loss, using techniques from statistics, compressed sensing and wireless communication. These “analog” schemes exploit the superposition inherent in the Gaussian MAC, using compressed sensing to reduce the number of channel uses needed. For the sparse Gaussian location and sparse product Bernoulli models, we derive expressions for risk in terms of the numbers of nodes, parameters, channel uses and nonzero entries (sparsity). We show that they offer exponential improvements over existing lower bounds for risk in “digital” schemes that assume nodes to transmit bits errorlessly at the Shannon capacity. This shows that analog schemes that design estimation and communication jointly can efficiently exploit the inherent sparsity in high-dimensional models and observations, and provide drastic improvements over digital schemes that separate source and channel coding in this context.
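The superposition property that these "analog" schemes exploit can be sketched with a toy over-the-air averaging example: all nodes transmit at once, and the Gaussian MAC adds their signals in one channel use per dimension. The compressed-sensing encoding and minimax analysis from the paper are omitted; dimensions and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d = 20, 8
theta = np.zeros(d); theta[2] = 1.0                     # sparse underlying parameter
obs = theta + 0.1 * rng.standard_normal((n_nodes, d))   # per-node noisy observations

# Analog scheme: simultaneous transmission; the Gaussian MAC superposes
# the signals, so aggregation costs one channel use per dimension.
channel_noise = 0.01 * rng.standard_normal(d)
y = obs.sum(axis=0) + channel_noise                     # received superposition
theta_hat = y / n_nodes                                 # estimate of the mean

print(np.abs(theta_hat - theta).max())                  # small estimation error
```

A digital scheme would instead spend channel uses having each node transmit its quantized observation separately, which is the separation the paper shows to be exponentially suboptimal for sparse models.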
Title: Over-the-Air Statistical Estimation of Sparse Models
Published in: 2021 IEEE Global Communications Conference (GLOBECOM)
Pub Date: 2021-12-01 | DOI: 10.1109/GLOBECOM46510.2021.9685769
Zhifeng Tang, Zhuo Sun, Nan Yang, Xiangyun Zhou
In this paper, we analyze the age of information (AoI) performance of a multi-user mobile edge computing (MEC) system where a base station (BS) generates and transmits computation-intensive packets to user equipments (UEs). In this MEC system, we consider two computing schemes, namely, the local computing scheme and the edge computing scheme. In the local computing scheme, each packet is transmitted to the UE and then computed by the local server at the UE. In the edge computing scheme, each packet is computed by the edge server at the BS and then transmitted to the UE. Considering exponentially distributed transmission and computation times and adopting the first-come-first-served queuing policy, we derive closed-form expressions for the average AoI of the two computing schemes. Simulation results corroborate our analysis and examine the impact of system parameters on the average AoI.
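Average AoI for a scheme like this can be checked numerically by integrating the sawtooth age curve: age grows linearly and drops to the freshest packet's age at each in-order delivery. A minimal sketch (the paper derives closed-form averages instead; the deterministic sanity check below is an assumption, not a result from the paper):

```python
import numpy as np

def average_aoi(gen_times, delivery_times, horizon):
    """Time-average age of information for in-order (FCFS) deliveries.

    Age rises linearly; when a packet generated at g is delivered at d,
    the age drops to d - g. Integrate the sawtooth up to `horizon`.
    """
    area, age_start, t_prev = 0.0, 0.0, 0.0
    for g, d in zip(gen_times, delivery_times):
        dt = d - t_prev
        area += age_start * dt + 0.5 * dt * dt   # triangle on each segment
        age_start, t_prev = d - g, d             # age resets to packet's age
    dt = horizon - t_prev
    area += age_start * dt + 0.5 * dt * dt       # tail after the last delivery
    return area / horizon

# Sanity check: a packet every 1s, delivered instantly -> average age 0.5
gen = np.arange(0.0, 100.0, 1.0)
print(average_aoi(gen, gen, 100.0))  # 0.5
```

Feeding in exponentially distributed transmission/computation times would reproduce the two schemes' AoI behavior by simulation.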
Title: Age of Information Analysis of Multi-user Mobile Edge Computing Systems
Published in: 2021 IEEE Global Communications Conference (GLOBECOM)
Pub Date: 2021-12-01 | DOI: 10.1109/GLOBECOM46510.2021.9685444
Gunasekaran Manogaran, Bharat S. Rawal
Smart or electronic healthcare is undergoing rapid change from the traditional specialist- and hospital-centered style to a disseminated, patient-centered style using the Internet of Things (IoT). Presently, 4G and other advanced communication standards are utilized in healthcare for intelligent healthcare services and applications. Traffic handling is an essential feature for flexible interoperability of the IoT with other heterogeneous communication networks. Efficient traffic handling controls latency and communication failures due to random access and collision in cellular-network-overlay IoT. It is challenging for existing communication technology to meet the requirements of the time-sensitive and highly dynamic healthcare applications of the future. In this manuscript, adaptive eNB selection with traffic scheduling (AeS-TS) is proposed to improve the efficiency of IoT Long Term Evolution (LTE) networks. AeS-TS works in two phases: adaptive eNB selection and gateway traffic scheduling. In eNB selection, traffic-aware radio infrastructure selection with an offloading feature is presented. eNB selection proceeds using a preference function to improve the acceptance rate of incoming IoT traffic and minimize transmission loss. In the traffic scheduling phase, sequential and level-based slot transmission is adopted to improve traffic forwarding quality. The slots are selected by analyzing the error-in-time function using a recurrent learning process.
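The abstract does not spell out the preference function, so the following is only a plausible sketch of preference-based eNB selection: score each candidate eNB on (assumed) normalized channel quality minus normalized load, then pick the best. The weights and metrics are all assumptions, not the paper's actual function.

```python
import numpy as np

def select_enb(load, channel_gain, w_load=0.5, w_chan=0.5):
    """Illustrative preference function (weights are assumptions):
    prefer lightly loaded eNBs with good channel conditions."""
    load = np.asarray(load, dtype=float)
    gain = np.asarray(channel_gain, dtype=float)
    score = w_chan * gain / gain.max() - w_load * load / load.max()
    return int(np.argmax(score))

# eNB 1 offers the best trade-off between load and channel quality here
print(select_enb(load=[0.9, 0.3, 0.8], channel_gain=[1.0, 0.9, 0.4]))  # 1
```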
Title: An Efficient eNB Selection and Traffic Scheduling Method for LTE Overlay IoT Communication Networks
Published in: 2021 IEEE Global Communications Conference (GLOBECOM)
Pub Date: 2021-12-01 | DOI: 10.1109/GLOBECOM46510.2021.9685617
Chang Tian, G. Huang, An Liu, Wu Luo
We investigate downlink transmission for multi-user multiple-input multiple-output (MU-MIMO) systems, in which the regularized zero-forcing (RZF) precoder is adopted and the power allocation and regularization factor are optimized. Our aim is to find a power allocation and regularization factor control policy that minimizes long-term average power consumption subject to a long-term delay constraint for each user. The induced optimization problem is formulated as a constrained Markov decision process (CMDP), which is efficiently solved by the proposed constrained deep reinforcement learning algorithm, called successive convex approximation policy optimization (SCAPO). SCAPO solves a sequence of convex objective/feasibility optimization problems obtained by replacing the objective and constraint functions in the original problem with convex surrogate functions. At each iteration, SCAPO only needs to estimate first-order information and solve a convex surrogate problem that can be efficiently tackled in parallel. Moreover, SCAPO can reuse old experiences from previous updates, significantly reducing the implementation cost. Numerical results show that SCAPO achieves state-of-the-art performance compared with advanced baselines.
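The RZF precoder itself has a standard closed form, W ∝ H^H (H H^H + αI)^{-1}. A small NumPy sketch (dimensions, the α value, and the total-power normalization are assumptions; the paper's contribution is learning α and the power allocation, not this formula):

```python
import numpy as np

def rzf_precoder(H, alpha, P=1.0):
    """Regularized zero-forcing: W = H^H (H H^H + alpha*I)^{-1}, scaled to power P."""
    K = H.shape[0]  # number of users
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
    return W * np.sqrt(P / np.trace(W @ W.conj().T).real)

rng = np.random.default_rng(1)
# 4 single-antenna users, 8 BS antennas; i.i.d. complex Gaussian channel
H = (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))) / np.sqrt(2)
W = rzf_precoder(H, alpha=0.01)
# The effective channel H @ W is near-diagonal: as alpha -> 0, RZF approaches
# zero-forcing and inter-user interference vanishes.
print(np.round(np.abs(H @ W), 2))
```

Larger α trades interference suppression for robustness at low SNR, which is why the regularization factor is worth optimizing jointly with power.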
Title: Delay-Aware Power Control for Downlink Multi-User MIMO via Constrained Deep Reinforcement Learning
Published in: 2021 IEEE Global Communications Conference (GLOBECOM)
Pub Date: 2021-12-01 | DOI: 10.1109/GLOBECOM46510.2021.9685180
Qiong Liu, Peng Yang, Feng Lyu, Ning Zhang, Li Yu
Traditional congestion control algorithms rely on various model-based methods to improve the end-to-end (E2E) performance of packet transmission. The resulting decisions quickly become less effective amid the dynamics of network conditions. To perform congestion control adaptively, reinforcement learning (RL) can be adopted to continuously learn the optimal strategy from the network environment. Oftentimes, the reward of such a learning problem is a weighted sum of multiple E2E performance metrics, such as throughput, delay, and fairness. Unfortunately, those weights can only be manually tuned through extensive experiments. To address this issue, in this paper, we design a constrained RL algorithm for congestion control named CRL-CC that adaptively tunes those weights, with the objective of effectively improving overall E2E packet transmission performance. In particular, the multi-objective optimization problem is first formulated as a constrained optimization problem. Then, the Lagrangian relaxation method is leveraged to transform the constrained optimization problem into a single-objective optimization problem, which is solved by designing a multi-objective reward function with Lagrangian multipliers. Extensive experiments based on OpenAI-Gym show that the proposed CRL-CC algorithm achieves higher overall performance in various network conditions. In particular, the CRL-CC algorithm outperforms the benchmark algorithm on Pantheon by 21.7%, 27.4%, and 5.3% in throughput, delay, and fairness, respectively.
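The Lagrangian-relaxation step can be sketched as reward shaping plus dual ascent on the multiplier: the constraint enters the reward weighted by λ, and λ itself rises while the constraint is violated rather than being hand-tuned. A toy version (the metrics, limit, learning rate, and episode data are assumptions; in CRL-CC this update runs inside an RL training loop):

```python
def shaped_reward(throughput, delay, delay_limit, lam):
    """Lagrangian relaxation: the constrained objective becomes one scalar
    reward; lam (the multiplier) is learned rather than hand-tuned."""
    return throughput - lam * (delay - delay_limit)

def dual_update(lam, delay, delay_limit, lr=0.1):
    """Dual ascent: raise lam while the delay constraint is violated,
    lower it (but never below zero) once the constraint is satisfied."""
    return max(0.0, lam + lr * (delay - delay_limit))

lam = 0.0
for delay in [12.0, 11.0, 10.5, 9.0]:   # measured episode delays (toy data)
    lam = dual_update(lam, delay, delay_limit=10.0)
print(round(lam, 2))  # 0.25 -- grew while delay exceeded the limit, then eased
```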
Title: Multi-Objective Network Congestion Control via Constrained Reinforcement Learning
Published in: 2021 IEEE Global Communications Conference (GLOBECOM)
Pub Date: 2021-12-01 | DOI: 10.1109/GLOBECOM46510.2021.9685645
Kevin Galligan, Amit Solomon, Arslan Riaz, M. Médard, R. Yazicigil, K. Duffy
We introduce Iterative GRAND (IGRAND), a universal product code decoder that applies iterative bounded distance decoding and decodes component codes using code-agnostic Guessing Random Additive Noise Decoding (GRAND). We empirically determine its accuracy and, based on GRAND hardware measurements, its complexity, showing gains over alternative algorithms. We prove that the class of product codes with random linear component codes, which IGRAND is capable of decoding, are capacity-achieving in hard-decision channels.
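The core GRAND loop — testing noise patterns from most to least likely until removing one yields a codeword — can be sketched for a single component code. Below it decodes a (7,4) Hamming code over a binary symmetric channel, where likelihood order is simply increasing Hamming weight; IGRAND's iterative bounded-distance wrapper around product codes is omitted.

```python
import numpy as np
from itertools import combinations

# Parity-check matrix of the (7,4) Hamming code: columns are binary 1..7
H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)])

def grand_decode(y, H, max_weight=3):
    """GRAND: guess noise patterns in decreasing-likelihood order
    (increasing weight on a BSC) until y minus the guess is a codeword."""
    n = H.shape[1]
    for w in range(max_weight + 1):
        for flips in combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(flips)] = 1
            c = (y + e) % 2                # in GF(2), subtraction == addition
            if not (H @ c % 2).any():      # zero syndrome -> codeword found
                return c
    return None                            # abandon guessing (decoding failure)

y = np.zeros(7, dtype=int); y[4] = 1       # all-zero codeword, one bit flipped
print(grand_decode(y, H))                  # recovers the all-zero codeword
```

The decoder never inspects the code's structure beyond membership testing, which is what makes GRAND code-agnostic.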
Title: IGRAND: decode any product code
Published in: 2021 IEEE Global Communications Conference (GLOBECOM)
Pub Date: 2021-12-01 | DOI: 10.1109/GLOBECOM46510.2021.9685898
Kunlun Wang, Yong Zhou, Qingqing Wu, Wen Hua Chen, Yang Yang
This paper investigates the task offloading problem in a hybrid intelligent reflecting surface (IRS) and massive multiple-input multiple-output (MIMO) relay assisted fog computing system, where multiple task nodes (TNs) offload their computational tasks via the IRS to computing nodes (CNs) near the massive MIMO relay node (MRN) and the fog access node (FAN) for execution. Considering a practical imperfect channel state information (CSI) model, we formulate a joint task offloading, IRS phase-shift optimization, and power allocation problem to minimize total energy consumption. We solve the resulting non-convex optimization problem in three steps. First, we solve the IRS phase-shift optimization problem with the semidefinite relaxation (SDR) algorithm. Then, we exploit a difference-of-convex (DC) optimization framework to determine the power allocation decision. Given the IRS phase shifts, the computational resources, and the power allocation, we propose an alternating optimization algorithm to find the jointly optimized results. The simulation results demonstrate the effectiveness of the proposed scheme as compared with other benchmark schemes.
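While the paper optimizes IRS phase shifts via SDR under imperfect CSI, the underlying intuition can be sketched with the perfect-CSI, single-link special case, where coherent phase alignment is optimal in closed form. The channel model, sizes, and this simplification are assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16                                                       # IRS elements
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)     # TN -> IRS links
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)     # IRS -> CN links

# Closed-form optimum for this toy case: each element cancels its cascaded
# phase, so all N reflected paths add coherently at the receiver.
theta = -(np.angle(h) + np.angle(g))
effective = np.sum(h * np.exp(1j * theta) * g)

# Coherent combining achieves the sum of magnitudes -- the upper bound
print(np.isclose(effective.real, np.sum(np.abs(h) * np.abs(g))))  # True
```

With multiple users or imperfect CSI no such closed form exists, which is why SDR and alternating optimization are needed in the paper.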
Title: Multi-Tier Task Offloading with Intelligent Reflecting Surface and Massive MIMO Relay
Published in: 2021 IEEE Global Communications Conference (GLOBECOM)
Pub Date: 2021-12-01 | DOI: 10.1109/GLOBECOM46510.2021.9685234
Pengwenlong Gu, Dingjie Zhong, Cunqing Hua, Farid Naït-Abdesselam, A. Serhrouchni, R. Khatoun
5G communications are expected to expand both the capacity and flexibility of future vehicular networks. However, due to the wide coverage of 5G-based networks, massive device access in the 5G era poses great challenges in access control and terminal management. To address the scalability issue in large-scale 5G-based vehicular networks, we propose two heuristic sharding schemes of different complexities based on the Determinantal Point Process (DPP). Specifically, in the proposed algorithms, the location and the wireless channel condition of a base station (BS) are jointly considered as the diversity and quality parameters of the DPP, respectively. Both schemes effectively control the size of each shard, ensure the shards are evenly distributed, and allow in-shard cooperation among the BSs. Communication robustness is thereby greatly improved, owing to the efficient in-shard cooperation, and the system sustains stable throughput even in scenarios where transaction volume changes dynamically.
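A DPP scores a subset by the determinant of a kernel combining per-item quality (diagonal) with pairwise similarity (off-diagonal), so determinant-maximizing selections are diverse. The greedy selection below is a much-simplified stand-in for the paper's sharding schemes; the kernel form, BS positions, and quality values are all assumptions.

```python
import numpy as np

def greedy_dpp_select(L, k):
    """Greedy MAP inference for a DPP: repeatedly add the item that most
    increases det(L_S). Diverse items keep the determinant large."""
    selected = []
    for _ in range(k):
        best, best_det = None, -np.inf
        for i in range(L.shape[0]):
            if i in selected:
                continue
            idx = selected + [i]
            d = np.linalg.det(L[np.ix_(idx, idx)])
            if d > best_det:
                best, best_det = i, d
        selected.append(best)
    return selected

# Two tight clusters of BS locations; similarity decays with squared distance
pos = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
sim = np.exp(-np.linalg.norm(pos[:, None] - pos[None, :], axis=-1) ** 2)
quality = np.ones(4)                      # equal channel quality (assumed)
L = quality[:, None] * sim * quality[None, :]
print(greedy_dpp_select(L, 2))            # one BS from each distant cluster
```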
Compared to benchmark schemes, simulation results of the proposed protocol and algorithms show significant performance gains in terms of coverage and load balancing.
Title: Scaling A Blockchain System For 5G-based Vehicular Networks Using Heuristic Sharding
Published in: 2021 IEEE Global Communications Conference (GLOBECOM)
Pub Date: 2021-12-01 | DOI: 10.1109/GLOBECOM46510.2021.9685230
Congzhou Li, Chunxi Li, Yongxiang Zhao, Baoxian Zhang, Cheng Li
How to effectively organize heterogeneous clients for model training is a critical issue in federated learning. Existing algorithms in this area all target single-model training and are unsuitable for parallel multi-model training because they under-utilize the resources of powerful clients. In this paper, we study multi-model training in federated learning. The objective is to effectively utilize the heterogeneous resources at clients for parallel multi-model training, thereby maximizing overall training efficiency while ensuring a certain fairness among individual models. To this end, based on measurement results, we introduce a logarithmic function to characterize the relationship between model training accuracy and the number of clients involved in training. We accordingly formulate multi-model training as an optimization problem that finds an assignment maximizing overall training efficiency while ensuring logarithmic fairness among individual models. We design a Logarithmic Fairness based Multi-model Balancing algorithm (LFMB), which iteratively replaces already-assigned models with a not-yet-assigned model at each client to improve training efficiency, until no further improvement can be found. Numerical results demonstrate the strong performance of LFMB in terms of overall training efficiency and fairness.
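A heavily simplified stand-in for the logarithmic-fairness objective: greedily hand each client to whichever model gains the most log-utility from one more participant. LFMB's actual swap-based search over heterogeneous clients is richer; the utility form and parameters below are assumptions.

```python
import numpy as np

def log_fair_utility(counts):
    """Overall objective: sum of log(clients assigned) per model -- the
    concave log rewards balance across models (illustrative form)."""
    return float(np.sum(np.log(np.asarray(counts, dtype=float))))

def greedy_assign(n_clients, n_models):
    """Give each remaining client to the model whose log-utility gains most."""
    counts = np.ones(n_models)                  # assume one seed client per model
    for _ in range(n_clients - n_models):
        gains = np.log(counts + 1) - np.log(counts)  # marginal utility per model
        counts[np.argmax(gains)] += 1
    return counts

print(greedy_assign(12, 3))  # [4. 4. 4.] -- the concave objective forces balance
```

Because log's marginal gain is largest for the model with the fewest clients, the greedy rule naturally equalizes assignments, mirroring the fairness LFMB targets.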
Title: An Efficient Multi-Model Training Algorithm for Federated Learning
Published in: 2021 IEEE Global Communications Conference (GLOBECOM)