Grant-Free Random Access in Massive MIMO Based LEO Satellite Internet of Things
Pub Date: 2021-07-28 | DOI: 10.1109/iccc52777.2021.9580408
Zhen Gao, Keke Ying, Chen He, Zhenyu Xiao, Dezhi Zheng, Jun Zhang
The low earth orbit (LEO) satellite based Internet of Things offers unique advantages in providing broad coverage of the earth with relatively low latency. This paper investigates the random access problem in massive multi-input multi-output (mMIMO) systems for LEO satellite communications (Satcom). Specifically, a training-sequence-based grant-free random access scheme is adopted to handle joint activity detection and channel estimation. Considering the limited power supply and hardware cost on board, a quantized compressive sensing algorithm is developed to mitigate the distortion introduced by low-resolution analog-to-digital converters. The expectation-maximization algorithm is then employed to learn the unknown parameters of the assumed prior. Simulation results verify the effectiveness of the proposed scheme.
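A minimal numerical sketch of the general idea (our illustration, not the paper's algorithm): active devices are detected from coarsely quantized pilot observations via group-sparse recovery. The dimensions, the uniform ADC model, and the proximal-gradient recovery below are all assumptions made for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, M, K = 100, 40, 8, 5                      # devices, pilot length, BS antennas, active devices
A = rng.standard_normal((L, N)) / np.sqrt(L)    # known training (pilot) matrix

X = np.zeros((N, M))                            # row-sparse channel matrix: nonzero rows = active devices
active = rng.choice(N, K, replace=False)
X[active] = rng.standard_normal((K, M))

Y = A @ X + 0.01 * rng.standard_normal((L, M))

def adc(y, bits=4, vmax=1.5):
    """Illustrative uniform low-resolution ADC."""
    step = 2 * vmax / (2 ** bits)
    return np.clip(np.round(y / step) * step, -vmax, vmax)

Yq = adc(Y)                                     # what the receiver actually observes

# Group-sparse recovery: proximal gradient with row-wise soft thresholding.
mu = 1.0 / np.linalg.norm(A, 2) ** 2            # step size from the spectral norm of A
lam = 0.3
Xhat = np.zeros_like(X)
for _ in range(300):
    Xhat = Xhat - mu * A.T @ (A @ Xhat - Yq)
    norms = np.linalg.norm(Xhat, axis=1, keepdims=True)
    Xhat *= np.maximum(1 - mu * lam / np.maximum(norms, 1e-12), 0.0)

detected = np.flatnonzero(np.linalg.norm(Xhat, axis=1) > 0.01)
print("true active :", sorted(active.tolist()))
print("detected    :", detected.tolist())
```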
{"title":"Grant-Free Random Access in Massive MIMO Based LEO Satellite Internet of Things","authors":"Zhen Gao, Keke Ying, Chen He, Zhenvu Xiao, Dezhi Zheng, Jun Zhang","doi":"10.1109/iccc52777.2021.9580408","DOIUrl":"https://doi.org/10.1109/iccc52777.2021.9580408","url":null,"abstract":"Low earth orbit (LEO) satellite based Internet of Things tend to exhibit unique advantages for broad coverage over the earth with relatively low latency. This paper investigates the random access problem in massive multi-input multi-output (mMIMO) systems for LEO satellite communications (Satcom). Specifically, a training sequence based grant-free random access scheme is adopted to deal with the joint activity detection and channel estimation. Considering the limited power supply and hardware cost onboard, a quantized compressive sensing algorithm is developed to mitigate the distortion introduced by low-resolution analog to digital converters. Expectation maximization algorithm is then employed to learn the unknown parameters in the prior assumption. Simulation results verify the effectiveness of our proposed scheme.","PeriodicalId":425118,"journal":{"name":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115103042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Age-aware Communication Strategy in Federated Learning with Energy Harvesting Devices
Pub Date: 2021-07-28 | DOI: 10.1109/iccc52777.2021.9580240
Xin Liu, Xiaoqi Qin, Hao Chen, Yiming Liu, Baoling Liu, Ping Zhang
Federated learning (FL) is a privacy-preserving distributed machine learning framework in which model training is distributed over end devices to fully exploit scattered computation capability and training data. Unlike centralized machine learning, where the convergence time is determined by the number of training rounds, under FL the convergence time also depends on the communication delay and the computation delay of local training in each round. We therefore employ the total training delay as the performance metric in our strategy design. Note that the training delay per round is sensitive to limited wireless resources and to system heterogeneity, since end devices have different computational and communication capabilities. To achieve timely parameter aggregation over limited spectrum, we incorporate the age of parameters, defined as the number of rounds elapsed since a device last uploaded its parameters, into device scheduling for each training round. Moreover, since the diversity of uploaded parameters is important for training performance over data with non-IID distributions, we exploit energy harvesting technology to prevent device drop-outs during the training process. In this paper, we propose an age-aware communication strategy for federated learning over wireless networks that jointly considers the staleness of parameters and the heterogeneous capabilities of end devices to realize fast and accurate model training. Numerical results demonstrate the effectiveness and accuracy of the proposed strategy.
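As a rough illustration of age-aware scheduling (not the paper's exact policy), the sketch below selects, in every round, the K devices whose parameters are most stale, discounted by their per-round delay; the device count, delays, and scoring rule are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
N_DEVICES, K, ROUNDS = 20, 5, 10
delay = rng.uniform(0.5, 3.0, N_DEVICES)     # heterogeneous comm + comp delay per device (s)
age = np.zeros(N_DEVICES, dtype=int)         # rounds since each device last uploaded

for t in range(ROUNDS):
    score = (age + 1) / delay                # stale but fast devices score highest
    chosen = np.argsort(score)[-K:]          # schedule the K best-scoring devices
    age += 1
    age[chosen] = 0                          # scheduled devices upload this round
    round_delay = delay[chosen].max()        # synchronous aggregation waits for the slowest device
    print(f"round {t}: devices {sorted(chosen.tolist())}, round delay {round_delay:.2f}s")
```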
{"title":"Age-aware Communication Strategy in Federated Learning with Energy Harvesting Devices","authors":"Xin Liu, Xiaoqi Qin, Hao Chen, Yiming Liu, Baoling Liu, Ping Zhang","doi":"10.1109/iccc52777.2021.9580240","DOIUrl":"https://doi.org/10.1109/iccc52777.2021.9580240","url":null,"abstract":"Federated learning is considered as a privacy-preserving distributed machine learning framework, where the model training is distributed over end devices by fully exploiting scattered computation capability and training data. Different from centralized machine learning where the convergence time is decided by number of training rounds, under the framework of FL, the convergence time also depends on the communication delay and computation delay for local training in each round. Therefore, we employ total training delay as the performance metric in our strategy design. Note that the training delay per round is prone to the limited wireless resources and system heterogeneity, where end devices have different computational and communication capabilities. To achieve timely parameter aggregation over limited spectrum, we incorporate age of parameter in device scheduling for each training round, which is defined as the number of rounds elapsed since last time of parameter uploading. Moreover, since diversity of uploaded parameters is important for training performance over data with non-IID distributions, we exploit energy harvesting technology to prevent device drop-outs during training process. In this paper, we propose an age-aware communication strategy for federated learning over wireless networks, by jointly considering the staleness of parameters and heterogeneous capabilities at end devices to realize fast and accurate model training. Numerical results demonstrate the effectiveness and accuracy of our proposed strategy.","PeriodicalId":425118,"journal":{"name":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114454651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unsourced Random Access with a Massive MIMO Receiver: Exploiting Angular Domain Sparsity
Pub Date: 2021-07-28 | DOI: 10.1109/iccc52777.2021.9580441
Xinyu Xie, Yongpeng Wu
This paper investigates the unsourced random access (URA) scheme to accommodate a large number of machine-type users communicating with a massive MIMO base station. Existing works adopt a slotted transmission strategy to reduce system complexity and operate under the framework of coupled compressed sensing (CCS), concatenating an outer tree code with an inner compressed sensing code for message stitching. We observe that the sparse angular-domain MIMO channel can help decouple the CCS scheme, and we introduce an uncoupled slotted transmission scheme without the tree encoder/decoder. We propose a novel MRF-GAMP method that captures the structured sparsity of the angular-domain channel for activity detection and channel estimation. Messages are then reconstructed by grouping strongly correlated slot-wise channels with a clustering algorithm. Extensive simulations show that our approach achieves better error performance and higher spectral efficiency than the CCS scheme.
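The stitching idea can be illustrated with a toy experiment (not the paper's decoder): fragments transmitted in different slots by the same user see nearly the same angular-domain channel, so correlating slot-wise channel estimates reassembles the message fragments. All dimensions and the greedy matching rule below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
USERS, SLOTS, ANT = 4, 3, 32
channels = rng.standard_normal((USERS, ANT))     # per-user angular-domain channel

# Noisy per-slot estimates; within each slot the order is shuffled, because a
# slot-wise estimate carries no explicit user identity.
est, labels = [], []
for s in range(SLOTS):
    perm = rng.permutation(USERS)
    est.append(channels[perm] + 0.05 * rng.standard_normal((USERS, ANT)))
    labels.append(perm)

# Greedy stitching: match every later-slot estimate to the most correlated slot-0 estimate.
ref = est[0] / np.linalg.norm(est[0], axis=1, keepdims=True)
for s in range(1, SLOTS):
    cur = est[s] / np.linalg.norm(est[s], axis=1, keepdims=True)
    match = np.abs(cur @ ref.T).argmax(axis=1)   # for each fragment, the slot-0 fragment it joins
    frac = (labels[0][match] == labels[s]).mean()
    print(f"slot {s}: fraction of fragments stitched to the right user = {frac:.2f}")
```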
{"title":"Unsourced Random Access with a Massive MIMO Receiver: Exploiting Angular Domain Sparsity","authors":"Xinyu Xie, Yongpeng Wu","doi":"10.1109/iccc52777.2021.9580441","DOIUrl":"https://doi.org/10.1109/iccc52777.2021.9580441","url":null,"abstract":"This paper investigates the unsourced random access (URA) scheme to accommodate a large amount of machine-type users communicating to a massive MIMO base station. Existing works adopt a slotted transmission strategy to reduce system complexity and operate under the framework of coupled compressed sensing (CCS), concatenating an outer tree code to an inner compressed sensing code for message stitching. We observe that the sparse angular domain MIMO channel can help decouple the CCS scheme and introduce an uncoupled slotted transmission scheme without the tree encoder/decoder. We propose a novel MRF-GAMP method capturing the structured sparsity of the angular domain channel for activity detection and channel estimation. Then, message reconstruction is based on rearranging strongly correlated slot-wise channels into groups by a clustering algorithm. Extensive simulation shows that our approach achieves a better error performance and a higher spectral efficiency compared to the CCS scheme.","PeriodicalId":425118,"journal":{"name":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128608800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Learning based Antenna Selection and CSI Extrapolation in Massive MIMO Systems
Pub Date: 2021-07-28 | DOI: 10.1109/iccc52777.2021.9580209
Bo Lin, F. Gao, Shun Zhang, Ting Zhou, A. Alkhateeb
A critical bottleneck of massive multiple-input multiple-output (MIMO) systems is the huge training overhead incurred by downlink transmission tasks such as channel estimation, downlink beamforming, and covariance observation. In this paper, we propose to use the channel state information (CSI) of a small number of antennas to extrapolate the CSI of the other antennas and thereby reduce the training overhead. Specifically, we design a deep neural network, which we call an antenna domain extrapolation network (ADEN), that can exploit the correlation among antennas. We then propose a deep learning (DL) based antenna selection network (ASN) that selects a limited number of antennas to optimize the extrapolation, a combinatorial optimization problem that is conventionally difficult to solve. We carefully design a constrained degradation algorithm to generate a differentiable approximation of the discrete antenna selection vector, so that back-propagation through the neural network is guaranteed. Numerical results show that the proposed ADEN outperforms the traditional fully connected network, and the antenna selection scheme learned by the ASN is much better than the commonly used uniform selection.
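To make the differentiable-selection idea concrete, here is one generic relaxation of a one-hot selection vector (a temperature-annealed softmax), shown only for illustration; the paper's constrained degradation algorithm is its own construction and may differ substantially.

```python
import numpy as np

def soft_selection(logits, tau):
    """Relaxed one-hot selection vector; approaches a hard one-hot as tau -> 0."""
    z = logits / tau
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = np.array([0.2, 1.3, 0.7, 0.1])        # learnable scores for 4 candidate antennas
for tau in (1.0, 0.3, 0.05):
    s = soft_selection(logits, tau)
    print(f"tau={tau:>4}: {np.round(s, 3)}")   # mass concentrates on antenna 1 as tau shrinks
```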
{"title":"Deep Learning based Antenna Selection and CSI Extrapolation in Massive MIMO Systems","authors":"Bo Lin, F. Gao, Shun Zhang, Ting Zhou, A. Alkhateeb","doi":"10.1109/iccc52777.2021.9580209","DOIUrl":"https://doi.org/10.1109/iccc52777.2021.9580209","url":null,"abstract":"A critical bottleneck of massive multiple-input multiple-output (MIMO) system is the huge training overhead caused by downlink transmission, like channel estimation, downlink beamforming and covariance observation. In this paper, we propose to use the channel state information (CSI) of a small number of antennas to extrapolate the CSI of the other antennas and reduce the training overhead. Specifically, we design a deep neural network that we call an antenna domain extrapolation network (ADEN) that can exploit the correlation function among antennas. We then propose a deep learning (DL) based antenna selection network (ASN) that can select a limited antennas for optimizing the extrapolation, which is conventionally a type of combinatorial optimization and is difficult to solve. We trickly designed a constrained degradation algorithm to generate a differentiable approximation of the discrete antenna selection vector such that the back-propagation of the neural network can be guaranteed. Numerical results show that the proposed ADEN outperforms the traditional fully connected one, and the antenna selection scheme learned by ASN is much better than the trivially used uniform selection.","PeriodicalId":425118,"journal":{"name":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134387302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Hybrid Load Forecasting Method Based on Neural Network in Smart Grid
Pub Date: 2021-07-28 | DOI: 10.1109/iccc52777.2021.9580230
Jingyi Zhang, Wenpeng Jing, Zhaoming Lu, Yueting Wang, X. Wen
Power load forecasting is of great significance for ensuring the smooth operation of the smart grid. Because load generation and consumption are related to factors both internal to the grid and external to it in the environment, reliable and accurate power load forecasting is undoubtedly challenging. Since weather factors are often the leading causes affecting the power generation load in the smart grid, especially in distributed photovoltaic generation, in this paper we propose a load forecasting method that forecasts the generated load under different weather conditions. We first investigate the combined effect of various weather factors on the power load. Specifically, parametric regression models are used to analyse the relationship between the power load and weather factors. Secondly, a hybrid forecasting method based on a Multilayer Perceptron (MLP) neural network is proposed to achieve reliable and accurate power load forecasting under various weather conditions. Unlike existing works, we not only take the weather factors into account, but also integrate the corresponding parametric models as additional inputs of the MLP neural network to predict the power load. More importantly, a modified extreme learning machine (ELM) based hierarchical learning algorithm is introduced to train the formulated model. As a result, the training process of the neural network is accelerated in the sense that the number of iterations is reduced, while the learning accuracy is still guaranteed. The proposed method is evaluated on a real dataset consisting of meteorological factors and corresponding load data. The results show that the proposed method outperforms existing algorithms in prediction accuracy: the mean square error (MSE) and root mean squared error (RMSE) of the prediction are reduced by 36.28% and 20.18%, respectively, which ensures the reliability of the power load forecasting.
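The ELM ingredient (a random, fixed hidden layer with output weights solved by least squares) is easy to sketch. The snippet below trains a plain ELM regressor on synthetic weather-like features; it is not the paper's modified hierarchical variant, and the feature names and data are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (500, 3))                      # e.g. temperature, irradiance, humidity (synthetic)
y = 2.0 * X[:, 1] - 0.5 * X[:, 0] * X[:, 2] + 0.05 * rng.standard_normal(500)

H_DIM = 64
W = rng.standard_normal((3, H_DIM))                  # random hidden weights, never trained
b = rng.standard_normal(H_DIM)
H = np.tanh(X @ W + b)                               # hidden-layer features
beta, *_ = np.linalg.lstsq(H, y, rcond=None)         # output weights in closed form (no iterations)

pred = H @ beta
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```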
{"title":"A Hybrid Load Forecasting Method Based on Neural Network in Smart Grid","authors":"Jingyi Zhang, Wenpeng Jing, Zhaoming Lu, Yueting Wang, X. Wen","doi":"10.1109/iccc52777.2021.9580230","DOIUrl":"https://doi.org/10.1109/iccc52777.2021.9580230","url":null,"abstract":"Power load forecasting is of great significance to ensure the smooth operation of smart grid. Because the load generation and consumption are related to the grid internal and environmental factors external, reliable and accurate power load forecasting is undoubtedly challenging in smart grid. Since weather factors are always the leading causes that affecting power generation load in smart grid, especially in distributed photovoltaic power generation, we propose a load forecasting method to realize the forecast of the generated load under different weather conditions in this paper. We firstly investigates the combined effect of various weather factors on power load comprehensively. Specially, the parametric regression models are utilized to analyse the relationship between the power load and weather factors. Secondly, a hybrid forecasting method based on Multilayer Perceptron (MLP) neural network is proposed to achieve reliable and accurate power load forecasting of various weather conditions. Different from the existing works, we not only take into account the weather factors, but also select corresponding parametric models integrated as the additional input of the MLP neural network to predict the power load. More importantly, a modified extreme learning machine (ELM) based hierarchical learning algorithm is introduced to train the formulated model. As a result, the training process of the neutral network can be accelerated in the sense that iteration times are reduced, in which case the learning accuracy can also be guaranteed. The proposed method is evaluated on the real dataset which consist of meteorological factors and corresponding load data. The results show the proposed method outperforms the existing algorithms in prediction accuracy. The prediction error Mean Square Error(MSE) and Root Mean Squared Error(RMSE) can be reduced by 36.28% and 20.18% respectively, which ensure the reliability of the power load forecasting.","PeriodicalId":425118,"journal":{"name":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129328368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SDN Controller Placement in LEO Satellite Networks Based on Dynamic Topology
Pub Date: 2021-07-28 | DOI: 10.1109/iccc52777.2021.9580367
Jianming Guo, Lei Yang, David Rincón Rivera, S. Sallent, Chengguang Fan, Quan Chen, Xuanran Li
Software-defined networking (SDN) logically separates the control and data-forwarding planes, which opens the way to more flexible configuration and management of low-Earth-orbit satellite networks. A significant challenge in SDN is the controller placement problem (CPP). Due to characteristics such as the dynamic network topology and limited bandwidth, the CPP is quite complex in satellite networks. In this paper, we propose a static placement with dynamic assignment (SPDA) method that does not rely on a high-bandwidth assumption, and we formulate the CPP as a mixed integer programming model. The dynamic topology is taken into account by effectively dividing time into snapshots. Real satellite constellations are adopted to evaluate the performance of our controller placement solution. The results show that SPDA outperforms existing methods and reduces the switch-controller latency in both the average and worst cases.
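A toy version of static placement with dynamic assignment (not the paper's MIP over real constellations): exhaustively try controller placements on a small random latency model, assigning each switch to its nearest placed controller in every snapshot. Sizes and latency values are invented.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
N_SAT, N_CTRL, SNAPSHOTS = 8, 2, 3
# latency[t, i, j]: switch i to candidate controller location j at snapshot t (ms)
latency = rng.uniform(5, 60, (SNAPSHOTS, N_SAT, N_SAT))
for t in range(SNAPSHOTS):
    np.fill_diagonal(latency[t], 1.0)            # a switch co-located with a controller is "free"

best = None
for placement in combinations(range(N_SAT), N_CTRL):   # static placement: fixed over all snapshots
    # dynamic assignment: each switch picks its nearest placed controller per snapshot
    per_snapshot = latency[:, :, placement].min(axis=2)
    avg, worst = per_snapshot.mean(), per_snapshot.max()
    if best is None or avg < best[1]:
        best = (placement, avg, worst)

print(f"placement {best[0]}: avg latency {best[1]:.1f} ms, worst {best[2]:.1f} ms")
```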
{"title":"SDN Controller Placement in LEO Satellite Networks Based on Dynamic Topology","authors":"Jianming Guo, Lei Yang, David Rincón Rivera, S. Sallent, Chengguang Fan, Quan Chen, Xuanran Li","doi":"10.1109/iccc52777.2021.9580367","DOIUrl":"https://doi.org/10.1109/iccc52777.2021.9580367","url":null,"abstract":"Software-defined networking (SDN) logically separates the control and data-forward planes, which opens the way to a more flexible configuration and management for low-Earth orbit satellite networks. A significant challenge in SDN is the controller placement problem (CPP). Due to the characteristics such as the dynamic network topology and limited bandwidth, CPP is quite complex in satellite networks. In this paper, we propose a static placement with dynamic assignment (SPDA) method without high bandwidth assumption, and formulate CPP into a mixed integer programming model. The dynamic topology is taken into account by effectively dividing time snapshots. Real satellite constellations are adopted to evaluate the performance of our controller placement solution. The results show that SPDA outperforms existing methods and can reduce the switch-controller latency in both average and worst cases.","PeriodicalId":425118,"journal":{"name":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116778446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beams Selection for MmWave Multi-Connection Based on Sub-6GHz Predicting and Parallel Transfer Learning
Pub Date: 2021-07-28 | DOI: 10.1109/iccc52777.2021.9580346
Huajiao Chen, Changyin Sun, Fan Jiang, Jing Jiang
To meet increasing wireless data demands, leveraging the millimeter wave (mmWave) frequency band has become imperative for 5G systems due to its rich spectrum resources and greater bandwidth. In mmWave communication systems, multi-connection is an indispensable key technology: the coordinated service of multiple links gives users more wireless resources and ensures mobility robustness. However, mmWave multi-connection faces challenges in the beam selection process: (i) serial beam search over multiple links takes much longer than for a single link, with large search overhead and high hardware complexity; (ii) with multi-connection parallel transmission, mutual interference between beams leads to low multiplexing gain; (iii) conventional codebooks produce non-standard (non-pencil-shaped) beam patterns, which makes it difficult to reduce inter-beam interference by relying on different codebooks alone. In response to these problems, this paper uses the sub-6GHz channel and a deep neural network (DNN) to enhance beam search for mmWave multi-connection. Specifically, the spatial correlation between the low frequency band and the mmWave band is exploited to map sub-6GHz channel information to mmWave beam indices. To speed up the beam search, a parallel deep neural network with transfer learning is proposed to predict the best beams for a user's multiple links. Simulation results show that sub-6GHz channel information can effectively predict the optimal mmWave beams for a multi-connected user, and the parallel transfer learning structure helps reduce interference and training overhead. As a result, a near-optimal system sum-rate can be achieved.
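A minimal sketch of the sub-6GHz-to-beam mapping idea: a softmax classifier trained on synthetic sub-6GHz features to predict the best mmWave beam index. The paper uses a parallel DNN with transfer learning; the data model, features, and classifier here are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)
N, BEAMS = 2000, 8
angle = rng.uniform(-1, 1, N)                                       # normalized angle seen at sub-6GHz
X = np.stack([angle, angle ** 2], axis=1) + 0.02 * rng.standard_normal((N, 2))
y = np.clip(((angle + 1) / 2 * BEAMS).astype(int), 0, BEAMS - 1)    # index of the best mmWave beam (toy rule)

W, b = np.zeros((2, BEAMS)), np.zeros(BEAMS)
for _ in range(3000):                                               # plain gradient descent on cross-entropy
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(N), y] -= 1                                         # softmax gradient w.r.t. logits
    W -= 1.0 * X.T @ p / N
    b -= 1.0 * p.mean(axis=0)

acc = ((X @ W + b).argmax(axis=1) == y).mean()
print(f"top-1 beam prediction accuracy (training set): {acc:.2f}")
```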
{"title":"Beams Selection for MmWave Multi-Connection Based on Sub-6GHz Predicting and Parallel Transfer Learning","authors":"Huajiao Chen, Changyin Sun, Fan Jiang, Jing Jiang","doi":"10.1109/iccc52777.2021.9580346","DOIUrl":"https://doi.org/10.1109/iccc52777.2021.9580346","url":null,"abstract":"To meet the increasing wireless data demands, leveraging millimeter wave(mmWave) frequency band has become imperative for 5G systems due to the rich spectrum resources and greater bandwidth. In mmWave communication systems, multi-connection is an indispensable key technology, where the coordinated service of multiple links will enable users to get more wireless resources and ensure mobile robustness. However, mmWave multi-connections face challenges in beams selection process: (i) The time of multi-link serial search is long relative to single link, and the search overhead is large and the hardware complexity is high; (ii) In the case of multi-connection parallel transmission, the mutual interference between beams results in low multiplexing gain; (iii) The conventional codebook produces non-standard (non-pencil-shaped) beam shapes, which makes it difficult to reduce inter-beam interference only by relying on different codebooks. In response to the above problems, this paper uses sub-6GHz channel and deep neural network (DNN) to enhance beam search for mmWave multi-connection. Specifically, the spatial correlation between the low frequency band and the mmWave frequency band is exploited to map the sub-6GHz channel information to the mmWave beam index. To speed beams search process, a parallel deep neural network with transfer learning is proposed to predict the best beams for multi-links of a user. Simulation results show that the sub-6G Hz channel information can be used to effectively predict the optimal mmWave beams for multi-connected user, and the parallel transfer learning structure can facilitate in reducing interference and training overhead. As a result, near-optimal system sum-rate can be achieved.","PeriodicalId":425118,"journal":{"name":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122068240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust Resource Scheduling for Air-Ground Cooperative Mobile Edge Computing
Pub Date: 2021-07-28 | DOI: 10.1109/iccc52777.2021.9580344
Yiwei Lu, Yang Huang, Tianyu Hu
Mobile edge computing (MEC) is a novel technology for enhancing the computation capacity of user equipment (UEs) by offloading the computation-intensive tasks at UEs to a base station. In the context of UAV-mounted MEC, the state of the art only addresses the optimization of offloading and wireless/computing resource allocation in the presence of air-ground channels. In contrast, this paper addresses the optimization considering both the time-varying/random terrestrial channels and the line-of-sight air-ground channels, where a robust optimization problem is formulated to minimize the energy consumption of the UAV and the UEs. To develop a resource scheduling scheme that enables energy-efficient air-ground cooperative MEC, we propose a joint iterative optimization algorithm that exploits the weighted mean square error approach and the S-procedure. Numerical results demonstrate that, compared to various baseline schemes, the proposed algorithm can effectively reduce the energy consumption in the presence of a large number of input tasks. Compared with non-robust schemes, the proposed algorithm reduces the energy consumption more stably.
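The value of robustness can be illustrated with a toy offloading decision (not the paper's WMMSE/S-procedure algorithm): each UE offloads only if the worst-case channel within a bounded uncertainty region still makes offloading cheaper than computing locally. All constants below (task sizes, energy model, channel model) are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
UES = 6
bits = rng.uniform(1e5, 1e6, UES)                 # task size per UE (bits)
cycles_per_bit, kappa, f_local = 1000, 1e-27, 1e9
E_local = kappa * cycles_per_bit * bits * f_local ** 2    # classic local-computation energy model

g_est = rng.uniform(1e-7, 1e-6, UES)              # estimated uplink channel gain
eps = 0.3 * g_est                                 # bounded estimation error
P_tx, B, N0 = 0.1, 1e6, 1e-13                     # tx power (W), bandwidth (Hz), noise PSD (W/Hz)

def offload_energy(g):
    rate = B * np.log2(1 + P_tx * g / (N0 * B))
    return P_tx * bits / rate                     # energy to push the task over the air

E_off_worst = offload_energy(g_est - eps)         # robust: assume the worst channel in the region
E_off_nominal = offload_energy(g_est)             # non-robust: trust the estimate

print("offload (robust)    :", (E_off_worst < E_local).astype(int))
print("offload (non-robust):", (E_off_nominal < E_local).astype(int))
```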
{"title":"Robust Resource Scheduling for Air-Ground Cooperative Mobile Edge Computing","authors":"Yiwei Lu, Yang Huang, Tianyu Hu","doi":"10.1109/iccc52777.2021.9580344","DOIUrl":"https://doi.org/10.1109/iccc52777.2021.9580344","url":null,"abstract":"Mobile edge computing (MEC) is a novel technology for enhancing the computation capacity of user equipment (UEs), by offloading the computation-intensive tasks at UEs to a base station. In the context of UAV-mounted MEC, state of the art only addresses the optimization of offloading and wireless/computing resource allocation in the presence of air-ground channels. In contrast, this paper addresses the optimization, considering both the time-varying/random terrestrial channels and the line-of-sight air-ground channels, where a robust optimization problem is formulated so as to minimize the energy consumption of the UAV and the UEs. In order to develop a resource scheduling scheme which enables energy-efficient air-ground cooperative MEC, we propose a joint iterative optimization algorithm by exploiting the weighted mean square error approach and S-procedure. Numerical results demonstrate that, compared to various baseline schemes, the proposed algorithm can effectively reduce the energy consumption in the presence of a large number of input tasks. Compared with the non-robust schemes, the proposed algorithm can reduce the energy consumption more stably.","PeriodicalId":425118,"journal":{"name":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123638147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scalable Network Coding over Embedded Fields
Pub Date: 2021-07-28 | DOI: 10.1109/iccc52777.2021.9580416
Hanqi Tang, Ruobin Zheng, Zongpeng Li, Q. T. Sun
In complex network environments, there always exist heterogeneous devices with different computational powers. In this work, we propose a novel scalable random linear network coding (RLNC) framework based on a chain of embedded fields, so as to endow heterogeneous receivers with different decoding capabilities. In this framework, the source linearly combines the original packets over the embedded fields in an encoding matrix and then combines the coded packets over GF(2) before transmission into the network. Based on the arithmetic compatibility of the embedded fields in the encoding process, we derive a necessary and sufficient condition for decodability over these fields of different sizes. Moreover, we theoretically study the construction of an encoding matrix that is optimal in terms of decodability. Numerical analysis in classical wireless broadcast networks illustrates that the proposed scalable RLNC not only provides good decoding compatibility across different fields, but also outperforms classical RLNC in terms of decoding complexity.
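As a baseline for the mechanics involved, the sketch below encodes and decodes plain RLNC over GF(2): random coefficients, a decodability (rank) check, and Gauss-Jordan elimination. The embedded-fields layering that the paper builds on top of this is not reproduced, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
K, PKT_LEN, N_CODED = 4, 16, 6
packets = rng.integers(0, 2, (K, PKT_LEN), dtype=np.uint8)   # original packets (bit vectors)

G = rng.integers(0, 2, (N_CODED, K), dtype=np.uint8)         # random GF(2) coding coefficients
coded = (G @ packets) % 2                                    # coded packets sent into the network

def gf2_solve(A, B):
    """Solve A X = B over GF(2) by Gauss-Jordan; returns X, or None if A lacks full column rank."""
    A, B = A.copy() % 2, B.copy() % 2
    n, row = A.shape[1], 0
    for col in range(n):
        piv = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
        if piv is None:
            return None                                      # not decodable from these coded packets
        A[[row, piv]], B[[row, piv]] = A[[piv, row]], B[[piv, row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                B[r] ^= B[row]
        row += 1
    return B[:n]

decoded = gf2_solve(G, coded)
print("decodable:", decoded is not None,
      "| lossless:", decoded is not None and np.array_equal(decoded, packets))
```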
{"title":"Scalable Network Coding over Embedded Fields","authors":"Hanqi Tang, Ruobin Zheng, Zongpeng Li, Q. T. Sun","doi":"10.1109/iccc52777.2021.9580416","DOIUrl":"https://doi.org/10.1109/iccc52777.2021.9580416","url":null,"abstract":"In complex network environments, there always exist heterogeneous devices with different computational powers. In this work, we propose a novel scalable random linear network coding (RLNC) framework based on a chain of embedded fields, so as to endow heterogeneous receivers with different decoding capabilities. In this framework, the source linearly combines the original packets over embedded fields in an encoding matrix and then combines the coded packets over GF(2) before transmission to the network. Based on the arithmetic compatibility over embedded fields in the encoding process, we derive a sufficient and necessary condition for decodability over these fields of different sizes. Moreover, we theoretically study the construction of an optimal encoding matrix in terms of decodability. The numerical analysis in classical wireless broadcast networks illustrates that the proposed scalable RLNC not only provides a nice decoding compatibility over different fields, but also performs better than classical RLNC in terms of decoding complexity.","PeriodicalId":425118,"journal":{"name":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125097408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal Configuration of Intelligent Walls for Interference Management in Smart Buildings
Pub Date: 2021-07-28 | DOI: 10.1109/iccc52777.2021.9580397
Jun Zong, Fuqian Yang, Yong Zhou, H. Qian, Xiliang Luo
In this paper, we investigate the optimal configuration of intelligent walls (IWs) installed in smart buildings. In particular, by controlling the states of the IWs, the indoor wireless propagation environment can be judiciously adjusted to maximize system performance. Since the total number of feasible configuration patterns grows exponentially with the number of IWs, it becomes impractical to search for the optimal configuration exhaustively when the number of IWs is large. To address this problem, we first prove that the optimal system performance can be achieved by exploiting only a limited number of IW configuration patterns. Furthermore, we put forth an efficient low-complexity algorithm to identify this small set of optimal patterns. Numerical results are provided to verify the proposed scheme.
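For contrast with the paper's reduced search, here is the brute-force baseline on an invented toy model: enumerate every wall on/off pattern and keep the one with the best sum-rate. The signal/interference model and all numbers are assumptions; the paper's pattern-pruning algorithm is not shown.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(8)
N_WALLS, N_USERS = 6, 4
signal = rng.uniform(1.0, 4.0, N_USERS)                     # desired receive power per user
interference = rng.uniform(0.1, 1.0, (N_WALLS, N_USERS))    # interference leaked if a wall is transparent
signal_loss = rng.uniform(0.0, 1.5, (N_WALLS, N_USERS))     # desired-signal loss if a wall absorbs

best_rate, best_cfg = -np.inf, None
for cfg in product((0, 1), repeat=N_WALLS):                 # 1 = absorbing, 0 = transparent
    on = np.array(cfg)
    sinr = np.clip(signal - on @ signal_loss, 0.05, None) / (0.1 + (1 - on) @ interference)
    rate = np.log2(1 + sinr).sum()
    if rate > best_rate:
        best_rate, best_cfg = rate, cfg

print(f"best of {2 ** N_WALLS} patterns: {best_cfg}, sum-rate {best_rate:.2f} bit/s/Hz")
```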
{"title":"Optimal Configuration of Intelligent Walls for Interference Management in Smart Buildings","authors":"Jun Zong, Fuqian Yang, Yong Zhou, H. Qian, Xiliang Luo","doi":"10.1109/iccc52777.2021.9580397","DOIUrl":"https://doi.org/10.1109/iccc52777.2021.9580397","url":null,"abstract":"In this paper, we investigate the optimal configuration of the intelligent walls (IW s) installed in smart buildings. In particular, by controlling the states of IW s, the indoor wireless propagation environment can be judiciously adjusted to maximize the system performance. Since the total number of feasible configuration patterns increases exponentially with the number of the IW s, it becomes impractical to search for the optimal configuration in an exhaustive way when the number of IWs gets large. To address such a problem, we first prove that the optimal system performance can be achieved by only exploiting a limited number of IW configuration patterns. Furthermore, we put forth one efficient algorithm of low complexity to identify the small set of optimal patterns. Numerical results are also provided to verify the proposed scheme.","PeriodicalId":425118,"journal":{"name":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129270649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}