Pub Date: 2020-06-01 | DOI: 10.1109/ICC40277.2020.9148665
Nazih Salhab, Rana Rahim, R. Langar, R. Boutaba
Cloud computing is being increasingly embraced by telecommunication operators for on-demand access to computing resources. Since the 5G Core reference architecture is envisioned to be cloud-native and service-oriented, we propose, in this paper, offloading some delay-tolerant 5G Network Functions, in particular the Network Data Analytics Function (NWDAF), to the cloud. Dynamically selecting cloud resources to serve the offloaded 5G-NWDAF, while incurring minimum cost and maximizing the utilization of the served next generation Node-Bs (gNBs), requires agility and automation. This paper introduces a framework that automates this selection process, satisfying resource demands while meeting two objectives: cost minimization and utilization maximization. We first formulate the mapping of gNBs to 5G-NWDAF instances as an Integer Linear Program (ILP). Then, we propose an algorithm to solve it based on the branch-cut-and-price technique, which combines branch-and-price, branch-and-cut, and branch-and-bound. Results using pricing data from a public cloud provider (Google Cloud Platform) show that our proposal achieves significant savings in cloud computing costs and reductions in execution time compared with state-of-the-art frameworks.
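As a rough illustration of the kind of ILP the abstract formulates, a toy gNB-to-NWDAF mapping can be solved by exhaustive search. This is a sketch only, not the authors' branch-cut-and-price algorithm; all demands, capacities, and prices below are hypothetical.

```python
# Toy version of the gNB-to-cloud-instance mapping: minimize the cost of
# opened instances, and among equal-cost solutions prefer higher
# utilization. Solved by brute force instead of branch-cut-and-price.
from itertools import product

def map_gnbs(demands, capacities, costs):
    """demands[i]: resource units needed by gNB i;
    capacities[j], costs[j]: size and price of cloud instance j."""
    n, m = len(demands), len(capacities)
    best = None
    for assign in product(range(m), repeat=n):
        load = [0] * m
        for i, j in enumerate(assign):
            load[j] += demands[i]
        if any(load[j] > capacities[j] for j in range(m)):
            continue  # violates an instance's capacity
        opened = set(assign)
        cost = sum(costs[j] for j in opened)
        util = sum(load[j] for j in opened) / sum(capacities[j] for j in opened)
        key = (cost, -util)  # lexicographic: cost first, then utilization
        if best is None or key < best[0]:
            best = (key, assign)
    return best[1], best[0][0], -best[0][1]

assign, cost, util = map_gnbs([2, 3, 1], [4, 4], [10, 12])
```

With these numbers the three gNBs cannot fit on one instance, so both are opened (cost 22) at 75% utilization; a real instance of the problem would be handed to an ILP solver rather than enumerated.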
Title: Offloading Network Data Analytics Function to the Cloud with Minimum Cost and Maximum Utilization
Published in: ICC 2020 - 2020 IEEE International Conference on Communications (ICC)
Pub Date: 2020-06-01 | DOI: 10.1109/ICC40277.2020.9148872
Shuai Wang, Rui Wang, Qi Hao, Yik-Chung Wu, H. Poor
While machine-type communication (MTC) devices generate massive amounts of data, they often cannot process it due to limited energy and computation power. To this end, edge intelligence has been proposed, which collects distributed data and performs machine learning at the edge. However, this paradigm needs to maximize learning performance rather than communication throughput, for which the celebrated water-filling and max-min fairness algorithms become inefficient, since they allocate resources merely according to the quality of the wireless channels. This paper proposes a learning centric power allocation (LCPA) method, which allocates radio resources based on an empirical classification-error model. To gain insight into LCPA, an asymptotically optimal solution is derived. The solution shows that the transmit powers are inversely proportional to the channel gains and scale exponentially with the learning parameters. Experimental results show that the proposed LCPA algorithm significantly outperforms other power allocation algorithms.
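The asymptotic structure the abstract reports (power inverse in channel gain, exponential in the learning parameters) can be sketched numerically. The closed form below is an illustrative stand-in for that structure, not the paper's exact expression; `gains` and `learn_params` are hypothetical.

```python
# Hypothetical LCPA-style allocation: p_k proportional to exp(b_k) / g_k,
# normalized to a total power budget. Users with weaker channels or harder
# learning tasks (larger b_k) receive more power.
import math

def lcpa_powers(gains, learn_params, p_total):
    raw = [math.exp(b) / g for g, b in zip(gains, learn_params)]
    s = sum(raw)
    return [p_total * r / s for r in raw]

p = lcpa_powers([1.0, 2.0], [1.0, 1.0], 10.0)
```

Here the user with half the channel gain gets twice the power, in contrast to water-filling, which would favor the stronger channel.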
Title: Learning Centric Power Allocation for Edge Intelligence
Pub Date: 2020-06-01 | DOI: 10.1109/ICC40277.2020.9148881
Runyu Lyu, Wenchi Cheng, Wei Zhang, F. Qin
Due to its low energy consumption and high security, near field communication (NFC) has been widely used in various short-range contactless transmission scenarios, such as proximity payment and NFC access control. However, the low data rate of NFC limits its application in scenarios demanding high rates, such as high-resolution fingerprint identification and streaming media transmission. In this paper, we propose an orbital angular momentum (OAM) based NFC system to significantly increase the capacity of NFC systems. With coils arranged circularly at the transmitter and receiver, OAM signals can be transmitted, received, and detected. We then analyze the mutual inductances between the transmit and receive coils to derive the OAM-NFC magneto-inductive channel matrix. Based on the channel matrix, we develop OAM-NFC transmission and detection schemes for NFC multiplexing. We also compare the capacity of the proposed OAM-NFC system with those of SISO and MIMO NFC systems. Simulation results validate the feasibility and capacity enhancement of the proposed OAM-NFC system. How the number of transceiver coils impacts the capacity of the OAM-NFC system is also evaluated.
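A key property behind OAM multiplexing with circularly arranged coils is that the resulting mutual-inductance channel matrix is circulant, so the DFT matrix diagonalizes it and the modes separate at the receiver. The sketch below checks this with hypothetical inductance values; it is not the paper's channel model.

```python
# Circulant mutual-inductance matrix for 4 uniformly spaced coils
# (placeholder coupling values), diagonalized by the unitary DFT matrix.
import numpy as np

N = 4
first_row = np.array([1.0, 0.4, 0.1, 0.4])          # symmetric coupling pattern
H = np.array([np.roll(first_row, k) for k in range(N)])  # circulant channel

F = np.fft.fft(np.eye(N)) / np.sqrt(N)               # unitary DFT matrix
D = F @ H @ F.conj().T                               # diagonal: decoupled OAM modes
off_diag = np.abs(D - np.diag(np.diag(D))).max()
```

The near-zero off-diagonal residue shows each OAM mode sees an independent scalar gain (the eigenvalues, i.e. the DFT of the coupling pattern), which is what makes DFT-based transmission and detection natural here.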
Title: OAM-NFC: A Short-Range High Capacity Transmission Scheme
Pub Date: 2020-06-01 | DOI: 10.1109/ICC40277.2020.9148938
Jing Zhang, Jun Du, Chunxiao Jiang, Yuan Shen, Jian Wang
As a promising technology for improving the computation experience of mobile devices, mobile edge computing (MEC) is becoming an emerging paradigm to meet tremendously increasing computation demands. In this paper, a mobile edge computing system consisting of an edge server and multiple mobile devices with energy harvesting is considered. Specifically, each device decides its offloading ratio and local computation capacity, both of which take continuous values. Each device is equipped with a task-load queue and energy harvesting, which increases the system dynamics and makes the optimal offloading decision time-dependent. To minimize the long-term sum cost of execution time and energy consumption, we develop a continuous-control deep reinforcement learning algorithm for computation offloading. Using the actor-critic learning approach, we propose a centralized learning policy for each device. By incorporating the states of other devices into centralized learning, the proposed method learns to coordinate among all devices. Simulation results validate the effectiveness of the proposed algorithm, which demonstrates superior generalization ability and achieves better performance than discrete-decision deep reinforcement learning methods.
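The per-device objective (a weighted sum of execution time and energy as a function of a continuous offloading ratio) can be sketched as follows. The model and every constant below are illustrative assumptions, not the paper's exact system model; local computing and uplink transmission are assumed to run in parallel.

```python
# Hypothetical MEC cost model: cost(ratio) = w_t * time + w_e * energy,
# with `ratio` the continuous fraction of the task offloaded to the edge.
def task_cost(ratio, task_bits, f_local, rate, w_time=0.5, w_energy=0.5,
              cycles_per_bit=1000.0, kappa=1e-27, tx_power=0.1):
    local_bits = (1.0 - ratio) * task_bits
    off_bits = ratio * task_bits
    t_local = local_bits * cycles_per_bit / f_local          # local compute time
    e_local = kappa * f_local ** 2 * local_bits * cycles_per_bit  # CMOS energy
    t_tx = off_bits / rate                                   # uplink time
    e_tx = tx_power * t_tx                                   # transmit energy
    # local computing and transmission proceed in parallel (assumption)
    return w_time * max(t_local, t_tx) + w_energy * (e_local + e_tx)

c_all_local = task_cost(0.0, 1e6, 1e9, 1e6)   # compute everything locally
c_half = task_cost(0.5, 1e6, 1e9, 1e6)        # offload half the task
```

In this toy setting offloading half the task already beats pure local execution; an actor-critic agent would learn such a ratio per time step from the queue and energy states rather than from a fixed formula.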
Title: Computation Offloading in Energy Harvesting Systems via Continuous Deep Reinforcement Learning
Pub Date: 2020-06-01 | DOI: 10.1109/ICC40277.2020.9149401
Wensheng Zhang, Jingxian Wu, Chengxiang Wang
In this paper, we coin the concept of tensor-computing, which is based on tensor theory and designed for future sixth generation (6G) wireless communication systems. Two types of tensors, namely the spectrum-tensor and the system-tensor, are defined and analysed to develop a new spectrum usage framework for 6G. The spectrum-tensor encapsulates high-dimensional spectrum big data in the format of a compact tensor. The system-tensor summarizes key system performance metrics, including data rate, bandwidth, delay, spectral efficiency, and energy efficiency, in a multi-dimensional tensor. These concepts enable tensor-based computing and analysis with the help of highly efficient tensor-computing tools, such as tensor completion and tensor decomposition. Within the new spectrum usage framework, a value-based spectrum fusion scheme is designed: the maximum system value is achieved under the constraint that the individual value of each single user is guaranteed. The proposed tensor-computing framework builds a bridge between 6G wireless functions and real-world high-dimensional data processing tools, such as TensorFlow and the Tensor Processing Unit (TPU). The authors hope this paper will shed light on tensor theory and open a new research field of tensor-computing for future 6G wireless communications.
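Tools like tensor completion and decomposition typically start from mode-n unfolding. As a minimal sketch, a toy 3-way "spectrum tensor" is unfolded along each mode below; the axes (time x frequency band x site) are hypothetical and not taken from the paper.

```python
# Mode-n unfolding: move the chosen axis to the front, then flatten the
# rest. This matricization is the first step of HOSVD, CP-ALS, and most
# tensor-completion algorithms.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

T = np.arange(24).reshape(2, 3, 4)        # 2 times x 3 bands x 4 sites
U0, U1, U2 = unfold(T, 0), unfold(T, 1), unfold(T, 2)
```

Each unfolding exposes one axis as matrix rows, so standard matrix tools (SVD, nuclear-norm completion) can be applied per mode.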
Title: Tensor-computing-based Spectrum Usage Framework for 6G
Pub Date: 2020-06-01 | DOI: 10.1109/ICC40277.2020.9148622
Masaki Takahashi, Y. Kawamoto, N. Kato, A. Miura, M. Toyoshima
In recent years, expectations for high throughput satellites (HTS) have diversified due to the rapid increase in traffic demands. However, the Ku-band and Ka-band utilized by HTS are becoming increasingly congested. It is necessary to utilize the limited frequency ranges efficiently and to share resources with other communication systems. Digital beamforming (DBF), which offers high spatial flexibility in allocating power resources, is being developed to adapt to the diversification of communication applications. However, it remains unclear how multi-spot beam placement relates to throughput in an HTS communication system equipped with DBF. In this study, we determine how the distances between spot beams in the same frequency band, and between adjacent spot beams in different frequency bands, relate to overall system throughput, and we derive a multi-spot beam arrangement that improves it. The main contributions of this study are the clarification of the relationship between multi-spot beam positions and overall system throughput, and the construction of a novel mathematical model for deriving multi-spot beam arrangements that enhance overall throughput. The effectiveness of our proposal is evaluated through numerical analysis.
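The core trade-off (co-channel beams placed closer together interfere more, lowering SINR and hence throughput) can be sketched with a simple Gaussian beam-gain model. Both the model and all constants are illustrative assumptions, not the paper's link budget.

```python
# SINR at a beam center as a function of the distance to one co-channel
# beam, with a Gaussian roll-off for the interfering beam's gain.
import math

def sinr_db(separation, beamwidth=1.0, snr_db=20.0):
    snr = 10 ** (snr_db / 10)
    interference = math.exp(-(separation / beamwidth) ** 2)  # interferer gain
    return 10 * math.log10(1.0 / (interference + 1.0 / snr))

near = sinr_db(0.5)   # tightly packed co-channel beams
far = sinr_db(3.0)    # well-separated co-channel beams
```

Widening the separation pushes the SINR back toward the noise-limited 20 dB ceiling, which is why the beam arrangement itself becomes an optimization variable.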
Title: Adaptive Multi-Beam Arrangement for Improving Throughput in an HTS Communication System
Pub Date: 2020-06-01 | DOI: 10.1109/ICC40277.2020.9149366
Wei Wu, Erwu Liu, Xinglin Gong, Rui Wang
With the development of precise positioning technology, a growing number of location-based services (LBSs) facilitate everyday life. Most LBSs require a proof of location (PoL) to show that the user satisfies the service requirement, which exposes the user’s privacy. In this paper, we propose a zero-knowledge proof of location (zk-PoL) protocol to better protect the user’s privacy. With the zk-PoL protocol, the user can choose which information to expose to the server, so that hierarchical privacy protection is achieved. The evaluation shows that zk-PoL has excellent security against the main attacks; moreover, its computational efficiency is independent of the input parameters, making zk-PoL well suited to delay-tolerant LBSs.
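The hierarchical-disclosure idea alone (reveal only the location granularity a service needs) can be sketched with per-level hash commitments. This is a deliberately simplified stand-in: a real zk-PoL protocol uses zero-knowledge proofs, and the commitment scheme, levels, and values below are all hypothetical.

```python
# One commitment per granularity level (country, city, street). The user
# publishes all commitments, then opens only the level a service requires.
import hashlib, os

def commit_levels(levels, nonces):
    return [hashlib.sha256((lvl + n).encode()).hexdigest()
            for lvl, n in zip(levels, nonces)]

nonces = [os.urandom(16).hex() for _ in range(3)]
location = ("FR", "Paris", "5 Rue X")
coms = commit_levels(location, nonces)

# A city-level service gets only ("Paris", nonces[1]); the street-level
# commitment stays unopened, so finer detail is never exposed.
city_ok = hashlib.sha256(("Paris" + nonces[1]).encode()).hexdigest() == coms[1]
```

Unlike this sketch, a zero-knowledge construction would also avoid revealing the opened value itself, proving only a predicate such as "inside Paris".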
Title: Blockchain Based Zero-Knowledge Proof of Location in IoT
Pub Date: 2020-06-01 | DOI: 10.1109/ICC40277.2020.9148916
E. Viegas, A. Santin, V. Cogo, Vilmar Abreu
Despite the promising results of machine learning for network-based intrusion detection, current techniques are not widely deployed in real-world environments. In general, proposed detection models quickly become obsolete and thus generate unreliable classifications over time. In this paper, we propose a new reliable model for semi-supervised intrusion detection that uses a verification technique to provide reliable classifications over time, even in the absence of model updates. Additionally, we couple this verification technique with semi-supervised learning to autonomously update the underlying machine learning models without human assistance. Our experiments consider a full year of real network traffic and demonstrate that our solution maintains its accuracy over time without model updates while rejecting only 10.6% of instances on average. Moreover, when autonomous (non-human-assisted) model updates are performed, the average rejection rate drops to just 3.2% without affecting the accuracy of our solution.
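The verification-with-rejection idea can be sketched as a confidence gate: only classifications the model is sure about are emitted, and the rest are rejected (e.g. queued for the semi-supervised update). The classifier and threshold below are stand-ins, not the paper's model.

```python
# Reject-option classification: emit "attack"/"normal" only when the
# predicted attack probability is far from 0.5, otherwise reject.
def classify_with_rejection(probs, threshold=0.8):
    decisions = []
    for p in probs:
        if p >= threshold:
            decisions.append("attack")
        elif p <= 1.0 - threshold:
            decisions.append("normal")
        else:
            decisions.append("reject")   # too uncertain to trust
    return decisions

out = classify_with_rejection([0.95, 0.10, 0.55])
```

Raising the threshold trades a higher rejection rate for more reliable accepted classifications, which is exactly the 10.6% vs 3.2% trade-off the abstract reports across update policies.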
Title: A Reliable Semi-Supervised Intrusion Detection Model: One Year of Network Traffic Anomalies
Pub Date: 2020-06-01 | DOI: 10.1109/ICC40277.2020.9149116
A. S. D. Sena, D. B. D. Costa, Z. Ding, P. Nardelli, U. Dias, C. Papadias
In this paper, we propose a novel successive sub-array activation (SSAA) diversity scheme for massive multiple-input multiple-output (MIMO) systems combined with non-orthogonal multiple access (NOMA). A single-cell multi-cluster downlink scenario is considered, in which the base station (BS) sends redundant symbols through multiple transmit sub-arrays to multi-antenna receivers. An in-depth analysis is carried out, in which an exact closed-form expression for the outage probability is derived. A high signal-to-noise ratio (SNR) outage approximation is also obtained, and the system diversity order is determined. Our results show that the proposed scheme outperforms conventional full-array massive MIMO setups.
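Closed-form outage expressions of this kind are typically validated by Monte Carlo simulation. As a minimal sketch of that methodology, the check below uses a single Rayleigh-fading link rather than the paper's full massive MIMO-NOMA system; the SNR and threshold values are hypothetical.

```python
# Monte Carlo outage probability vs. the closed form for one Rayleigh link:
# P_out = P(SNR * |h|^2 < gamma_th) = 1 - exp(-gamma_th / SNR).
import numpy as np

rng = np.random.default_rng(0)
snr, gamma_th = 10.0, 2.0                       # linear SNR and SINR threshold
g = rng.exponential(scale=1.0, size=200_000)    # |h|^2 samples (Rayleigh fading)
p_out_mc = np.mean(snr * g < gamma_th)
p_out_exact = 1.0 - np.exp(-gamma_th / snr)     # closed form
```

The simulated and exact values agree to within Monte Carlo noise; in the paper the same comparison is made against the derived SSAA outage expression and its high-SNR approximation.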
Title: Successive Sub-Array Activation for Massive MIMO-NOMA Networks