Pub Date : 2020-06-01 | DOI: 10.1109/ICC40277.2020.9148871
Be Scalable and Rescue My Slices During Reconfiguration
Adrien Gausseran, F. Giroire, B. Jaumard, J. Moulierac
Modern 5G networks promise more bandwidth, lower delay, and more flexibility for an ever-increasing number of users and applications, with Software Defined Networking, Network Function Virtualization, and Network Slicing as key enablers. Within that context, efficiently provisioning network and cloud resources for a wide variety of applications with dynamic user demands is a real challenge. In this work, we consider the problem of network slice reconfiguration. Reconfiguring network slices from time to time reduces network operational costs and increases the number of slices that can be managed within the network. However, it impacts users' Quality of Service during the reconfiguration step. To address this issue, we study solutions implementing a make-before-break scheme. We propose new models and scalable algorithms (relying on column generation techniques) that solve large instances in a few seconds.
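The make-before-break scheme the abstract refers to is easy to illustrate. The following Python sketch is a toy illustration, not the paper's column-generation model: the link capacities, paths, and demand values are invented, and the point is only the ordering of operations, i.e., capacity on the new route is reserved before the old route is released, so the slice never loses service.

```python
# Toy make-before-break reroute: reserve the new path first, release the old
# path only after traffic has switched over. All names and values here are
# illustrative assumptions.

def has_capacity(link_cap, path, demand):
    return all(link_cap[link] >= demand for link in path)

def reserve(link_cap, path, demand, sign=+1):
    for link in path:
        link_cap[link] -= sign * demand  # sign=-1 releases capacity

def make_before_break(link_cap, old_path, new_path, demand):
    """Reroute `demand` from old_path to new_path without interruption."""
    # "Make": the new path must fit while the old one still carries traffic.
    if not has_capacity(link_cap, new_path, demand):
        return False  # postpone reconfiguration; QoS on the old path is kept
    reserve(link_cap, new_path, demand)
    # ... traffic switches to new_path here ...
    # "Break": only now is the old path's capacity returned to the network.
    reserve(link_cap, old_path, demand, sign=-1)
    return True

link_cap = {("a", "b"): 10, ("b", "c"): 4, ("a", "c"): 10}
print(make_before_break(link_cap, [("a", "b"), ("b", "c")], [("a", "c")], 3))
```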
Pub Date : 2020-06-01 | DOI: 10.1109/icc40277.2020.9149189
Network-Level System Performance Prediction Using Deep Neural Networks with Cross-Layer Information
Qi Cao, Siliang Zeng, Man-On Pun, Yi Chen
Predicting wireless network-level performance, such as network capacity, average user data rate, and 5th-percentile user data rate, is a million-dollar question. In the literature, pioneering works have exploited either information-theoretic techniques on physical layer (PHY) information or Markov chain techniques on medium access control (MAC) layer information. However, since these mathematical model-driven approaches usually focus on a small part of the network structure, they cannot characterize whole-network performance. In this paper, we propose a data-driven machine learning approach to tackle this problem. More specifically, both PHY and MAC information is fed into a deep neural network (DNN) specifically designed for network-level performance prediction. Simulation results show that network-level performance can be accurately predicted at the cost of higher computational complexity.
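As a concrete, deliberately simplified picture of such a cross-layer predictor, the sketch below builds a small fully connected network in NumPy that concatenates assumed PHY features (e.g., SINR statistics) with assumed MAC features (e.g., load and scheduling counters) and outputs the three targets named in the abstract. The layer sizes and feature dimensions are assumptions, and the weights are untrained; the paper's DNN design is not reproduced here.

```python
# Minimal cross-layer MLP sketch: concatenated PHY + MAC features in, three
# network-level targets out (capacity, mean user rate, 5th-percentile rate).
# Architecture and dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def init_layer(n_in, n_out):
    return rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_out)), np.zeros(n_out)

class CrossLayerMLP:
    def __init__(self, n_phy=16, n_mac=8, hidden=(64, 32), n_out=3):
        sizes = (n_phy + n_mac,) + hidden + (n_out,)
        self.layers = [init_layer(a, b) for a, b in zip(sizes, sizes[1:])]

    def predict(self, phy, mac):
        x = np.concatenate([phy, mac])        # cross-layer feature fusion
        for W, b in self.layers[:-1]:
            x = relu(x @ W + b)
        W, b = self.layers[-1]
        return x @ W + b                      # untrained, so outputs are random

model = CrossLayerMLP()
print(model.predict(rng.normal(size=16), rng.normal(size=8)))
```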
Pub Date : 2020-06-01 | DOI: 10.1109/ICC40277.2020.9148753
Sub-THz Wideband System Employing 1-bit Quantization and Temporal Oversampling
Peter Neuhaus, Meik Dörpinghaus, H. Halbauer, Stefan Wesemann, Martin Schlüter, Florian Gast, G. Fettweis
Wireless communication systems beyond 5G are foreseen to utilize the large available bandwidths above 100 GHz. However, the power consumption of analog-to-digital converters (ADCs) for such systems is expected to be prohibitively high, because it grows quadratically with the sampling rate at high amplitude resolutions. Shifting resolution from the amplitude domain to the time domain, i.e., reducing the amplitude resolution while temporally oversampling w.r.t. the Nyquist rate, is expected to be more energy efficient. To this end, we propose a novel low-cost sub-terahertz system employing zero-crossing modulation (ZXM) transmit signals in combination with 1-bit quantization and temporal oversampling at the receiver. We derive and evaluate new finite-state machines for efficient de-/modulation of ZXM transmit signals, i.e., for efficient bit-sequence to symbol-sequence de-/mapping. Furthermore, the coded performance of the system is evaluated over a wideband line-of-sight channel.
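The receiver-side principle (1-bit quantization plus temporal oversampling) can be demonstrated in a few lines. In the NumPy sketch below, the waveform and the oversampling factor are toy assumptions unrelated to the paper's ZXM design; the point is that after the sign operation destroys amplitude information, the zero-crossing positions remain, resolvable to one oversampled period.

```python
# 1-bit ADC + temporal oversampling: only the sign of the signal is kept,
# and information survives in where the sign changes. Toy waveform/values.
import numpy as np

M = 8                                  # oversampling factor w.r.t. Nyquist
t = np.arange(0, 32, 1.0 / M)          # time axis, Nyquist interval = 1

x = np.sin(2 * np.pi * 0.11 * t) + 0.3 * np.sin(2 * np.pi * 0.05 * t + 1.0)

r = np.sign(x)                         # 1-bit quantization: amplitudes -> +/-1
crossings = t[1:][r[1:] != r[:-1]]     # sign flips locate the zero crossings
print(len(crossings), "zero crossings, each located to within",
      1.0 / M, "of a Nyquist interval")
```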
Pub Date : 2020-06-01 | DOI: 10.1109/ICC40277.2020.9149214
Power Minimizing BBU-RRH Group Based Mapping in C-RAN with Constrained Devices
F. Marzouk, Tafseer Akhtar, I. Politis, J. Barraca, A. Radwan
C-RAN is an advanced mobile networking architecture that promises to tackle various challenging aspects of 5G, such as increasing energy efficiency and providing high capacity. Indeed, C-RAN paves the way toward better energy efficiency by centralizing baseband processing at cloud-computing-based servers (the BBU pool). In this paper, we propose an RRH group-based mapping (RGBM) scheme that aims to minimize power consumption at the BBU pool while respecting users' QoS and BBU capacity constraints. To achieve this, the proposed scheme uses two key steps: i) forming RRH groups to improve the QoS of weak users, and ii) forming RRH clusters mapped so as to require a minimal number of BBUs. The scheme uses an efficient greedy heuristic to solve the optimization problem. Its performance was evaluated through simulations, which indicate significant gains in BBU minimization, power reduction, and energy efficiency over well-studied legacy solutions, while preserving QoS constraints.
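The second step, packing RRH clusters onto as few BBUs as possible, is essentially a bin-packing problem, and a greedy first-fit-decreasing pass gives a feel for it. The sketch below is a stand-in under that assumption, not the paper's RGBM heuristic; the loads and BBU capacity are made-up numbers.

```python
# Greedy first-fit-decreasing packing of RRH-group loads onto BBUs, so that
# the number of active BBUs (and hence power) stays small. Illustrative only.

def map_groups_to_bbus(group_loads, bbu_capacity):
    """Return one list of (group_id, load) pairs per BBU."""
    bbus = []  # each entry: [remaining_capacity, [(group_id, load), ...]]
    for gid, load in sorted(enumerate(group_loads), key=lambda g: -g[1]):
        if load > bbu_capacity:
            raise ValueError(f"group {gid} exceeds a single BBU's capacity")
        for bbu in bbus:                    # first BBU with room wins
            if bbu[0] >= load:
                bbu[0] -= load
                bbu[1].append((gid, load))
                break
        else:                               # no room anywhere: power on a BBU
            bbus.append([bbu_capacity - load, [(gid, load)]])
    return [groups for _, groups in bbus]

print(map_groups_to_bbus([0.7, 0.2, 0.5, 0.4, 0.3], bbu_capacity=1.0))
```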
Pub Date : 2020-06-01 | DOI: 10.1109/ICC40277.2020.9149405
Quadratic Programming Decoder for Binary LDPC Codes via ADMM Technique with Linear Complexity
Jing Bai, Yongchao Wang
In this paper, we develop an efficient quadratic programming (QP) decoding algorithm via the alternating direction method of multipliers (ADMM) for binary low-density parity-check (LDPC) codes. Its main contributions are as follows: first, by transforming the three-variable parity-check equation into an equivalent expression, we relax the maximum-likelihood decoding problem to a quadratic program. Second, ADMM is exploited to design a solver for the resulting QP decoding model. Compared with existing ADMM-based mathematical programming (MP) decoding algorithms, our algorithm eliminates the complex Euclidean projection onto the check polytope. Third, we prove that the proposed algorithm satisfies the favorable all-zeros assumption property. Moreover, by exploiting the structure of the QP model, we show that the per-iteration decoding complexity is linear in the LDPC code length. Simulation results demonstrate that the proposed QP decoder attains better error-correction performance than the sum-product BP decoder and costs the least decoding time among state-of-the-art ADMM-based MP decoding algorithms.
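The complexity argument hinges on the projection step inside ADMM. The sketch below shows a generic ADMM loop for a box-constrained QP: with the splitting x = z, the z-update is a plain clip to [0, 1]^n, which costs O(n), in contrast to the Euclidean projection onto the check polytope used by earlier ADMM decoders. P, q, rho, and the iteration count are toy assumptions; this is not the paper's decoding model.

```python
# Generic ADMM for min 0.5 x'Px + q'x subject to x in [0,1]^n. The cheap
# z-update (a clip) stands in for the expensive polytope projection that the
# paper's formulation avoids. All numbers are illustrative.
import numpy as np

def admm_box_qp(P, q, rho=1.0, iters=200):
    n = len(q)
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    A = P + rho * np.eye(n)                          # x-update system matrix
    for _ in range(iters):
        x = np.linalg.solve(A, -q + rho * (z - u))   # quadratic x-update
        z = np.clip(x + u, 0.0, 1.0)                 # O(n) box projection
        u = u + x - z                                # dual update on x = z
    return z

P = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-1.0, 2.0])
print(admm_box_qp(P, q))    # converges to about [0.5, 0.0] for this toy QP
```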
Pub Date : 2020-06-01 | DOI: 10.1109/icc40277.2020.9148819
Verifiable Edge Computing for Indoor Positioning
Shushu Liu, Zheng Yan
Edge computing has been widely adopted in many systems, thanks to its advantages of low latency and alleviating heavy request loads from end users. Its integration with indoor positioning is a promising research topic. Unlike a traditional positioning system, where a user queries remotely deployed positioning services provided by a Location Information Service Provider (LIS), in an edge computing-based system the LIS outsources its service to an edge device, and the user obtains the service by directly accessing that device. Despite the benefits of edge computing, several issues around service outsourcing remain open. One of them is how to ensure that the outsourced service is executed honestly by the edge device; the current literature has not yet seriously studied this issue with a feasible solution. In this paper, we design a verification scheme that solves this open problem for indoor positioning based on edge computing. By injecting a specially designed dataset into a trained machine learning-based positioning model, the functionality of the outsourced model on the edge device can be verified through the model's prediction accuracy on this dataset. Verification succeeds only when the prediction accuracy passes a threshold. In experiments, we provide extensive empirical evidence using state-of-the-art positioning models on real-world datasets to prove the effectiveness of the proposed scheme, and we investigate the effects of different factors.
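The verification step can be pictured as follows: the LIS retains a small set of injected samples whose labels the genuine model is known to reproduce, and the edge-hosted model passes only if its accuracy on that set clears a threshold. The interface, threshold, and toy "models" below are assumptions for illustration, not the paper's construction.

```python
# Accuracy-threshold check over an injected verification set. A model that
# kept the injected behavior passes; a replaced/tampered model fails.

def verify_outsourced_model(predict, verification_set, threshold=0.9):
    """predict: callable mapping a fingerprint sample to a location label."""
    hits = sum(predict(x) == y for x, y in verification_set)
    accuracy = hits / len(verification_set)
    return accuracy >= threshold, accuracy

injected = [((i, i + 1), f"room-{i % 3}") for i in range(20)]

honest = dict(injected)                       # memorized injected samples
print(verify_outsourced_model(lambda x: honest.get(x), injected))  # passes

tampered = lambda x: "room-0"                 # lost the injected behavior
print(verify_outsourced_model(tampered, injected))                 # fails
```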
Pub Date : 2020-06-01 | DOI: 10.1109/icc40277.2020.9148701
Timemaps for Improving Performance of LoRaWAN
Thanh-Hai To, A. Duda
In this paper, we propose Timemaps, a new scheduling scheme to improve the performance of LoRaWAN. The idea is for a Gateway to build a temporal map of all transmissions of IoT devices so as to schedule transmissions and avoid collisions. When performing a Join operation, a device includes its traffic description in the request. Based on the traffic descriptions from all devices, the Gateway constructs a collision-free schedule for channel access. In the Join accept, the Gateway includes the temporal position of the device's transmissions in the schedule, the Spreading Factor (SF) to use based on the Signal-to-Noise Ratio (SNR) measured at the Gateway, and the channel to use. We evaluate our proposal with the NS-3 simulator for both perfect and drifting clocks, as well as for homogeneous and inhomogeneous node densities. The simulation takes into account quasi-orthogonality and the capture effect. The results show that Timemaps achieves remarkably higher PDR and a considerably lower collision ratio than LoRaWAN, at the cost of slightly increased energy consumption.
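A toy version of the temporal map makes the mechanism concrete. In the sketch below, all devices are assumed to share one reporting period and one channel, so the map reduces to non-overlapping offsets on a ring; each offset would be returned in the Join accept. The paper's scheme additionally handles multiple SFs and channels and guards against clock drift, none of which is modeled here.

```python
# Build a collision-free timemap from declared traffic descriptions
# (device_id, airtime), assuming a common period and a single channel.

def build_timemap(devices, period):
    """Return {device_id: start_offset} with no overlapping transmissions."""
    schedule, cursor = {}, 0.0
    for dev_id, airtime in devices:
        if cursor + airtime > period:
            raise ValueError("period exhausted: assign another channel or SF")
        schedule[dev_id] = cursor    # offset sent back in the Join accept
        cursor += airtime            # next slot starts where this one ends
    return schedule

print(build_timemap([("d1", 0.3), ("d2", 0.5), ("d3", 0.2)], period=1.5))
# {'d1': 0.0, 'd2': 0.3, 'd3': 0.8}
```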
Pub Date : 2020-06-01 | DOI: 10.1109/ICC40277.2020.9149404
A Novel Clustering Scheme for Heterogeneous Vehicular Networks
A. Jalooli, Kuilin Zhang, Min Song, Wenye Wang
Effective clustering is vital to mitigating routing scalability and reliability issues in heterogeneous vehicular networks. In this paper, we propose an adaptive clustering scheme to maximize cluster stability in vehicular networks. The scheme uses the predicted driving behavior of vehicles over a time horizon to maximize the clusters' lifetime. To this end, we first define the stability degree of vehicles by exploiting the unique aspects of vehicular environments. We then formulate clustering as an optimization problem, which is solved within a rolling-horizon framework during cluster formation. Our scheme is based on a heterogeneous vehicular network architecture that allows dedicated short-range communication and cellular networks to coexist for vehicular communications. Simulation results demonstrate that our scheme significantly outperforms alternative clustering algorithms in overall cluster lifetime under different traffic conditions. Our scheme can also be used to provide a well-grounded understanding of the optimality of existing and future distributed clustering algorithms.
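One plausible reading of the stability degree is sketched below: on a straight road segment, a candidate head's score is the predicted time its neighbors remain within communication range over the horizon, estimated from current positions and speeds. The definition, range, and horizon are assumptions for illustration, not the paper's exact formulation.

```python
# Score cluster-head candidates by predicted co-travel time with neighbors
# over a rolling horizon (1-D road model; all parameters are toy values).

def time_in_range(pos_a, v_a, pos_b, v_b, comm_range, horizon, steps=100):
    """Approximate time within `horizon` that the pair stays in range."""
    dt, total = horizon / steps, 0.0
    for k in range(steps):
        gap = (pos_a - pos_b) + (v_a - v_b) * (k * dt)
        if abs(gap) <= comm_range:
            total += dt
    return total

def stability_degree(head, others, comm_range=300.0, horizon=60.0):
    pos, v = head
    return sum(time_in_range(pos, v, p, u, comm_range, horizon)
               for p, u in others)

vehicles = [(0.0, 30.0), (50.0, 29.0), (120.0, 33.0), (-80.0, 31.0)]
head = max(vehicles,
           key=lambda h: stability_degree(h, [o for o in vehicles if o != h]))
print("most stable cluster head:", head)
```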
Pub Date : 2020-06-01 | DOI: 10.1109/ICC40277.2020.9148971
An Adaptive Hybrid Beamforming Scheme for Time-Varying Wideband Massive MIMO Channels
Anil Kurt, G. M. Guvensen
In this paper, adaptive hybrid beamforming methods are proposed for millimeter-wave massive MIMO systems with single-carrier wideband transmission in uplink data mode. A statistical analog beamformer is adaptively constructed in slow time while the channel is time-varying and erroneously estimated. The proposed recursive filtering approach is shown to bring remarkable robustness against estimation errors. Then, analytical modifications are applied to an analog beamformer design method, and approximate expressions are obtained for channel covariance matrices that decouple the angular spread and center angle of multipath components. The resulting adaptive construction methods use only the estimated power levels on angular patches and are shown to be very efficient: they reduce computational complexity significantly while performance remains almost the same.
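The slow-time statistical construction can be caricatured with a recursive covariance estimate: noisy channel snapshots update an exponentially weighted spatial covariance, whose dominant eigenvectors then serve as analog beams. The forgetting factor, array size, and the i.i.d. snapshots below are assumptions; the paper's method further decouples angular spread from the center angle of each multipath component, which this sketch does not attempt.

```python
# Recursive (exponentially weighted) covariance tracking followed by an
# eigenbeam pick for the analog stage. Sizes and statistics are toy values.
import numpy as np

rng = np.random.default_rng(1)
N, n_rf, alpha = 32, 4, 0.95          # antennas, RF chains, forgetting factor

R = np.zeros((N, N), dtype=complex)
for _ in range(200):                  # slow-time snapshots of noisy estimates
    h = rng.normal(size=N) + 1j * rng.normal(size=N)
    R = alpha * R + (1 - alpha) * np.outer(h, h.conj())

eigvals, eigvecs = np.linalg.eigh(R)  # ascending eigenvalues, R is Hermitian
F_analog = eigvecs[:, -n_rf:]         # n_rf strongest statistical eigenbeams
print(F_analog.shape, "power captured:",
      float(eigvals[-n_rf:].sum() / eigvals.sum()))
```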
Pub Date : 2020-06-01 | DOI: 10.1109/ICC40277.2020.9149276
TPDD: A Two-Phase DDoS Detection System in Software-Defined Networking
Yi Shen, Chunming Wu, Dezhang Kong, Mingliang Yang
Distributed Denial of Service (DDoS) attacks are among the most severe threats to current network security. As a new network architecture, Software-Defined Networking (SDN) draws notable attention from both industry and academia. Characteristics of SDN such as centralized management and flow-based traffic monitoring make it an ideal platform for defending against DDoS attacks. When designing a network intrusion detection system (NIDS) in SDN, obtaining fine-grained flow information with minimal overhead on the SDN architecture is a problem to be solved. In this paper, we propose TPDD, a two-phase system for detecting DDoS attacks in SDN. In the first phase, we utilize the characteristics of SDN to collect coarse-grained flow information from the core switches and locate the potential victim. In the second phase, we monitor the edge switches close to the potential victim to obtain finer-grained traffic information. The collection method of each phase fully considers the impact on the bandwidth between the controller and switches; without modifying existing flow rules, the collection module can obtain sufficient traffic information. Using entropy-based and machine learning-based methods, the detection module can effectively detect anomalies and determine whether the potential victim marked in the first phase is the target of an attack. Experimental results show that TPDD detects DDoS attacks effectively with little overhead.
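The entropy signal used in detection is simple to reproduce. In the sketch below, the destination-address entropy of a traffic window drops sharply when flows concentrate on one host, which flags the potential victim of the first phase; the window contents and threshold are illustrative assumptions.

```python
# Destination-entropy drop as a first-phase DDoS indicator: dispersed traffic
# has high entropy, an attack on one host pulls it down. Toy data/threshold.
from collections import Counter
from math import log2

def dst_entropy(dst_ips):
    total = len(dst_ips)
    return -sum(c / total * log2(c / total)
                for c in Counter(dst_ips).values())

def potential_victim(dst_ips, threshold=1.0):
    if dst_entropy(dst_ips) >= threshold:
        return None                                  # traffic looks benign
    return Counter(dst_ips).most_common(1)[0][0]     # most-targeted host

normal = [f"10.0.0.{i % 8}" for i in range(64)]
attack = normal + ["10.0.0.9"] * 500
print(dst_entropy(normal), potential_victim(normal))   # ~3.0 bits, None
print(dst_entropy(attack), potential_victim(attack))   # <1 bit, '10.0.0.9'
```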