Header Compression Across Entire Network Without Internet Protocol Saves Bandwidth and Latency
Pub Date: 2020-09-01 | DOI: 10.1109/5GWF49715.2020.9221185
William A. Flanagan
Session Bridging, a new form of header compression, significantly reduces bandwidth requirements in multiple use cases. This innovation separates a connection into end segments, each handled by a separate protocol instance using normal headers. A transport segment between them carries payloads with compressed headers. The process dramatically reduces the core bandwidth per connection for conversational applications and extends the savings into access circuits as well. Session Bridging takes advantage of MPLS or virtual Ethernet connectivity in existing networks. Custom code on a working proof of concept confirms viability for conversational connectivity applications (real-time voice, live video conferencing, telemedicine, and gaming) where latency is a key quality measure. Extensions save additional bandwidth on links between routers and over a Radio Access Network.
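To make the scale of the bandwidth claim concrete, the back-of-the-envelope sketch below compares one voice stream carrying full IPv4/UDP/RTP headers against the same stream with a small compressed header on the transport segment. The packet rate, payload size, and the 4-byte compressed header are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope bandwidth comparison for one voice stream.
# Assumed values (not taken from the paper): 50 packets/s, 20-byte payload,
# 40-byte IPv4+UDP+RTP header, 4-byte compressed header on the transport segment.
PACKETS_PER_SECOND = 50
PAYLOAD_BYTES = 20
FULL_HEADER_BYTES = 40       # IPv4 (20) + UDP (8) + RTP (12)
COMPRESSED_HEADER_BYTES = 4  # hypothetical compressed header size

def stream_kbps(header_bytes: int) -> float:
    """Bandwidth of one voice stream in kbit/s for a given header size."""
    return (header_bytes + PAYLOAD_BYTES) * PACKETS_PER_SECOND * 8 / 1000

full = stream_kbps(FULL_HEADER_BYTES)
compressed = stream_kbps(COMPRESSED_HEADER_BYTES)
print(f"uncompressed: {full:.1f} kbit/s, compressed: {compressed:.1f} kbit/s, "
      f"saving: {100 * (1 - compressed / full):.0f}%")   # 24.0 vs 9.6 kbit/s, 60%
```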
5G-NR Cross Layer Rate Adaptation for VoIP and Foreground/Background Applications in UE
Pub Date: 2020-09-01 | DOI: 10.1109/5GWF49715.2020.9221020
Jyotirmoy Karjee, Shubhneet Khatter, Diprotiv Sarkar, Hema Lakshman C. Tammineedi, Ashok Kumar Reddy Chavva
The recommended bit rate (RBR) is assigned by the gNodeB to the user equipment (UE) through a MAC control element (CE) to provide bit rate information in 5G New Radio (NR). At the UE, the bit rate information is passed on to the upper layers (i.e., transport or application) for a specific logical channel, in either uplink or downlink. However, given a specific application's target rate, the UE does not know how to efficiently utilize the RBR at the lower layer and distribute the resulting throughput at the upper layer for that logical channel. To address this problem, we propose a cross layer rate adaptation (CLRA) mechanism for the UE. CLRA consists of two parts. In the first part, CLRA utilizes the RBR received from the gNodeB to compute the throughput at the lower layer. In the second part, CLRA distributes the throughput received from the lower layer among upper-layer applications according to their target rates. CLRA provides an intelligent mechanism to distribute throughput among foreground/background applications and a voice over internet protocol (VoIP) application using learning-based codec adaptation. We conduct experiments on a Samsung Galaxy S8 device, together with simulations, to validate the CLRA mechanism for applications in 5G NR.
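The paper's allocation rule is not reproduced here; the toy sketch below only illustrates the general idea of a UE splitting a gNodeB-recommended bit rate between a VoIP codec target and foreground/background traffic. The codec rates and the 70/30 foreground/background share are assumptions for illustration.

```python
# Toy illustration of a cross-layer rate split: the UE receives a recommended
# bit rate (RBR) from the gNodeB, satisfies a VoIP codec target first (codec
# adaptation), then splits the remainder between foreground and background
# applications. Proportions and codec rates are illustrative assumptions.
VOIP_CODEC_RATES_KBPS = [23.85, 12.65, 6.6]  # e.g. AMR-WB-like modes, highest first

def split_rbr(rbr_kbps: float, fg_share: float = 0.7):
    # Pick the highest codec rate the RBR can sustain.
    voip = next((r for r in VOIP_CODEC_RATES_KBPS if r <= rbr_kbps), 0.0)
    remaining = max(rbr_kbps - voip, 0.0)
    foreground = remaining * fg_share
    background = remaining - foreground
    return {"voip": voip, "foreground": foreground, "background": background}

# Roughly {'voip': 23.85, 'foreground': 53.3, 'background': 22.85} for 100 kbit/s:
print(split_rbr(100.0))
```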
Swivel low cost prototype and Automatized Measurment Setup to Determine 5G and RFID Arrays Radiation Pattern
Pub Date: 2020-09-01 | DOI: 10.1109/5GWF49715.2020.9221127
V. Mota, V. P. Magri, T. Ferreira, L. Matos, P. Castellanos, Maurício W. B. Silva, Luciana S. Briggs
A low-cost swivel prototype with an alternative measurement setup, using one pair of identical antennas, is proposed to obtain the radiation pattern of millimeter-wave antennas and antenna arrays applied to 5G and RFID services. The prototype uses a stepper motor controlled by an Arduino Uno, which allows 360° rotation. In the measurement setup, the transmitter antenna is connected to a signal generator and the receiver antenna to a signal analyzer, both controlled through a GPIB-USB interface. LabVIEW controls the equipment and measures the received power level, and MATLAB, called from the same LabVIEW VI, plots the radiation pattern in rectangular and polar forms. The simulated and measured results are compared and validated, both in a real environment and in an anechoic chamber.
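A minimal sketch of the rotate-and-measure loop such a setup implies is shown below. The serial port, GPIB address, SCPI query string, and the Arduino command protocol are placeholders, and the actual system uses LabVIEW and MATLAB rather than Python; this only illustrates the control flow.

```python
# Sketch of the rotate-and-measure loop: step the swivel, wait for it to settle,
# read the received power from the analyzer, and record (angle, power).
# Port names, the SCPI query, and the "ROT" firmware command are assumptions.
import time
import pyvisa
import serial

STEP_DEG = 5
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)               # stepper controller
analyzer = pyvisa.ResourceManager().open_resource("GPIB0::18::INSTR")  # signal analyzer

pattern = []  # list of (angle_deg, received_power_dBm)
for angle in range(0, 360, STEP_DEG):
    arduino.write(f"ROT {STEP_DEG}\n".encode())          # hypothetical firmware command
    time.sleep(0.5)                                      # let the swivel settle
    power_dbm = float(analyzer.query("CALC:MARK1:Y?"))   # placeholder SCPI query
    pattern.append((angle, power_dbm))

print(pattern[:5])
```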
On Using Edge Servers in 5G Satellite Networks
Pub Date: 2020-09-01 | DOI: 10.1109/5GWF49715.2020.9221366
Debabrata Dalai, Sarath Babu, B. S. Manoj
Satellite networks are expected to be an important component in 5G mobile communication systems. However, high propagation delay, limited bandwidth, and the uncertainty of orbital parameters make 5G-satellite integration a challenging task. Satellite Edge Servers (SESes), along with Software Defined Networking (SDN), form one of the major areas of research due to their user proximity and their role in reducing end-to-end latency as required by the 5G standards. In this paper, we propose a novel Satellite Edge Computing (SEC) framework for 5G-satellite integration that enables multi-layer caching as well as inter-satellite cache exchange. The efficacy of the proposed framework is analyzed using case studies involving issues such as high mobility, dwell time, and cache prefetching. We developed a Python-based discrete event satellite network simulator to study the performance of our framework in terms of end-to-end delay and cache hit ratio. Simulation results show that our framework achieves an end-to-end delay of 40.17 ms with a cache hit ratio of 87.32%.
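As a rough illustration of why edge caching cuts end-to-end delay, the sketch below mixes an assumed cache-hit delay with an assumed miss delay at a given hit ratio; the numbers are illustrative and are not the paper's simulation parameters.

```python
# Toy delay model in the spirit of the evaluation above: a request served from
# a satellite edge cache avoids the extra hop to a ground server. Delay values
# and hit ratio are illustrative assumptions, not the paper's figures.
def mean_e2e_delay(hit_ratio: float, hit_delay_ms: float, miss_delay_ms: float) -> float:
    """Expected end-to-end delay as a mix of cache hits and misses."""
    return hit_ratio * hit_delay_ms + (1.0 - hit_ratio) * miss_delay_ms

# e.g. 87% of requests answered by the satellite edge (~30 ms) and the rest
# fetched from a ground server (~110 ms):
print(f"{mean_e2e_delay(0.87, 30.0, 110.0):.1f} ms")  # ~40.4 ms
```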
Migration and Interworking between 4G and 5G
Pub Date: 2020-09-01 | DOI: 10.1109/5GWF49715.2020.9221021
Prakash Suthar, Vivek Agarwal, Rajaneesh Shetty, Anil Jangam
The introduction of 5G provides an opportunity for mobile operators to build, integrate, and upgrade existing infrastructure to enrich their existing services and enable new business use cases. Mobile operators want to leverage their existing infrastructure as much as possible so that the cost of deploying a new network is kept low and existing services continue with seamless interworking. The main goal of this paper is to provide a system architecture and methods for interworking and migration between 4G and 5G mobile technologies. Some of the key challenges faced in rolling out 5G are: (a) the ability to serve 4G, 5G Non-standalone (NSA), and Standalone (SA) subscribers; (b) integration of the new 5G radio without overloading the existing network; (c) enabling 5G devices with new applications and enhanced quality of service (QoS); and (d) introducing network slicing to enrich the 5G experience. This paper identifies different interworking scenarios and their associated challenges, and discusses possible solutions. In summary, mobile service providers must define their transformation journey based on the 3G/4G network they have and the new services that need to be enabled, and then develop a 5G architecture with smooth integration.
Time-Packing as Enabler of Optical Feeder Link Adaptation in High Throughput Satellite Systems
Pub Date: 2020-09-01 | DOI: 10.1109/5GWF49715.2020.9221139
J. Bas, A. Dowhuszko
This paper studies the data rate that a High Throughput Satellite (HTS) system with a fully-regenerative payload can achieve when using an intensity modulation/direct detection optical feeder link. A low-order M-ary Pulse Amplitude Modulation (M-PAM) with time-packing is used to modulate the intensity of the laser diode beam, making use of an external Mach-Zehnder modulator. These M-PAM symbols are recovered on board the satellite with the aid of a photodetector and are then encapsulated into the 5G radio frame of the access link. The M-PAM modulation order and the overlapping factor of time-packing are jointly selected to tackle the impact of slowly-varying weather conditions. Moreover, the inter-symbol interference that time-packing introduces is mitigated at the receiver using a Viterbi equalizer. As expected, time-packing enables a finer granularity in the link adaptation capability of the optical feeder link, making it possible to adjust its spectral efficiency according to the moderate attenuation that thin cloud layers introduce.
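The standard faster-than-Nyquist relation behind this finer rate granularity can be sketched as follows: packing M-PAM symbols at a fraction tau of the Nyquist interval scales the raw spectral efficiency by 1/tau, at the price of inter-symbol interference that the receiver must equalize. The values below are illustrative, not taken from the paper.

```python
# Illustrative spectral-efficiency arithmetic for M-PAM with time-packing:
# compressing the symbol spacing to a fraction tau of the Nyquist interval
# raises the raw rate by 1/tau (ISI is left to the equalizer, e.g. Viterbi).
import math

def raw_spectral_efficiency(m: int, tau: float) -> float:
    """Raw bits per Nyquist symbol interval for M-PAM packed with factor tau <= 1."""
    return math.log2(m) / tau

for m in (2, 4, 8):
    for tau in (1.0, 0.9, 0.8):
        print(f"M={m}, tau={tau:.1f}: {raw_spectral_efficiency(m, tau):.2f} bit/interval")
```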
Scalable 5G Signal Processing on Multiprocessor System: A Clustering Approach
Pub Date: 2020-09-01 | DOI: 10.1109/5GWF49715.2020.9221434
Nairuhi Grigoryan, E. Matús, G. Fettweis
5G supports a variety of new services with different requirements for throughput, latency, and reliability. Multicore computing platforms are used to meet these requirements while allowing scalability and flexibility in the implementation of base stations. The challenge in this regard is the efficient distribution and processing of signal processing tasks on parallel processors. Moreover, as application complexity increases, the management and synchronization overhead grows disproportionately, which limits the gains in performance and system efficiency. To cope with this problem, reducing application granularity through task clustering was recently proposed and demonstrated an impressive performance improvement. Unfortunately, no practical clustering algorithm has been studied in this regard. Our motivation is to study and design clustering algorithms well suited to these needs. More specifically, we modify the Clustering And Scheduling System II (CASSII) algorithm in order to gain higher speed-ups, and we show the performance improvement with respect to the original algorithm and to unclustered graphs.
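The sketch below is not CASSII; it only illustrates the underlying idea of task clustering, merging tasks whose communication cost exceeds an assumed per-task management overhead so that fewer, coarser units have to be scheduled and synchronized. The task graph and costs are made up for illustration.

```python
# A deliberately simple clustering pass: merge each task into its predecessor's
# cluster when the edge's communication cost outweighs an assumed per-task
# management overhead. Graph, costs, and threshold are illustrative assumptions.
edges = {  # task -> (predecessor, communication_cost)
    "B": ("A", 5), "C": ("A", 1), "D": ("B", 4), "E": ("C", 6),
}
MANAGEMENT_OVERHEAD = 3

cluster = {t: t for t in ("A", "B", "C", "D", "E")}  # each task starts alone

def find(t):
    """Follow cluster links to the representative task of t's cluster."""
    while cluster[t] != t:
        t = cluster[t]
    return t

for task, (pred, comm_cost) in edges.items():
    if comm_cost > MANAGEMENT_OVERHEAD:      # cheaper to keep them in one cluster
        cluster[find(task)] = find(pred)

groups = {}
for t in cluster:
    groups.setdefault(find(t), []).append(t)
print(groups)   # -> {'A': ['A', 'B', 'D'], 'C': ['C', 'E']} for these inputs
```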
Understanding Energy Consumption of Cloud Radio Access Networks: an Experimental Study
Pub Date: 2020-09-01 | DOI: 10.1109/5GWF49715.2020.9221114
Ujjwal Pawar, A. K. Singh, Keval Malde, T. B. Reddy, A. Franklin
Cloud Radio Access Network (C-RAN) is emerging as an attractive solution for operators to cope with ever-increasing user demand in a cost-efficient way. The C-RAN architecture consists of (i) Distributed Units (DUs) located at remote sites along with RF processing units, (ii) a Central Unit (CU) consisting of high-speed programmable processors that perform tasks such as mobility control, radio access network sharing, positioning, and session management, and (iii) a low-latency, high-bandwidth fronthaul link that connects multiple DUs to the CU pool realized on a cloud platform. In traditional C-RAN, the functionalities that the BBUs and RRHs perform are fixed. Instead of such a fixed set of functionalities, 3GPP introduced the concept of functional splits, which allows network stack functions to be shifted between CUs and DUs in next-generation C-RAN. In this paper, a real-time C-RAN testbed running on the OpenAirInterface (OAI) software platform is used to profile the energy consumed by different functional splits, configured by varying the CPU clock frequency and channel bandwidth. It is observed that, at some lower CPU clock frequencies, energy consumption is reduced without affecting system throughput or the overall user experience. With these insights, operators can improve the energy efficiency of deployed C-RAN systems.
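Post-processing for such a study typically reduces sampled power draw to energy per run at each CPU-frequency and bandwidth setting; a minimal sketch with made-up sample data is shown below (it is not the paper's measurement pipeline).

```python
# Sketch of the kind of post-processing such profiling needs: integrate sampled
# power draw into energy per experiment run and relate it to the CPU frequency
# and channel bandwidth setting. Sample data are invented for illustration.
def energy_joules(power_samples_w, sample_period_s):
    """Approximate energy as the sum of power samples times the sampling period."""
    return sum(power_samples_w) * sample_period_s

runs = {
    # (cpu_freq_ghz, bandwidth_mhz): power samples in watts, taken every 0.5 s
    (3.0, 10): [38.0, 39.5, 40.1, 39.8],
    (2.2, 10): [31.2, 30.8, 31.5, 31.0],
}
for (freq, bw), samples in runs.items():
    e = energy_joules(samples, 0.5)
    print(f"{freq} GHz, {bw} MHz: {e:.1f} J over {len(samples) * 0.5:.1f} s")
```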
5G and Beyond 5G Non-Terrestrial Networks: trends and research challenges
Pub Date: 2020-09-01 | DOI: 10.1109/5GWF49715.2020.9221119
A. Vanelli-Coralli, A. Guidotti, T. Foggi, G. Colavolpe, G. Montorsi
The evolution of 5G into beyond-5G and 6G networks aims at responding to our society's increasing need for ubiquitous and continuous connectivity services in all areas of life: from education to finance, from politics to health, from entertainment to environmental protection. The next-generation network communication infrastructure is called upon to support this increasing demand for connectivity by ensuring: energy- and cost-efficiency, to guarantee environmental and economic sustainability; scalability, flexibility, and adaptability, to support the heterogeneity of service characteristics and constraints as well as the variety of equipment; and reliability and dependability, to fulfil its role as a critical infrastructure able to provide global connectivity no matter the social, political, or environmental situation. In this framework, non-terrestrial networks (NTN) are recognized to play a crucial role. It is in fact generally understood that the terrestrial network alone cannot provide the flexibility, scalability, adaptability, and coverage required to meet the above requirements, and the integration of the NTN component is a key enabler. Accordingly, 3GPP has started to address the inclusion of technology enablers in the NR standard to support NTN. However, to fully exploit the potential of the non-terrestrial component in an integrated terrestrial and non-terrestrial architecture, several research and innovation challenges must be addressed. In this paper, we first discuss the current development of NTN in 5G, then present our vision of the role of NTN in B5G and 6G networks, and elaborate on the corresponding research challenges.
How to choose a neural network architecture? – A modulation classification example
Pub Date: 2020-09-01 | DOI: 10.1109/5GWF49715.2020.9221167
Anand N. Warrier, Saidhiraj Amuru
Which neural network architecture should be used for my problem? This is a common question nowadays. Having searched a slew of papers published over the last few years at the intersection of machine learning and wireless communications, the authors found that many researchers working in this multi-disciplinary field continue to ask the same question. In this regard, we make an attempt to provide a guide for choosing neural networks using an example application from the field of wireless communications; specifically, we consider modulation classification. While deep learning has been used to address modulation classification quite extensively with real-world data, none of these papers gives intuition about the neural network architectures that must be chosen to obtain good classification performance. During our study and experiments, we realized that this simple example with simple wireless channel models can be used as a reference for understanding how to choose appropriate deep learning models, specifically neural network architectures, based on the system model of the problem under consideration. In this paper, we provide numerical results to support the intuition that arises in various cases.
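As a point of reference for the kind of model such a guide starts from, the sketch below builds a small convolutional classifier over raw I/Q frames with Keras; the input shape, class count, and layer sizes are illustrative assumptions rather than the architecture recommended by the paper.

```python
# Minimal sketch of a convolutional modulation classifier over raw I/Q frames.
# Input shape (2, 128, 1): one row of I samples, one of Q. All hyperparameters
# are illustrative assumptions; random stand-in data show the tensor shapes.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 11          # e.g. the modulation set of an open I/Q dataset
FRAME = (2, 128, 1)       # 2 rows (I and Q), 128 samples, 1 channel

model = keras.Sequential([
    keras.Input(shape=FRAME),
    layers.Conv2D(64, kernel_size=(1, 3), activation="relu", padding="same"),
    layers.Conv2D(16, kernel_size=(2, 3), activation="relu", padding="valid"),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x = np.random.randn(32, *FRAME).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=32)
model.fit(x, y, epochs=1, batch_size=16, verbose=0)
```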