Title: Artificial Intelligence of Things for Smarter Healthcare: A Survey of Advancements, Challenges, and Opportunities
Pub Date: 2023-03-13 | DOI: 10.1109/COMST.2023.3256323 | IEEE Communications Surveys and Tutorials, vol. 25, no. 2, pp. 1261-1293
Stephanie Baker;Wei Xiang
Healthcare systems are under increasing strain due to a myriad of factors, from a steadily ageing global population to the current COVID-19 pandemic. In a world where we have needed to be connected but apart, the need for enhanced remote and at-home healthcare has become clear. The Internet of Things (IoT) offers a promising solution. The IoT has created a highly connected world, with billions of devices collecting and communicating data from a range of applications, including healthcare. Due to these high volumes of data, a natural synergy with Artificial Intelligence (AI) has become apparent - big data both enables and requires AI to interpret, understand, and make decisions that provide optimal outcomes. In this extensive survey, we thoroughly explore this synergy through an examination of the field of the Artificial Intelligence of Things (AIoT) for healthcare. This work begins by briefly establishing a unified architecture of AIoT in a healthcare context, including sensors and devices, novel communication technologies, and cross-layer AI. We then examine recent research pertaining to each component of the AIoT architecture from several key perspectives, identifying promising technologies, challenges, and opportunities that are unique to healthcare. Several examples of real-world AIoT healthcare use cases are then presented to illustrate the potential of these technologies. Lastly, this work outlines promising directions for future research in AIoT for healthcare.
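To make the idea of cross-layer AI over IoT sensor data concrete, below is a minimal, hypothetical sketch (not taken from the survey) of an edge-side anomaly check that a wearable gateway might run on a heart-rate stream; the window size and z-score threshold are illustrative assumptions.

```python
# Hypothetical sketch: sliding-window z-score anomaly detection on a
# heart-rate stream, as an edge node in an AIoT pipeline might run it.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, z_threshold=3.0):
    """Flag samples deviating more than z_threshold std devs from the recent window."""
    history = deque(maxlen=window)
    flags = []
    for bpm in samples:
        if len(history) >= 2 and stdev(history) > 0:
            z = (bpm - mean(history)) / stdev(history)
            flags.append(abs(z) > z_threshold)
        else:
            flags.append(False)  # not enough context yet to judge
        history.append(bpm)
    return flags

# A resting heart-rate stream with one spurious spike at 140 bpm.
stream = [72, 74, 71, 73, 75, 72, 70, 74, 73, 140, 72, 71]
print(detect_anomalies(stream, window=8, z_threshold=3.0))
```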
{"title":"Artificial Intelligence of Things for Smarter Healthcare: A Survey of Advancements, Challenges, and Opportunities","authors":"Stephanie Baker;Wei Xiang","doi":"10.1109/COMST.2023.3256323","DOIUrl":"https://doi.org/10.1109/COMST.2023.3256323","url":null,"abstract":"Healthcare systems are under increasing strain due to a myriad of factors, from a steadily ageing global population to the current COVID-19 pandemic. In a world where we have needed to be connected but apart, the need for enhanced remote and at-home healthcare has become clear. The Internet of Things (IoT) offers a promising solution. The IoT has created a highly connected world, with billions of devices collecting and communicating data from a range of applications, including healthcare. Due to these high volumes of data, a natural synergy with Artificial Intelligence (AI) has become apparent - big data both enables and requires AI to interpret, understand, and make decisions that provide optimal outcomes. In this extensive survey, we thoroughly explore this synergy through an examination of the field of the Artificial Intelligence of Things (AIoT) for healthcare. This work begins by briefly establishing a unified architecture of AIoT in a healthcare context, including sensors and devices, novel communication technologies, and cross-layer AI. We then examine recent research pertaining to each component of the AIoT architecture from several key perspectives, identifying promising technologies, challenges, and opportunities that are unique to healthcare. Several examples of real-world AIoT healthcare use cases are then presented to illustrate the potential of these technologies. Lastly, this work outlines promising directions for future research in AIoT for healthcare.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 2","pages":"1261-1293"},"PeriodicalIF":35.6,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/9739/10130694/10066875.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49952788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: A Survey on Scalable LoRaWAN for Massive IoT: Recent Advances, Potentials, and Challenges
Pub Date: 2023-03-10 | DOI: 10.1109/COMST.2023.3274934 | IEEE Communications Surveys and Tutorials, vol. 25, no. 3, pp. 1841-1876
Mohammed Jouhari;Nasir Saeed;Mohamed-Slim Alouini;El Mehdi Amhoud
Long Range (LoRa) technology is among the most widely used for enabling low-power wide-area networks (LPWANs) on unlicensed frequency bands. Despite its modest data rates, it provides extensive coverage for low-power devices, making it an ideal communication system for many Internet of Things (IoT) applications. In general, LoRa is considered the physical layer, whereas LoRaWAN is the medium access control (MAC) layer of the LoRa stack; LoRaWAN adopts a star topology to enable communication between multiple end devices (EDs) and the network gateway. Chirp spread spectrum modulation mitigates LoRa signal interference and ensures long-range communication. At the same time, the adaptive data rate mechanism allows EDs to dynamically alter certain LoRa parameters, such as the spreading factor (SF), code rate, and carrier frequency, to address the time variance of communication conditions in dense networks. Despite the high demand for LoRa connectivity, LoRa signal interference and concurrent transmission collisions remain major limitations. Therefore, to enhance LoRaWAN capacity, the LoRa Alliance has released several LoRaWAN versions, and the research community has proposed numerous solutions for scalable LoRaWAN technology. Hence, we thoroughly examine LoRaWAN scalability challenges and state-of-the-art solutions in both the physical and MAC layers. Most of these solutions rely on SF, logical channel, and frequency channel assignment, whereas others propose new network topologies or implement signal processing schemes to cancel interference and allow LoRaWAN to connect more EDs efficiently. A summary of the existing solutions in the literature is provided at the end of the paper, describing the advantages and disadvantages of each solution and suggesting possible enhancements as future research directions.
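As a concrete illustration of the spreading-factor trade-off the abstract describes, the sketch below evaluates the standard first-order LoRa bit-rate relation Rb = SF * (BW / 2^SF) * CR. It is illustrative only: it ignores preamble and header overhead, low-data-rate optimization, and regional duty-cycle limits, and the bandwidth and coding-rate defaults are common but assumed values.

```python
# First-order LoRa PHY bit rate: higher spreading factors trade
# data rate for range, since each symbol lasts 2**SF / BW seconds.

def lora_bit_rate(sf: int, bw_hz: float = 125e3, cr: float = 4 / 5) -> float:
    """Raw LoRa bit rate in bit/s for spreading factor sf, bandwidth bw_hz,
    and code rate cr (e.g., 4/5 for coding rate 4/5)."""
    symbol_rate = bw_hz / 2**sf          # symbols per second
    return sf * symbol_rate * cr         # each symbol carries sf coded bits

for sf in range(7, 13):                  # SF7 through SF12
    print(f"SF{sf}: {lora_bit_rate(sf):7.0f} bit/s")
```

At 125 kHz and coding rate 4/5 this gives roughly 5.5 kbit/s at SF7 down to about 0.3 kbit/s at SF12, which is why SF assignment is central to the capacity solutions surveyed.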
{"title":"A Survey on Scalable LoRaWAN for Massive IoT: Recent Advances, Potentials, and Challenges","authors":"Mohammed Jouhari;Nasir Saeed;Mohamed-Slim Alouini;El Mehdi Amhoud","doi":"10.1109/COMST.2023.3274934","DOIUrl":"https://doi.org/10.1109/COMST.2023.3274934","url":null,"abstract":"Long-range (LoRa) technology is most widely used for enabling low-power wide area networks (WANs) on unlicensed frequency bands. Despite its modest data rates, it provides extensive coverage for low-power devices, making it an ideal communication system for many Internet of Things (IoT) applications. In general, LoRa is considered as the physical layer, whereas LoRaWAN is the medium access control (MAC) layer of the LoRa stack that adopts a star topology to enable communication between multiple end devices (EDs) and the network gateway. The chirp spread spectrum modulation deals with LoRa signal interference and ensures long-range communication. At the same time, the adaptive data rate mechanism allows EDs to dynamically alter some LoRa features, such as the spreading factor (SF), code rate, and carrier frequency to address the time variance of communication conditions in dense networks. Despite the high LoRa connectivity demand, LoRa signals interference and concurrent transmission collisions are major limitations. Therefore, to enhance LoRaWAN capacity, the LoRa Alliance released many LoRaWAN versions, and the research community has provided numerous solutions to develop scalable LoRaWAN technology. Hence, we thoroughly examine LoRaWAN scalability challenges and state-of-the-art solutions in both the physical and MAC layers. These solutions primarily rely on SF, logical, and frequency channel assignment, whereas others propose new network topologies or implement signal processing schemes to cancel the interference and allow LoRaWAN to connect more EDs efficiently. A summary of the existing solutions in the literature is provided at the end of the paper, describing the advantages and disadvantages of each solution and suggesting possible enhancements as future research directions.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 3","pages":"1841-1876"},"PeriodicalIF":35.6,"publicationDate":"2023-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49963515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: A Concise Tutorial on Traffic Shaping and Scheduling in Time-Sensitive Networks
Pub Date: 2023-03-10 | DOI: 10.1109/COMST.2023.3275038 | IEEE Communications Surveys and Tutorials, vol. 25, no. 3, pp. 1941-1953
Yeongjae Kim
Over the last two decades, the IEEE has been defining standards for Time-Sensitive Networking (TSN). These standards combine traffic shaping and scheduling mechanisms to guarantee bounded delays across an Ethernet network. Time-Sensitive Networks are designed for applications where delays are critical, such as process control, networks for vehicles and aircraft, and virtual reality applications. Many industrial companies are actively implementing TSN standards. The goal of this paper is to provide a concise and self-contained description of the TSN mechanisms and how they affect network performance metrics. The paper is intended for practicing engineers wanting to improve their understanding of TSN and for students curious about these mechanisms.
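As one small taste of the scheduling mechanisms such tutorials cover, the sketch below simulates an IEEE 802.1Qbv-style time-aware shaper, in which a cyclic gate-control list decides which egress queues may transmit at any instant; the list entries and cycle time here are invented for illustration, not taken from the paper.

```python
# Minimal sketch of a time-aware shaper: a cyclic gate-control list (GCL)
# opens and closes per-queue transmission gates on a repeating schedule.

# Each entry: (duration in microseconds, set of queue indices whose gate is open)
GCL = [
    (250, {7}),        # reserved slot for the time-critical queue
    (500, {0, 1, 2}),  # best-effort queues
    (250, {7}),
    (500, {0, 1, 2}),
]
CYCLE_US = sum(duration for duration, _ in GCL)

def open_gates(t_us: float) -> set:
    """Return the set of queues allowed to transmit at time t_us."""
    t = t_us % CYCLE_US          # the schedule repeats every cycle
    for duration, queues in GCL:
        if t < duration:
            return queues
        t -= duration
    return set()                 # unreachable for a well-formed GCL

print(open_gates(100))    # {7}: inside the first critical slot
print(open_gates(600))    # {0, 1, 2}: best-effort window
```

Because the critical queue has exclusive slots, its frames never contend with best-effort traffic, which is how bounded delay is obtained.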
{"title":"A Concise Tutorial on Traffic Shaping and Scheduling in Time-Sensitive Networks","authors":"Yeongjae Kim","doi":"10.1109/COMST.2023.3275038","DOIUrl":"https://doi.org/10.1109/COMST.2023.3275038","url":null,"abstract":"Over the last two decades, IEEE has been defining standards for Time-Sensitive Networking (TSN). These standards combine traffic shaping and scheduling mechanisms to guarantee bounded delays across an Ethernet network. Time-Sensitive Networks are designed for applications where delays are critical, such as process control, networks for vehicles and aircraft, and virtual reality applications. Many industrial companies are currently actively implementing TSN standards. The goal of this paper is to provide a concise and self-contained description of the TSN mechanisms and how they affect the network performance metrics. The paper is intended for practicing engineers wanting to improve their understanding of TSN and for students curious about these mechanisms.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 3","pages":"1941-1953"},"PeriodicalIF":35.6,"publicationDate":"2023-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49963700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: A Survey of Important Issues in Quantum Computing and Communications
Pub Date: 2023-03-08 | DOI: 10.1109/COMST.2023.3254481 | IEEE Communications Surveys and Tutorials, vol. 25, no. 2, pp. 1059-1094
Zebo Yang;Maede Zolanvari;Raj Jain
Driven by the rapid progress in quantum hardware, recent years have witnessed a furious race for quantum technologies in both academia and industry. Universal quantum computers now support up to hundreds of qubits, while the scale of quantum annealers has reached three orders of magnitude (i.e., thousands of qubits), and quantum computing power keeps climbing. This race has consequently generated an overwhelming number of research papers and documents. This article provides an entry point for interested readers to learn the key aspects of quantum computing and communications from a computer science perspective. It begins with a pedagogical introduction and then reviews the key milestones and recent advances in quantum computing. The key elements of a quantum Internet are categorized into four important issues, which are investigated in detail: a) quantum computers, b) quantum networks, c) quantum cryptography, and d) quantum machine learning. Finally, the article identifies and discusses the main barriers, major research directions, and trends.
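For readers new to the notation, the toy example below (not from the survey) uses plain NumPy linear algebra to show the two basic objects of the computer-science view of quantum computing: a single-qubit state and a gate acting on it, here the Hadamard gate with Born-rule measurement probabilities.

```python
# Toy illustration: a single qubit, the Hadamard gate, and measurement.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)        # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0                                 # equal superposition (|0> + |1>)/sqrt(2)
probs = np.abs(psi) ** 2                       # Born rule: measurement probabilities
print(probs)                                   # [0.5 0.5]

# Applying H twice returns the qubit to |0>: H is its own inverse.
print(np.allclose(H @ psi, ket0))              # True
```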
{"title":"A Survey of Important Issues in Quantum Computing and Communications","authors":"Zebo Yang;Maede Zolanvari;Raj Jain","doi":"10.1109/COMST.2023.3254481","DOIUrl":"https://doi.org/10.1109/COMST.2023.3254481","url":null,"abstract":"Driven by the rapid progress in quantum hardware, recent years have witnessed a furious race for quantum technologies in both academia and industry. Universal quantum computers have supported up to hundreds of qubits, while the scale of quantum annealers has reached three orders of magnitude (i.e., thousands of qubits). Quantum computing power keeps climbing. Race has consequently generated an overwhelming number of research papers and documents. This article provides an entry point for interested readers to learn the key aspects of quantum computing and communications from a computer science perspective. It begins with a pedagogical introduction and then reviews the key milestones and recent advances in quantum computing. In this article, the key elements of a quantum Internet are categorized into four important issues, which are investigated in detail: a) quantum computers, b) quantum networks, c) quantum cryptography, and d) quantum machine learning. Finally, the article identifies and discusses the main barriers, the major research directions, and trends.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 2","pages":"1059-1094"},"PeriodicalIF":35.6,"publicationDate":"2023-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/9739/10130694/10064036.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49952907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Machine Learning for Service Migration: A Survey
Pub Date: 2023-03-08 | DOI: 10.1109/COMST.2023.3273121 | IEEE Communications Surveys and Tutorials, vol. 25, no. 3, pp. 1991-2020
Nassima Toumi;Miloud Bagaa;Adlen Ksentini
Future communication networks are envisioned to satisfy increasingly granular and dynamic requirements to accommodate application and user demands. Indeed, novel immersive and mission-critical services necessitate increased computing and network resources, reduced communication latency, and guaranteed reliability. Thus, efficient and adaptive resource management schemes are required to provide and maintain sufficient levels of Quality of Experience (QoE) during the service life-cycle. Service migration is considered a key enabler of dynamic service orchestration: moving services on demand is an efficient mechanism for supporting user mobility, balancing load when service demands fluctuate, and mitigating hardware failures. However, service migration requires planning, as multiple parameters must be optimized to keep service disruption to a minimum. Recent breakthroughs in computational capabilities have allowed the emergence of Machine Learning (ML) as a decision-making tool that is expected to enable seamless automation of network resource management by predicting events and learning optimal decision policies. This paper surveys contributions applying ML methods to optimize service migration, providing a detailed literature review of recent advances in the field and establishing a classification of current research efforts with an analysis of their strengths and limitations. Finally, the paper provides insights on the main directions for future research.
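At its core, the decision such ML schemes learn can be caricatured as a cost comparison; the sketch below is an illustrative assumption, not a method from any surveyed paper: migrate a service when the predicted penalty of staying put over a prediction horizon exceeds the one-off migration cost.

```python
# Hypothetical greedy migration rule: the quantities a learned predictor
# would supply (stay cost, horizon) are passed in as plain numbers here.

def should_migrate(predicted_stay_cost: float,
                   migration_cost: float,
                   horizon_steps: int) -> bool:
    """Migrate if the per-step penalty of staying, summed over the
    prediction horizon, exceeds the one-off migration cost."""
    return predicted_stay_cost * horizon_steps > migration_cost

# A user moving away from the current edge site raises the stay cost.
print(should_migrate(predicted_stay_cost=2.5, migration_cost=20, horizon_steps=10))  # True
print(should_migrate(predicted_stay_cost=1.0, migration_cost=20, horizon_steps=10))  # False
```

Real schemes replace both the predictor and the rule itself with learned models (e.g., reinforcement learning policies), which is what the survey classifies.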
{"title":"Machine Learning for Service Migration: A Survey","authors":"Nassima Toumi;Miloud Bagaa;Adlen Ksentini","doi":"10.1109/COMST.2023.3273121","DOIUrl":"https://doi.org/10.1109/COMST.2023.3273121","url":null,"abstract":"Future communication networks are envisioned to satisfy increasingly granular and dynamic requirements to accommodate the application and user demands. Indeed, novel immersive and mission-critical services necessitate increased computing and network resources, reduced communication latency, and guaranteed reliability. Thus, efficient and adaptive resource management schemes are required to provide and maintain sufficient levels of Quality of Experience (QoE) during the service life-cycle. Service migration is considered a key enabler of dynamic service orchestration. Indeed, moving services on demand is an efficient mechanism for user mobility support, load balancing in case of fluctuations in service demands, and hardware failure mitigation. However, service migration requires planning, as multiple parameters must be optimized to reduce service disruption to a minimum. Recent breakthroughs in computational capabilities allowed the emergence of Machine Learning as a tool for decision making that is expected to enable seamless automation of network resource management by predicting events and learning optimal decision policies. This paper surveys contributions applying Machine Learning (ML) methods to optimize service migration, providing a detailed literature review on recent advances in the field and establishing a classification of current research efforts with an analysis of their strengths and limitations. Finally, the paper provides insights on the main directions for future research.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 3","pages":"1991-2020"},"PeriodicalIF":35.6,"publicationDate":"2023-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49963431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Security of Wide-Area Monitoring, Protection, and Control (WAMPAC) Systems of the Smart Grid: A Survey on Challenges and Opportunities
Pub Date: 2023-03-08 | DOI: 10.1109/COMST.2023.3251899 | IEEE Communications Surveys and Tutorials, vol. 25, no. 2, pp. 1294-1335
Saghar Vahidi;Mohsen Ghafouri;Minh Au;Marthe Kassouf;Arash Mohammadi;Mourad Debbabi
The evolution of power generation systems, along with their attendant increase in complexity, has made Wide-Area Monitoring, Protection, and Control (WAMPAC) systems a critical necessity in today’s smart grid. Recent developments in smart measurement devices coupled with data communication technologies allow for significant improvements in power systems’ reliability, efficiency, and security. These technological advancements make WAMPAC systems of significant practical interest. However, the geographically distributed nature of such systems increases the potential attack surface and the risk of critical vulnerabilities. Thus, it is of paramount importance to identify the related threats and vulnerabilities as well as the prominent solutions relevant to WAMPAC systems. Consequently, this paper aims to provide a comprehensive review of WAMPAC security aspects along with the state-of-the-art research initiatives addressing such aspects. Specifically, this paper provides critical taxonomies of the cyber-security scope of WAMPAC that guide the accompanying survey of recent studies in the WAMPAC security domain. As such, this article aims to pave the way for prospective researchers to pursue further studies in areas that require in-depth investigation into the security, reliability, and efficiency of WAMPAC as the backbone of smart grids.
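As a small illustration of one generic integrity control relevant to this threat model (a sketch of standard message authentication, not a scheme from the survey), the code below tags a synchrophasor-style measurement with an HMAC so the receiving control center can detect in-transit tampering; the key handling and message format are hypothetical.

```python
# Generic HMAC authentication of a measurement record (illustrative only).
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-distributed-out-of-band"   # assumption: pre-shared key

def sign(measurement: dict) -> str:
    payload = json.dumps(measurement, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(measurement: dict, tag: str) -> bool:
    return hmac.compare_digest(sign(measurement), tag)

pmu_sample = {"pmu_id": 17, "ts": 1678276800.0, "freq_hz": 59.98, "angle_deg": 12.4}
tag = sign(pmu_sample)
print(verify(pmu_sample, tag))                      # True: untampered

pmu_sample["freq_hz"] = 62.0                        # attacker alters the reading
print(verify(pmu_sample, tag))                      # False: tampering detected
```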
{"title":"Security of Wide-Area Monitoring, Protection, and Control (WAMPAC) Systems of the Smart Grid: A Survey on Challenges and Opportunities","authors":"Saghar Vahidi;Mohsen Ghafouri;Minh Au;Marthe Kassouf;Arash Mohammadi;Mourad Debbabi","doi":"10.1109/COMST.2023.3251899","DOIUrl":"https://doi.org/10.1109/COMST.2023.3251899","url":null,"abstract":"The evolution of power generation systems, along with their related increase in complexity, led to the critical necessity of Wide-Area Monitoring, Protection, and Control (WAMPAC) systems in today’s smart grid. Recent developments in smart measurement devices coupled with data communication technologies allow for significant improvements in power systems’ reliability, efficiency, and security. These technological advancements make WAMPAC systems of significant practical interest. However, the geographically distributed nature of such systems increases the potential attack surface and the risk of critical vulnerabilities. Thus, it is of paramount importance to identify the related threats and vulnerabilities as well as the prominent solutions relevant to WAMPAC systems. Consequently, this paper aims to provide a comprehensive review of WAMPAC security aspects along with the state-of-the-art research initiatives addressing such aspects. Specifically, this paper provides critical taxonomies of the cyber-security scope of WAMPAC that guide the accompanying survey of the recent studies on the WAMPAC security domain. As such, this article aims to pave the way for prospective researchers to pursue further studies in areas that require in-depth investigation into the security, reliability, and efficiency of WAMPAC as the backbone of smart grids.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 2","pages":"1294-1335"},"PeriodicalIF":35.6,"publicationDate":"2023-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49952789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Cyber Threat Intelligence Mining for Proactive Cybersecurity Defense: A Survey and New Perspectives
Pub Date: 2023-03-05 | DOI: 10.1109/COMST.2023.3273282 | IEEE Communications Surveys and Tutorials, vol. 25, no. 3, pp. 1748-1774
Nan Sun;Ming Ding;Jiaojiao Jiang;Weikang Xu;Xiaoxing Mo;Yonghang Tai;Jun Zhang
Today’s cyber attacks have become more severe and frequent, calling for a new line of security defenses to protect against them. The dynamic nature of new-generation threats, which are evasive, resilient, and complex, makes traditional security systems based on heuristics and signatures struggle to keep pace. Organizations aim to gather and share real-time cyber threat information and then turn it into threat intelligence for preventing attacks or, at the very least, responding quickly in a proactive manner. Cyber Threat Intelligence (CTI) mining, which uncovers, processes, and analyzes valuable information about cyber threats, is booming. However, most organizations today focus mainly on basic use cases, such as integrating threat data feeds with existing network and firewall systems, intrusion prevention systems, and Security Information and Event Management (SIEM) systems, without taking advantage of the insights such new intelligence can deliver. To make the most of CTI and significantly strengthen security postures, we present a comprehensive review of recent research efforts on CTI mining from multiple data sources. Specifically, we devise a taxonomy to summarize the studies on CTI mining based on their intended purposes (i.e., cybersecurity-related entities and events; cyber attack tactics, techniques, and procedures; profiles of hackers; indicators of compromise; vulnerability exploits and malware implementation; and threat hunting), along with a comprehensive review of the current state of the art. Lastly, we discuss research challenges and possible future research directions for CTI mining.
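One of the most basic CTI-mining tasks named in the abstract, extracting indicators of compromise (IoCs) from unstructured threat reports, can be sketched in a few lines; the regular expressions below are deliberately simplified illustrations, not production patterns.

```python
# Toy IoC extraction from free-text threat reports with regexes.
import re

IOC_PATTERNS = {
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "domain": re.compile(r"\b[a-z0-9-]+\.(?:com|net|org|io)\b"),
}

def extract_iocs(report: str) -> dict:
    """Return each IoC type with the sorted, de-duplicated matches found."""
    return {kind: sorted(set(p.findall(report))) for kind, p in IOC_PATTERNS.items()}

report = ("The implant beacons to 203.0.113.45 and evil-update.net; "
          "dropper hash " + "a" * 64 + " was observed.")
print(extract_iocs(report))
```

Research-grade CTI mining replaces such patterns with entity-recognition and relation-extraction models, which is the literature the survey organizes.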
{"title":"Cyber Threat Intelligence Mining for Proactive Cybersecurity Defense: A Survey and New Perspectives","authors":"Nan Sun;Ming Ding;Jiaojiao Jiang;Weikang Xu;Xiaoxing Mo;Yonghang Tai;Jun Zhang","doi":"10.1109/COMST.2023.3273282","DOIUrl":"https://doi.org/10.1109/COMST.2023.3273282","url":null,"abstract":"Today’s cyber attacks have become more severe and frequent, which calls for a new line of security defenses to protect against them. The dynamic nature of new-generation threats, which are evasive, resilient, and complex, makes traditional security systems based on heuristics and signatures struggle to match. Organizations aim to gather and share real-time cyber threat information and then turn it into threat intelligence for preventing attacks or, at the very least, responding quickly in a proactive manner. Cyber Threat Intelligence (CTI) mining, which uncovers, processes, and analyzes valuable information about cyber threats, is booming. However, most organizations today mainly focus on basic use cases, such as integrating threat data feeds with existing network and firewall systems, intrusion prevention systems, and Security Information and Event Management systems (SIEMs), without taking advantage of the insights that such new intelligence can deliver. In order to make the most of CTI so as to significantly strengthen security postures, we present a comprehensive review of recent research efforts on CTI mining from multiple data sources in this article. Specifically, we provide and devise a taxonomy to summarize the studies on CTI mining based on the intended purposes (i.e., cybersecurity-related entities and events, cyber attack tactics, techniques and procedures, profiles of hackers, indicators of compromise, vulnerability exploits and malware implementation, and threat hunting), along with a comprehensive review of the current state-of-the-art. Lastly, we discuss research challenges and possible future research directions for CTI mining.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 3","pages":"1748-1774"},"PeriodicalIF":35.6,"publicationDate":"2023-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/9739/10226436/10117505.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49963512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Toward Addressing Training Data Scarcity Challenge in Emerging Radio Access Networks: A Survey and Framework
Pub Date: 2023-03-01 | DOI: 10.1109/COMST.2023.3271419 | IEEE Communications Surveys and Tutorials, vol. 25, no. 3, pp. 1954-1990
Haneya Naeem Qureshi;Usama Masood;Marvin Manalastas;Syed Muhammad Asad Zaidi;Hasan Farooq;Julien Forgeat;Maxime Bouton;Shruti Bothe;Per Karlsson;Ali Rizwan;Ali Imran
The future of cellular networks is contingent on artificial intelligence (AI) based automation, particularly for radio access network (RAN) operation, optimization, and troubleshooting. To achieve such zero-touch automation, a myriad of AI-based solutions that model and optimize network behavior are being proposed in the literature. However, to work reliably, AI-based automation requires a deluge of training data. Consequently, the success of the proposed AI solutions is limited by a fundamental challenge faced by the cellular network research community: the scarcity of training data. In this paper, we present an extensive review of classic and emerging techniques to address this challenge. We first identify the common data types in the RAN and their known use cases. We then present a taxonomized survey of techniques used in the literature to address training data scarcity for various data types. This is followed by a framework for addressing training data scarcity. The proposed framework builds on available information and a combination of techniques, including interpolation, domain-knowledge-based methods, generative adversarial networks (GANs), transfer learning, autoencoders, few-shot learning, simulators, and testbeds. Potential new techniques to enrich scarce data in cellular networks are also proposed, such as matrix-completion-based techniques and domain-knowledge-based techniques leveraging different types of network geometries and network parameters. In addition, an overview of state-of-the-art simulators and testbeds is presented to make readers aware of current and emerging platforms for accessing real data. The extensive survey of data-scarcity techniques, combined with the proposed framework for selecting a suitable technique for a given data type, can assist researchers and network operators in choosing appropriate methods to overcome the data scarcity challenge when leveraging AI for radio access network automation.
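To ground one item from the surveyed toolbox, the sketch below performs interpolation-based augmentation, creating synthetic samples as convex combinations of real ones (a mixup-style scheme); the feature values are invented placeholders, not real RAN measurements.

```python
# Minimal interpolation-based augmentation of a scarce dataset.
import random

def interpolate_augment(samples, n_new, rng=random.Random(0)):
    """Create n_new synthetic samples as convex combinations of random pairs."""
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(samples, 2)
        lam = rng.random()                      # mixing coefficient in [0, 1)
        synthetic.append([lam * x + (1 - lam) * y for x, y in zip(a, b)])
    return synthetic

# Scarce measurements: [RSRP (dBm), SINR (dB)] at a handful of locations.
real = [[-95.0, 12.0], [-101.0, 8.5], [-88.0, 17.0], [-110.0, 3.0]]
print(interpolate_augment(real, n_new=3))
```

Interpolation assumes the feature space is locally smooth; the other surveyed techniques (GANs, transfer learning, matrix completion) target cases where that assumption breaks.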
{"title":"Toward Addressing Training Data Scarcity Challenge in Emerging Radio Access Networks: A Survey and Framework","authors":"Haneya Naeem Qureshi;Usama Masood;Marvin Manalastas;Syed Muhammad Asad Zaidi;Hasan Farooq;Julien Forgeat;Maxime Bouton;Shruti Bothe;Per Karlsson;Ali Rizwan;Ali Imran","doi":"10.1109/COMST.2023.3271419","DOIUrl":"https://doi.org/10.1109/COMST.2023.3271419","url":null,"abstract":"The future of cellular networks is contingent on artificial intelligence (AI) based automation, particularly for radio access network (RAN) operation, optimization, and troubleshooting. To achieve such zero-touch automation, a myriad of AI-based solutions are being proposed in literature to leverage AI for modeling and optimizing network behavior to achieve the zero-touch automation goal. However, to work reliably, AI based automation, requires a deluge of training data. Consequently, the success of the proposed AI solutions is limited by a fundamental challenge faced by cellular network research community: scarcity of the training data. In this paper, we present an extensive review of classic and emerging techniques to address this challenge. We first identify the common data types in RAN and their known use-cases. We then present a taxonomized survey of techniques used in literature to address training data scarcity for various data types. This is followed by a framework to address the training data scarcity. The proposed framework builds on available information and combination of techniques including interpolation, domain-knowledge based, generative adversarial neural networks, transfer learning, autoencoders, few-shot learning, simulators and testbeds. Potential new techniques to enrich scarce data in cellular networks are also proposed, such as by matrix completion theory, and domain knowledge-based techniques leveraging different types of network geometries and network parameters. In addition, an overview of state-of-the art simulators and testbeds is also presented to make readers aware of current and emerging platforms to access real data in order to overcome the data scarcity challenge. The extensive survey of training data scarcity addressing techniques combined with proposed framework to select a suitable technique for given type of data, can assist researchers and network operators in choosing the appropriate methods to overcome the data scarcity challenge in leveraging AI to radio access network automation.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 3","pages":"1954-1990"},"PeriodicalIF":35.6,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/9739/10226436/10113782.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49963432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: On the Road to 6G: Visions, Requirements, Key Technologies, and Testbeds
Pub Date: 2023-02-27 | DOI: 10.1109/COMST.2023.3249835 | IEEE Communications Surveys and Tutorials, vol. 25, no. 2, pp. 905-974
Cheng-Xiang Wang;Xiaohu You;Xiqi Gao;Xiuming Zhu;Zixin Li;Chuan Zhang;Haiming Wang;Yongming Huang;Yunfei Chen;Harald Haas;John S. Thompson;Erik G. Larsson;Marco Di Renzo;Wen Tong;Peiying Zhu;Xuemin Shen;H. Vincent Poor;Lajos Hanzo
Fifth generation (5G) mobile communication systems have entered the stage of commercial deployment, providing users with new services and improved user experiences, as well as offering a host of novel opportunities to various industries. However, 5G still faces many challenges. To address these challenges, international industrial, academic, and standards organizations have commenced research on sixth generation (6G) wireless communication systems. A series of white papers and survey papers have been published that aim to define 6G in terms of requirements, application scenarios, key technologies, etc. Although ITU-R has been working on the 6G vision and is expected to reach a consensus on what 6G will be by mid-2023, the related global discussions are still wide open and the existing literature has identified numerous open issues. This paper first provides a comprehensive portrayal of the 6G vision, technical requirements, and application scenarios, covering the current common understanding of 6G. Then, a critical appraisal of the 6G network architecture and key technologies is presented. Furthermore, existing testbeds and advanced 6G verification platforms are detailed for the first time. In addition, future research directions and open challenges are identified to stimulate the ongoing global debate. Finally, lessons learned to date concerning 6G networks are discussed.
{"title":"On the Road to 6G: Visions, Requirements, Key Technologies, and Testbeds","authors":"Cheng-Xiang Wang;Xiaohu You;Xiqi Gao;Xiuming Zhu;Zixin Li;Chuan Zhang;Haiming Wang;Yongming Huang;Yunfei Chen;Harald Haas;John S. Thompson;Erik G. Larsson;Marco Di Renzo;Wen Tong;Peiying Zhu;Xuemin Shen;H. Vincent Poor;Lajos Hanzo","doi":"10.1109/COMST.2023.3249835","DOIUrl":"https://doi.org/10.1109/COMST.2023.3249835","url":null,"abstract":"Fifth generation (5G) mobile communication systems have entered the stage of commercial deployment, providing users with new services, improved user experiences as well as a host of novel opportunities to various industries. However, 5G still faces many challenges. To address these challenges, international industrial, academic, and standards organizations have commenced research on sixth generation (6G) wireless communication systems. A series of white papers and survey papers have been published, which aim to define 6G in terms of requirements, application scenarios, key technologies, etc. Although ITU-R has been working on the 6G vision and it is expected to reach a consensus on what 6G will be by mid-2023, the related global discussions are still wide open and the existing literature has identified numerous open issues. This paper first provides a comprehensive portrayal of the 6G vision, technical requirements, and application scenarios, covering the current common understanding of 6G. Then, a critical appraisal of the 6G network architecture and key technologies is presented. Furthermore, existing testbeds and advanced 6G verification platforms are detailed for the first time. In addition, future research directions and open challenges are identified to stimulate the on-going global debate. Finally, lessons learned to date concerning 6G networks are discussed.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 2","pages":"905-974"},"PeriodicalIF":35.6,"publicationDate":"2023-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49952904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Editorial: First Quarter 2023 IEEE Communications Surveys and Tutorials
Pub Date: 2023-02-23 | DOI: 10.1109/COMST.2023.3238202 | IEEE Communications Surveys and Tutorials, vol. 25, no. 1, pp. i-vii
Dusit Niyato
I welcome you to the first issue of the IEEE COMMUNICATIONS SURVEYS AND TUTORIALS in 2023. This issue includes 25 papers covering different aspects of communication networks. In particular, these articles survey and provide tutorials on various issues in “Wireless Communications”, “Cyber Security”, “IoT and M2M”, “Internet Technologies”, “Network Virtualization”, “Network and Service Management and Green Communications”, and “Vehicular and Sensor Communications”. A brief account of each of these papers is given below.
{"title":"Editorial: First Quarter 2023 IEEE Communications Surveys and Tutorials","authors":"Dusit Niyato","doi":"10.1109/COMST.2023.3238202","DOIUrl":"https://doi.org/10.1109/COMST.2023.3238202","url":null,"abstract":"Iwelcome you to the first issue of the IEEE COMMUNICATIONS SURVEYS AND TUTORIALS in 2023. This issue includes 25 papers covering different aspects of communication networks. In particular, these articles survey and tutor various issues in “Wireless Communications”, “Cyber Security”, “IoT and M2M”, “Internet Technologies”, “Network Virtualization”, “Network and Service Management and Green Communications”, and “Vehicular and Sensor Communications”. A brief account for each of these papers is given below.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 1","pages":"i-vii"},"PeriodicalIF":35.6,"publicationDate":"2023-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/9739/10051138/10051143.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49931811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}