Pub Date : 2023-08-22 DOI: 10.1109/COMST.2023.3301328
Dusit Niyato
I welcome you to the third issue of the IEEE Communications Surveys and Tutorials in 2023. This issue includes 18 papers covering different aspects of communication networks. In particular, these articles survey and provide tutorials on various topics in “Wireless Communications,” “Cyber Security,” “IoT and M2M,” “Internet Technologies,” “Network Virtualization,” and “Network and Service Management and Green Communications.” A brief account of each of these papers is given below.
{"title":"Editorial: Third Quarter 2023 IEEE Communications Surveys and Tutorials","authors":"Dusit Niyato","doi":"10.1109/COMST.2023.3301328","DOIUrl":"https://doi.org/10.1109/COMST.2023.3301328","url":null,"abstract":"I welcome you to the third issue of the IEEE Communications Surveys and Tutorials in 2023. This issue includes 18 papers covering different aspects of communication networks. In particular, these articles survey and tutor various issues in “Wireless Communications,” “Cyber Security,” “IoT and M2M,” “Internet Technologies,” “Network Virtualization,” and “Network and Service Management and Green Communications.” A brief account for each of these papers is given below.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 3","pages":"i-vi"},"PeriodicalIF":35.6,"publicationDate":"2023-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/9739/10226436/10226448.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49963699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-08-17 DOI: 10.1109/COMST.2023.3304982
Muhammad Waseem Khan;Guojie Li;Keyou Wang;Muhammad Numan;Linyun Xiong;Muhammad Azam Khan
Multi-energy generation grids (MEGGs) provide a promising solution for the reliable, cooperative operation of various distributed energy resources (DERs), supplying environmentally friendly energy to remote/off-grid areas and improving overall system performance in terms of efficiency, reliability, flexibility, and resiliency. However, with growing grid penetration and the presence of various DERs with unpredictable renewables-based power generation and intermittent power loads, operational coordination and supervision become more complex. Communication-based optimal distributed control plays a significant role in MEGGs by coordinating an assembly of spatially distributed and heterogeneous DERs, improving reliability, efficiency, scalability, robustness, and privacy preservation compared with traditional centralized controls. Therefore, this article aims to study different grid architectures and provide a comprehensive survey of optimal control and communication strategies/systems (CCS) in MEGGs.
A well-organized and systematic discussion of the topic is provided, elaborating on: 1) energy production and distribution with various grid architectures and the integration of distributed generating units (DGUs) for sustainable power generation, the importance of unit sizing and technology selection, and their implementation and operation; 2) a classification of numerous control architectures and techniques, their prominent features, and their impact on MEGG stability; 3) multiple advanced intelligent control strategies and their essential aspects and merits; 4) different promising communication networks and technologies, with optimal communication protocols and standards, along with their computational mechanisms and potential operational objectives in MEGGs; 5) communication strategy features and reliability issues concerning data volume, data availability, data accuracy, data security and authentication, time synchronization, and the growth of countermeasures; and 6) finally, key research gaps and recommendations for future research works to efficiently handle MEGG control, security, and communication network requirements.
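The distributed control approach the survey covers — coordinating spatially distributed, heterogeneous DERs through peer-to-peer communication rather than a central controller — can be illustrated with a toy average-consensus sketch. The ring topology, step size, and set-point values below are hypothetical illustrations, not taken from the survey:

```python
# Toy average-consensus among DERs: each unit repeatedly averages its
# power set-point with its communication neighbours, converging to the
# network-wide mean without any central coordinator.

def consensus_step(values, neighbours, eps=0.3):
    """One synchronous update: x_i += eps * sum_j (x_j - x_i) over neighbours j."""
    return [
        x + eps * sum(values[j] - x for j in neighbours[i])
        for i, x in enumerate(values)
    ]

def run_consensus(values, neighbours, steps=100):
    for _ in range(steps):
        values = consensus_step(values, neighbours)
    return values

# Four DERs on a ring topology with hypothetical set-points (kW).
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
final = run_consensus([10.0, 40.0, 20.0, 30.0], ring)
# All units converge to the global average of 25.0 kW.
```

Because the update is symmetric, the total injected power is preserved at every step — a property real distributed dispatch schemes rely on.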
{"title":"Optimal Control and Communication Strategies in Multi-Energy Generation Grid","authors":"Muhammad Waseem Khan;Guojie Li;Keyou Wang;Muhammad Numan;Linyun Xiong;Muhammad Azam Khan","doi":"10.1109/COMST.2023.3304982","DOIUrl":"10.1109/COMST.2023.3304982","url":null,"abstract":"Multi-energy generation grids (MEGGs) provide a promising solution for reliable operations of cooperative various distributed energy resources (DERs), supply environmentally friendly energy to remote/off-grid areas, and improve overall system performance in terms of efficiency, reliability, flexibility, and resiliency. However, with the penetration of grids and the presence of various DERs with unpredictable renewables-based power generation and intermittent power loads, the operational coordination and supervision tasks become more complex. The communication-based optimal distributed control approach plays a significant role in MEGGs for coordinating an assembly of spatially and heterogeneous DERs, which improves reliability, efficiency, scalability, robustness, and privacy-preserving compared with traditional centralized-based controls. Therefore, this article aims to study different grid architectures and provide a comprehensive survey of optimal control and communication strategies/systems (CCS) in MEGG. 
A well-organized and systematic discussion related to the topic has been provided and elaborated on: 1) energy production and distribution with various grid architectures and distributed generating units (DGUs) integration for sustainable power generation, importance of unit sizing and technologies selection, and their implementations and operations; 2) classification on numerous control architectures and techniques, their prominent features and impact on MEGG stability; 3) multiple advanced intelligent control strategies and their essential aspects and merits; 4) different promising communication networks and technologies with optimal communication protocols and standards along with their computational mechanism and potential operational objectives in MEGGs; 5) communication strategies features and reliability issues concerning data volume, data availability, data accuracy, data security and authentication, time synchronization, and the growth of countermeasures; and 6) finally, key research gaps are highlighted and some recommendations are provided for future research works to efficiently handle the MEGG control, security, and communication network requirements.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 4","pages":"2599-2653"},"PeriodicalIF":35.6,"publicationDate":"2023-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125556838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-08-15 DOI: 10.1109/COMST.2023.3305468
José Gaspar;Tiago Cruz;Chan-Tong Lam;Paulo Simões
Electrical grids generate, transport, distribute and deliver electrical power to consumers through a complex Critical Infrastructure which has progressively shifted from an air-gapped to a connected architecture. Specifically, Smart Substations are important parts of Smart Grids, providing switching, transforming, monitoring, metering and protection functions to offer a safe, efficient and reliable distribution of electrical power to consumers. The evolution of electrical power grids was closely followed by the digitization of all their parts and by improvements in communication and computing infrastructures, leading towards digital smart substations with improved connectivity. However, connected smart substations are exposed to cyber threats which can result in blackouts and faults that may propagate in a chain reaction and damage electrical appliances connected across the grid. This work organizes and offers a comprehensive review of architectural, communications and cybersecurity standards for smart substations, complemented by a threat landscape analysis and the presentation of a Defense-in-Depth strategy blueprint. Furthermore, this work examines several defense mechanisms documented in the literature, existing datasets, testbeds and evaluation methodologies, identifying the most relevant open issues which may guide and inspire future research work.
{"title":"Smart Substation Communications and Cybersecurity: A Comprehensive Survey","authors":"José Gaspar;Tiago Cruz;Chan-Tong Lam;Paulo Simões","doi":"10.1109/COMST.2023.3305468","DOIUrl":"10.1109/COMST.2023.3305468","url":null,"abstract":"Electrical grids generate, transport, distribute and deliver electrical power to consumers through a complex Critical Infrastructure which progressively shifted from an air-gaped to a connected architecture. Specifically, Smart Substations are important parts of Smart Grids, providing switching, transforming, monitoring, metering and protection functions to offer a safe, efficient and reliable distribution of electrical power to consumers. The evolution of electrical power grids was closely followed by the digitization of all its parts and improvements in communication and computing infrastructures, leading to an evolution towards digital smart substations with improved connectivity. However, connected smart substations are exposed to cyber threats which can result in blackouts and faults which may propagate in a chain reaction and damage electrical appliances connected across the electrical grid. This work organizes and offers a comprehensive review of architectural, communications and cybersecurity standards for smart substations, complemented by a threat landscape analysis and the presentation of a Defense-in-Depth strategy blueprint. 
Furthermore, this work examines several defense mechanisms documented in the literature, existing datasets, testbeds and evaluation methodologies, identifying the most relevant open issues which may guide and inspire future research work.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 4","pages":"2456-2493"},"PeriodicalIF":35.6,"publicationDate":"2023-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10217175","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113955447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-08-15 DOI: 10.1109/COMST.2023.3305312
Xiaojie Wang;Hailin Zhu;Zhaolong Ning;Lei Guo;Yan Zhang
With the development of communication and networking technologies, the Internet of Vehicles (IoV) has become the foundation of smart transportation. The development of blockchain and Machine Learning (ML) has contributed to the pervasiveness of the IoV, and together they can effectively address the current issues of decentralisation, cyber security and data privacy in the IoV. In this article, blockchain and ML in the IoV are both reviewed, and the corresponding technologies to support blockchain intelligence in the IoV are summarized. Importantly, blockchain intelligence is proposed as the key to integrating blockchain and ML, combining the advantages of both to drive the rapid development of the IoV. We discuss general frameworks, issues, requirements and advantages for the implementation of blockchain intelligence in the IoV. Driven by its advantages, we summarize solutions for blockchain intelligence in the IoV from four aspects: reliable interaction, network security and data privacy, trustworthy environment, and scalability. Finally, a summary of current unresolved issues and challenges of blockchain intelligence in the IoV is presented, which provides guidelines for the future development of the IoV.
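The blockchain side of "blockchain intelligence" rests on tamper-evident hash chaining: each block commits to its predecessor's hash, so altering any record invalidates the chain. A minimal sketch, with hypothetical vehicle messages standing in for real IoV data:

```python
# Minimal hash-chained ledger illustrating the tamper evidence that
# blockchain contributes to IoV data sharing. Block contents here are
# hypothetical vehicle messages, not the article's actual design.
import hashlib
import json

def make_block(data, prev_hash):
    """Build a block whose hash covers both its data and its predecessor's hash."""
    body = {"data": data, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def chain_is_valid(chain):
    """Recompute every hash and check each back-link to the previous block."""
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev": block["prev"]}
        expect = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expect:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("vehicle 7: brake event", "0" * 64)]
chain.append(make_block("vehicle 9: lane change", chain[-1]["hash"]))
assert chain_is_valid(chain)
chain[0]["data"] = "vehicle 7: nothing happened"   # tamper with a record
assert not chain_is_valid(chain)                   # tampering is detected
```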
{"title":"Blockchain Intelligence for Internet of Vehicles: Challenges and Solutions","authors":"Xiaojie Wang;Hailin Zhu;Zhaolong Ning;Lei Guo;Yan Zhang","doi":"10.1109/COMST.2023.3305312","DOIUrl":"10.1109/COMST.2023.3305312","url":null,"abstract":"With the development of communication and networking technologies, the Internet of Vehicles (IoV) has become the foundation of smart transportation. The development of blockchain and Machine Learning (ML) has contributed to the pervasiveness of the IoV, and they can effectively address the current issues of decentralisation, cyber security and data privacy in the IoV. In this article, blockchain and ML in the IoV are both reviewed, and corresponding technologies to support blockchain intelligence in the IoV are summarized. Importantly, blockchain intelligence is proposed as a key to integrate blockchain and ML, combining the advantages of both to drive the rapid development of the IoV. We discuss general frameworks, issuses, requirements and advantages for the implementation of blockchain intelligence in the IoV. Driven by its advantages, we summarize solutions of blockchain intelligence in the IoV from four aspects, including reliable interaction, network security and data privacy, trustworthy environment and scalability. 
Finally, a summary of current unresolved issues and challenges of blockchain intelligence in the IoV is presented, which provides guidelines for the future development of the IoV.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 4","pages":"2325-2355"},"PeriodicalIF":35.6,"publicationDate":"2023-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129841311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-08-14 DOI: 10.1109/COMST.2023.3299519
Boubakr Nour;Makan Pourzandi;Mourad Debbabi
With the rapidly evolving technological landscape, the huge development of the Internet of Things, and the embrace of digital transformation, the world is witnessing an explosion in data generation and a rapid evolution of new applications, leading to new, wider, and more sophisticated threats that are complex and hard to detect. Advanced persistent threats use continuous, clandestine, and sophisticated techniques to gain access to a system and remain hidden for a prolonged period of time, with potentially destructive consequences. Such stealthy attacks are often not detectable by advanced intrusion detection systems (e.g., the LightBasin attack was detected in 2022 and had been active since 2016). Indeed, threat actors are able to quickly and intelligently alter their tactics to avoid detection by security defense lines (e.g., prevention and detection mechanisms). In response to these evolving threats, organizations need to adopt new proactive defense approaches. Threat hunting is a proactive security practice exercised to uncover stealthy attacks, malicious activities, and suspicious entities that could circumvent standard detection mechanisms. Additionally, threat hunting is an iterative approach to generate and revise threat hypotheses, endeavoring to provide early attack detection in a proactive way. The proactiveness consists of testing and validating the initial hypothesis using various manual and automated tools/techniques with the objective of confirming/refuting the existence of an attack. This survey studies the threat hunting concept and provides a comprehensive review of the existing solutions for enterprise networks. In particular, we provide a threat hunting taxonomy based on the technique used and a sub-classification based on the detailed approach. Furthermore, we discuss the existing standardization efforts.
Finally, we provide a qualitative discussion on current advances and identify various research gaps and challenges that may be considered by the research community to design concrete and efficient threat hunting solutions.
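The hypothesis-driven loop described above — form a hypothesis, test it against telemetry, then confirm or refute it — can be sketched minimally as follows. The event fields, thresholds, and hypothesis names are hypothetical illustrations, not from the survey:

```python
# Minimal sketch of the iterative threat-hunting loop: each hypothesis is
# a predicate evaluated over an event log; those with supporting evidence
# are confirmed, the rest are refuted and dropped.

def hunt(events, hypotheses):
    """Return the hypotheses confirmed by at least one matching event."""
    confirmed = {}
    for name, predicate in hypotheses.items():
        matches = [e for e in events if predicate(e)]
        if matches:
            confirmed[name] = matches
    return confirmed

# Hypothetical telemetry records.
events = [
    {"host": "db01", "process": "powershell", "hour": 3, "bytes_out": 9_000_000},
    {"host": "web02", "process": "nginx", "hour": 14, "bytes_out": 120_000},
]

hypotheses = {
    # "An attacker runs scripting tools outside business hours."
    "off-hours scripting": lambda e: e["process"] == "powershell"
                                     and not 8 <= e["hour"] <= 18,
    # "Compromised hosts exfiltrate unusually large volumes."
    "large egress": lambda e: e["bytes_out"] > 5_000_000,
}

findings = hunt(events, hypotheses)
# Both hypotheses are confirmed by the db01 event; a real hunt would now
# pivot on db01 and generate refined follow-up hypotheses.
```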
{"title":"A Survey on Threat Hunting in Enterprise Networks","authors":"Boubakr Nour;Makan Pourzandi;Mourad Debbabi","doi":"10.1109/COMST.2023.3299519","DOIUrl":"10.1109/COMST.2023.3299519","url":null,"abstract":"With the rapidly evolving technological landscape, the huge development of the Internet of Things, and the embracing of digital transformation, the world is witnessing an explosion in data generation and a rapid evolution of new applications that lead to new, wider, and more sophisticated threats that are complex and hard to be detected. Advanced persistence threats use continuous, clandestine, and sophisticated techniques to gain access to a system and remain hidden for a prolonged period of time, with potentially destructive consequences. Those stealthy attacks are often not detectable by advanced intrusion detection systems (e.g., LightBasin attack was detected in 2022 and has been active since 2016). Indeed, threat actors are able to quickly and intelligently alter their tactics to avoid being detected by security defense lines (e.g., prevention and detection mechanisms). In response to these evolving threats, organizations need to adopt new proactive defense approaches. Threat hunting is a proactive security line exercised to uncover stealthy attacks, malicious activities, and suspicious entities that could circumvent standard detection mechanisms. Additionally, threat hunting is an iterative approach to generate and revise threat hypotheses endeavoring to provide early attack detection in a proactive way. The proactiveness consists of testing and validating the initial hypothesis using various manual and automated tools/techniques with the objective of confirming/refuting the existence of an attack. This survey studies the threat hunting concept and provides a comprehensive review of the existing solutions for Enterprise networks. 
In particular, we provide a threat hunting taxonomy based on the used technique and a sub-classification based on the detailed approach. Furthermore, we discuss the existing standardization efforts. Finally, we provide a qualitative discussion on current advances and identify various research gaps and challenges that may be considered by the research community to design concrete and efficient threat hunting solutions.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 4","pages":"2299-2324"},"PeriodicalIF":35.6,"publicationDate":"2023-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133981110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-08-09 DOI: 10.1109/COMST.2023.3302474
Dewant Katare;Diego Perino;Jari Nurmi;Martijn Warnier;Marijn Janssen;Aaron Yi Ding
Autonomous driving services depend on active sensing from modules such as cameras, LiDAR, radar, and communication units. Traditionally, these modules process the sensed data on high-performance computing units inside the vehicle, which can deploy intelligent algorithms and AI models. These sensors can produce large volumes of data, potentially reaching up to 20 terabytes; the volume is influenced by factors such as the duration of driving, the data rate, and the sensor specifications. Consequently, this substantial amount of data can lead to significant power consumption on the vehicle. Similarly, a substantial amount of data will be exchanged between infrastructure sensors and vehicles for collaborative vehicle applications or fully connected autonomous vehicles, and this communication generates an additional surge in energy consumption. Although the autonomous vehicle domain has seen advancements in sensory technologies, wireless communication, computing and AI/ML algorithms, the challenge remains of how to apply and integrate these technology innovations to achieve energy efficiency. This survey reviews and compares connected vehicular applications, vehicular communications, approximation and Edge AI techniques, focusing on energy efficiency by covering newly proposed approximation and enabling frameworks. To the best of our knowledge, this survey is the first to review the latest approximate Edge AI frameworks and publicly available datasets in energy-efficient autonomous driving. The insights from this survey can benefit the development of collaborative driving services on low-power and memory-constrained systems and the energy optimization of autonomous vehicles.
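The terabyte-scale figures above follow directly from per-sensor data rates multiplied by drive duration. A back-of-envelope sketch, using hypothetical per-sensor rates (the values below are illustrative assumptions, not measurements from the survey):

```python
# Back-of-envelope estimate of sensed data volume, showing how drive
# duration and per-sensor data rates drive the totals mentioned above.
# The rates below are hypothetical example values.

SENSOR_RATES_MBPS = {    # megabytes per second, assumed
    "camera_array": 500.0,
    "lidar": 70.0,
    "radar": 15.0,
}

def data_volume_tb(hours, rates=SENSOR_RATES_MBPS):
    """Total data produced in terabytes (1 TB = 1e6 MB) over a drive."""
    seconds = hours * 3600
    return sum(rates.values()) * seconds / 1e6

# An 8-hour drive at these assumed rates yields roughly 16.8 TB,
# consistent in magnitude with the "up to 20 TB" figure above.
print(round(data_volume_tb(8), 1))
```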
{"title":"A Survey on Approximate Edge AI for Energy Efficient Autonomous Driving Services","authors":"Dewant Katare;Diego Perino;Jari Nurmi;Martijn Warnier;Marijn Janssen;Aaron Yi Ding","doi":"10.1109/COMST.2023.3302474","DOIUrl":"10.1109/COMST.2023.3302474","url":null,"abstract":"Autonomous driving services depends on active sensing from modules such as camera, LiDAR, radar, and communication units. Traditionally, these modules process the sensed data on high-performance computing units inside the vehicle, which can deploy intelligent algorithms and AI models. The sensors mentioned above can produce large volumes of data, potentially reaching up to 20 Terabytes. This data size is influenced by factors such as the duration of driving, the data rate, and the sensor specifications. Consequently, this substantial amount of data can lead to significant power consumption on the vehicle. Similarly, a substantial amount of data will be exchanged between infrastructure sensors and vehicles for collaborative vehicle applications or fully connected autonomous vehicles. This communication process generates an additional surge of energy consumption. Although the autonomous vehicle domain has seen advancements in sensory technologies, wireless communication, computing and AI/ML algorithms, the challenge still exists in how to apply and integrate these technology innovations to achieve energy efficiency. This survey reviews and compares the connected vehicular applications, vehicular communications, approximation and Edge AI techniques. The focus is on energy efficiency by covering newly proposed approximation and enabling frameworks. To the best of our knowledge, this survey is the first to review the latest approximate Edge AI frameworks and publicly available datasets in energy-efficient autonomous driving. 
The insights from this survey can benefit the collaborative driving service development on low-power and memory-constrained systems and the energy optimization of autonomous vehicles.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 4","pages":"2714-2754"},"PeriodicalIF":35.6,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10213996","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136029711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-08-04 DOI: 10.1109/COMST.2023.3296160
Pingyue Yue;Jianping An;Jiankang Zhang;Jia Ye;Gaofeng Pan;Shuai Wang;Pei Xiao;Lajos Hanzo
Low Earth Orbit (LEO) satellites are undergoing a period of rapid development driven by ever-increasing user demands, reduced costs, and technological progress. Since there is a lack of literature on the security and reliability issues of LEO Satellite Communication Systems (SCSs), we aim to fill this knowledge gap. Specifically, we critically appraise the inherent characteristics of LEO SCSs and elaborate on their security and reliability requirements. In light of this, we further discuss their vulnerabilities, including potential security attacks launched against them and reliability risks, followed by an outline of the associated lessons learned. Subsequently, we discuss the corresponding security and reliability enhancement solutions, unveil a range of trade-offs, and summarize the lessons gleaned. Furthermore, we shed light on several promising future research directions for enhancing the security and reliability of LEO SCSs, such as integrated sensing and communication, computer vision aided communications, as well as the challenges brought about by mega-constellations and commercialization. Finally, we summarize the lessons inferred and crystallize the take-away messages in our design guidelines.
{"title":"Low Earth Orbit Satellite Security and Reliability: Issues, Solutions, and the Road Ahead","authors":"Pingyue Yue;Jianping An;Jiankang Zhang;Jia Ye;Gaofeng Pan;Shuai Wang;Pei Xiao;Lajos Hanzo","doi":"10.1109/COMST.2023.3296160","DOIUrl":"https://doi.org/10.1109/COMST.2023.3296160","url":null,"abstract":"Low Earth Orbit (LEO) satellites undergo a period of rapid development driven by ever-increasing user demands, reduced costs, and technological progress. Since there is a lack of literature on the security and reliability issues of LEO Satellite Communication Systems (SCSs), we aim to fill this knowledge gap. Specifically, we critically appraise the inherent characteristics of LEO SCSs and elaborate on their security and reliability requirements. In light of this, we further discuss their vulnerabilities, including potential security attacks launched against them and reliability risks, followed by outlining the associated lessons learned. Subsequently, we discuss the corresponding security and reliability enhancement solutions, unveil a range of trade-offs, and summarize the lessons gleaned. Furthermore, we shed light on several promising future research directions for enhancing the security and reliability of LEO SCSs, such as integrated sensing and communication, computer vision aided communications, as well as challenges brought about by mega-constellation and commercialization. 
Finally, we summarize the lessons inferred and crystallize the take-away messages in our design guidelines.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 3","pages":"1604-1652"},"PeriodicalIF":35.6,"publicationDate":"2023-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49963707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-08-04 DOI: 10.1109/COMST.2023.3302157
Abdeldjalil Tabouche;Badis Djamaa;Mustapha Reda Senouci
Recently, mission-critical Industrial Internet of Things (IIoT) applications such as system automation, predictive maintenance, and anomaly detection have come into the spotlight of Industry 4.0 thanks to their promised benefits. The IEEE 802.15.4 Time-Slotted Channel Hopping (TSCH) mode, along with the IPv6 over TSCH (6TiSCH) initiative, are two key standards for accommodating the diverse traffic patterns, reliability, latency, and power efficiency needs of such IIoT applications. To manage the allocation of communication resources in TSCH networks, a Scheduling Function (SF) is implemented. Even though scheduling in the IIoT has been the subject of numerous reviews, the potential of taking traffic-awareness into account has not been fully investigated. Motivated by these facts, we classify and analyze, in this systematic mapping review, prominent SFs dealing with traffic-awareness in TSCH networks published between 2012 and 2022. As a result, we provide a multi-dimensional map to identify the current trends in traffic-aware TSCH scheduling and help assess how far a given proposal is supported or contradicted by the empirical evidence in the field. Consequently, we discuss some open challenges that require community attention and point out potential future research directions regarding the design, implementation, and evaluation of new traffic-aware SFs.
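A traffic-aware SF of the kind surveyed assigns TSCH cells — (timeslot, channel offset) pairs in a slotframe — in proportion to each node's traffic. A greedy toy sketch, where the slotframe dimensions and per-node packet demands are hypothetical and not tied to any specific SF from the review:

```python
# Toy traffic-aware TSCH cell allocation: give each node as many
# (timeslot, channelOffset) cells as its per-slotframe packet demand,
# serving heavier talkers first. Slotframe size (101 slots, 16 channels)
# and the demand figures are hypothetical examples.
from itertools import product

def allocate_cells(demands, slots=101, channels=16):
    """Map each node to enough dedicated cells for its packet demand."""
    free = iter(product(range(slots), range(channels)))  # (timeslot, channelOffset)
    schedule = {}
    # Heavier traffic first, so busy nodes get the earliest timeslots.
    for node, packets in sorted(demands.items(), key=lambda kv: -kv[1]):
        schedule[node] = [next(free) for _ in range(packets)]
    return schedule

# Hypothetical per-slotframe packet demands.
demands = {"sensor-A": 3, "sensor-B": 1, "actuator-C": 2}
schedule = allocate_cells(demands)
# sensor-A (heaviest) receives 3 cells, actuator-C 2, sensor-B 1, and no
# two nodes share a (timeslot, channelOffset) pair, so transmissions are
# contention-free.
```

Real SFs additionally adapt the allocation at run time as traffic changes — the dimension of traffic-awareness the review maps.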
{"title":"Traffic-Aware Reliable Scheduling in TSCH Networks for Industry 4.0: A Systematic Mapping Review","authors":"Abdeldjalil Tabouche;Badis Djamaa;Mustapha Reda Senouci","doi":"10.1109/COMST.2023.3302157","DOIUrl":"10.1109/COMST.2023.3302157","url":null,"abstract":"Recently, mission-critical Industrial Internet of Things (IIoT) applications such as system automation, predictive maintenance, and anomaly detection have come into the spotlight of Industry 4.0 thanks to the promised benefits. The IEEE 802.15.4 Time-Slotted Channel Hopping (TSCH) mode, along with the IPv6 over TSCH (6TiSCH) initiative, are two key standards to accommodate the diverse traffic patterns, reliability, latency, and power efficiency needs of such IIoT applications. To manage the allocation of communication resources in TSCH networks, a Scheduling Function (SF) is implemented. Even though scheduling in the IIoT has been the subject of numerous reviews, the potential of taking traffic-awareness into account has not been fully investigated. Motivated by these facts, we classify and analyze, in this systematic mapping review, prominent SFs dealing with traffic-awareness in TSCH networks published between 2012 and 2022. As a result, we provide a multi-dimensional map to identify the current trends in traffic-aware TSCH scheduling and help assess how far a given proposal is supported or contradicted by the empirical evidence in the field. 
Consequently, we discuss some open challenges that require community attention and point out potential future research directions regarding the design, implementation, and evaluation of new traffic-aware SFs.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 4","pages":"2834-2861"},"PeriodicalIF":35.6,"publicationDate":"2023-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126983809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-08-03 DOI: 10.1109/COMST.2023.3301820
E. Khorov;A. Krasilov;M. Susloparov;L. Kong
Emerging wireless systems aim to provide multi-Gbps data rates for each user, which can be achieved by utilizing the ultra-wide channels available at mmWave, terahertz, and lightwave frequencies. In contrast to the traditional spectrum below 6 GHz, these high-frequency bands raise many issues complicating their usage. For example, because of high signal attenuation and blockage by obstacles, the data rates in a high-frequency band may quickly vary by several orders of magnitude. This peculiarity is often a challenge for modern transport layer protocols, such as Transmission Control Protocol (TCP) or Quick UDP Internet Connections (QUIC). Their key component is the Congestion Control Algorithm (CCA), which tries to determine a data sending rate that maximizes throughput while avoiding network congestion. Many recent studies show that the performance of existing CCAs significantly degrades when mobile devices communicate over high-frequency bands, and they propose solutions to address this problem. The goal of this survey is twofold. First, we classify the reasons for poor TCP & QUIC performance in high-frequency bands. Second, we comprehensively review the solutions already designed to solve these problems. In contrast to existing studies and reviews that mainly focus on the comparison of various CCAs, we consider solutions working at different layers of the protocol stack, i.e., from the transport layer down to the physical layer, as well as cross-layer solutions. Based on the analysis, we conclude the survey with recommendations on which solutions provide the highest gains in high-frequency bands.
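The CCA's job described above — probe for bandwidth, back off on congestion — can be illustrated with a toy AIMD (additive-increase, multiplicative-decrease) loop, the scheme underlying classic TCP congestion control. The link model below, with a sudden capacity collapse mimicking mmWave blockage, is a hypothetical illustration:

```python
# Toy AIMD congestion control: grow the congestion window additively each
# round-trip, halve it on loss. A sudden capacity drop -- as when a
# mmWave link is blocked -- triggers repeated backoffs, illustrating why
# highly variable high-frequency links stress classic CCAs.

def aimd(capacities, cwnd=1.0, incr=1.0, decr=0.5):
    """Simulate a cwnd trace over per-RTT link capacities (in packets)."""
    trace = []
    for cap in capacities:
        if cwnd > cap:                      # offered load exceeds capacity: loss
            cwnd = max(1.0, cwnd * decr)    # multiplicative decrease
        else:
            cwnd += incr                    # additive increase
        trace.append(cwnd)
    return trace

# Hypothetical mmWave-like link: 40 packets/RTT, then blockage drops it to 5.
trace = aimd([40.0] * 20 + [5.0] * 10)
# cwnd climbs steadily while capacity is high, then collapses and
# sawtooths around the reduced capacity after the blockage.
```

After the drop, the window oscillates well below the old operating point, and recovering full rate once the blockage clears takes one RTT per packet of window — the slow reaction to order-of-magnitude rate swings that the surveyed solutions target.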
{"title":"Boosting TCP & QUIC Performance in mmWave, Terahertz, and Lightwave Wireless Networks: A Survey","authors":"E. Khorov;A. Krasilov;M. Susloparov;L. Kong","doi":"10.1109/COMST.2023.3301820","DOIUrl":"10.1109/COMST.2023.3301820","url":null,"abstract":"Emerging wireless systems aim to provide multi-Gbps data rates to each user, which can be achieved by utilizing the ultra-wide channels available at mmWave, terahertz, and lightwave frequencies. In contrast to the traditional spectrum below 6 GHz, these high-frequency bands raise many issues that complicate their use. For example, because of high signal attenuation and blockage by obstacles, the data rate in a high-frequency band may quickly vary by several orders of magnitude. This peculiarity poses a challenge for modern transport-layer protocols, such as the Transmission Control Protocol (TCP) or Quick UDP Internet Connections (QUIC). Their key component is the Congestion Control Algorithm (CCA), which tries to determine a data sending rate that maximizes throughput while avoiding network congestion. Many recent studies show that the performance of existing CCAs degrades significantly when mobile devices communicate over high-frequency bands, and propose solutions to address this problem. The goal of this survey is twofold. First, we classify the reasons for poor TCP & QUIC performance in high-frequency bands. Second, we comprehensively review the solutions already designed to solve these problems. In contrast to existing studies and reviews, which mainly compare various CCAs, we consider solutions working at different layers of the protocol stack, from the transport layer down to the physical layer, as well as cross-layer solutions. 
Based on the analysis, we conclude the survey with recommendations on which solutions provide the highest gains in high-frequency bands.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 4","pages":"2862-2891"},"PeriodicalIF":35.6,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132056102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
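The abstract's central object, a CCA searching for a sending rate, can be illustrated with the textbook additive-increase/multiplicative-decrease (AIMD) rule that underlies TCP-style congestion control. The simulation below is an illustrative sketch, not code from the survey: the loss model, constants, and function names are all assumptions. It shows why an abrupt mmWave-style capacity drop, followed by recovery, forces many round trips before the sender regains a high rate.

```python
# Minimal AIMD (additive-increase/multiplicative-decrease) sketch.
# All names and constants here are illustrative, not from the surveyed
# protocols or papers.

def aimd_step(cwnd: float, loss: bool,
              incr: float = 1.0, decr: float = 0.5) -> float:
    """One round-trip update of the congestion window (in segments)."""
    return max(1.0, cwnd * decr) if loss else cwnd + incr

def simulate(capacity_per_rtt, cwnd=1.0):
    """Drive AIMD against a time-varying link capacity (segments per RTT).

    A loss is assumed whenever the window exceeds the instantaneous
    capacity -- a crude stand-in for buffer overflow after, e.g., a
    mmWave blockage event.
    """
    trace = []
    for cap in capacity_per_rtt:
        loss = cwnd > cap
        cwnd = aimd_step(cwnd, loss)
        trace.append(cwnd)
    return trace

# Link capacity collapses by an order of magnitude mid-flow (blockage),
# then recovers; AIMD needs many RTTs to regain the high rate, since it
# re-grows the window by only one segment per round trip.
capacity = [100.0] * 20 + [10.0] * 10 + [100.0] * 30
trace = simulate(capacity)
```

With these made-up numbers, the window climbs additively while capacity is high, oscillates under the blockage, and after the blockage ends spends the remaining 30 round trips still well below the link capacity, which is exactly the slow-recovery effect the survey attributes to existing CCAs in high-frequency bands.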
Sixth-generation (6G) wireless systems are envisioned to enable the paradigm shift from “connected things” to “connected intelligence,” featuring ultra-high density, large scale, dynamic heterogeneity, diversified functional requirements, and machine learning capabilities, which leads to a growing need for highly efficient intelligent algorithms. Classic optimization-based algorithms usually require a highly precise mathematical model of the data links and suffer from poor performance and high computational cost in realistic 6G applications. Building on domain knowledge (e.g., optimization models and theoretical tools), machine learning (ML) stands out as a promising and viable methodology for many complex large-scale optimization problems in 6G, owing to its superior performance, computational efficiency, scalability, and generalizability. In this paper, we systematically review the most representative “learning to optimize” techniques across diverse domains of 6G wireless networks by identifying the inherent features of the underlying optimization problems and investigating the specifically designed ML frameworks from the perspective of optimization. In particular, we cover algorithm unrolling, learning to branch-and-bound, graph neural networks for structured optimization, deep reinforcement learning for stochastic optimization, end-to-end learning for semantic optimization, and wireless federated learning for distributed optimization, which are capable of addressing challenging large-scale problems arising from a variety of crucial wireless applications. Through in-depth discussion, we shed light on the performance of ML-based optimization algorithms relative to classical methods and provide insightful guidance for developing advanced ML techniques in 6G networks. Neural network design, theoretical tools for different ML methods, implementation issues, and challenges and future research directions are also discussed to support the practical use of ML models in 6G wireless networks.
{"title":"Machine Learning for Large-Scale Optimization in 6G Wireless Networks","authors":"Yandong Shi;Lixiang Lian;Yuanming Shi;Zixin Wang;Yong Zhou;Liqun Fu;Lin Bai;Jun Zhang;Wei Zhang","doi":"10.1109/COMST.2023.3300664","DOIUrl":"10.1109/COMST.2023.3300664","url":null,"abstract":"Sixth-generation (6G) wireless systems are envisioned to enable the paradigm shift from “connected things” to “connected intelligence,” featuring ultra-high density, large scale, dynamic heterogeneity, diversified functional requirements, and machine learning capabilities, which leads to a growing need for highly efficient intelligent algorithms. Classic optimization-based algorithms usually require a highly precise mathematical model of the data links and suffer from poor performance and high computational cost in realistic 6G applications. Building on domain knowledge (e.g., optimization models and theoretical tools), machine learning (ML) stands out as a promising and viable methodology for many complex large-scale optimization problems in 6G, owing to its superior performance, computational efficiency, scalability, and generalizability. In this paper, we systematically review the most representative “learning to optimize” techniques across diverse domains of 6G wireless networks by identifying the inherent features of the underlying optimization problems and investigating the specifically designed ML frameworks from the perspective of optimization. In particular, we cover algorithm unrolling, learning to branch-and-bound, graph neural networks for structured optimization, deep reinforcement learning for stochastic optimization, end-to-end learning for semantic optimization, and wireless federated learning for distributed optimization, which are capable of addressing challenging large-scale problems arising from a variety of crucial wireless applications. 
Through in-depth discussion, we shed light on the performance of ML-based optimization algorithms relative to classical methods and provide insightful guidance for developing advanced ML techniques in 6G networks. Neural network design, theoretical tools for different ML methods, implementation issues, and challenges and future research directions are also discussed to support the practical use of ML models in 6G wireless networks.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 4","pages":"2088-2132"},"PeriodicalIF":35.6,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135182117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
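Of the "learning to optimize" techniques the abstract lists, algorithm unrolling is the easiest to sketch: each iteration of a classical solver becomes one "layer" of a network, and the solver's hyperparameters become the layer's learnable parameters. The NumPy fragment below is a hedged illustration of that idea for a least-squares problem, not code from the paper; the function names, the problem sizes, and the fixed step sizes are all invented for the example.

```python
import numpy as np

# Algorithm unrolling, sketched in plain NumPy.  Each iteration of
# gradient descent on  min_x 0.5*||Ax - b||^2  becomes one "layer";
# the per-layer step sizes `alphas` are the parameters a learning
# procedure would tune (here they are simply fixed).

def unrolled_gd(A, b, alphas):
    """Run len(alphas) unrolled gradient steps starting from x = 0."""
    x = np.zeros(A.shape[1])
    for alpha in alphas:            # one layer per unrolled iteration
        grad = A.T @ (A @ x - b)    # gradient of 0.5*||Ax - b||^2
        x = x - alpha * grad
    return x

# A small synthetic instance with a known solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
b = A @ x_true

# Ten layers with a conservative shared step size already shrink the
# residual; a trained unrolled network would learn per-layer steps
# (and possibly richer per-layer operators) to do far better in the
# same fixed depth.
x_hat = unrolled_gd(A, b, alphas=[0.01] * 10)
```

The design point this illustrates is the one the survey emphasizes: because the depth is fixed at training time, an unrolled solver has a predictable, bounded inference cost, unlike a classical iterative method run to convergence.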