A Survey on DRX Mechanism: Device Power Saving From LTE and 5G New Radio to 6G Communication Systems
Pub Date: 2022-10-28, DOI: 10.1109/COMST.2022.3217854 (IEEE Communications Surveys and Tutorials, vol. 25, no. 1, pp. 156-183)
Kuang-Hsun Lin;He-Hsuan Liu;Kai-Hsin Hu;An Huang;Hung-Yu Wei
Discontinuous Reception (DRX) is the most effective timer-based mechanism for User Equipment (UE) power saving. In Long Term Evolution (LTE) systems, the introduction of the DRX mechanism greatly extended UE battery life. With DRX, a UE is allowed to enter a dormant state; given a DRX cycle, the UE wakes up periodically during dormancy to check whether new downlink packets have arrived. By skipping most channel monitoring occasions, the UE can achieve a high sleep ratio. As mobile networks evolved to 5G, battery life requirements grew to support various new services, so the 3rd Generation Partnership Project (3GPP) enhanced the DRX mechanism and added new DRX-related features in the New Radio (NR) Release 16 standard. In addition to the timer-based design, 3GPP proposed two signaling-based mechanisms: the power saving signal and UE assistance information. This survey introduces the latest DRX mechanism in the 3GPP NR standard and summarizes the state-of-the-art research. Researchers have investigated the DRX mechanism in various use cases, such as Web browsing services and heterogeneous networks; they focus on the UE sleep ratio and packet delay and propose corresponding analytical models. New DRX architectures are also discussed to address the power-saving problem in specific scenarios, especially in 5G NR networks. This paper categorizes and presents the literature in detail according to target services and network scenarios. We also survey work on the new challenges in NR networks (such as beamforming and thermal issues) and introduce future research directions for the 6G era.
AI-Driven Packet Forwarding With Programmable Data Plane: A Survey
Pub Date: 2022-10-27, DOI: 10.1109/COMST.2022.3217613 (IEEE Communications Surveys and Tutorials, vol. 25, no. 1, pp. 762-790)
Wei Quan;Ziheng Xu;Mingyuan Liu;Nan Cheng;Gang Liu;Deyun Gao;Hongke Zhang;Xuemin Shen;Weihua Zhuang
Existing packet forwarding technology cannot meet the increasing requirements of Internet development due to its rigid framework. Applying artificial intelligence (AI) to achieve efficient packet forwarding is gaining more and more interest as a new direction. Recently, the explosive development of the programmable data plane (PDP) has provided a potential impetus for AI-driven packet forwarding. This paper therefore surveys recent research on AI-driven packet forwarding with the PDP. First, we describe the two most representative packet-forwarding frameworks, i.e., the traditional AI-driven forwarding framework and the new PDP-assisted one. Then, we examine packet-forwarding capability under the two frameworks along four measures: delay, throughput, security, and reliability. For each measure, we organize the content following the evolution from simple packet forwarding, to forwarding enhanced with the assistance of AI, to the latest research on AI-driven packet forwarding supported by the PDP. Finally, we identify three directions in the development of AI-driven packet forwarding and highlight the challenges and issues for future research.
A Survey on Intent-Based Networking
Pub Date: 2022-10-20, DOI: 10.1109/COMST.2022.3215919 (IEEE Communications Surveys and Tutorials, vol. 25, no. 1, pp. 625-655)
Aris Leivadeas;Matthias Falkner
Current and future network services and applications are expected to revolutionize our society and lifestyle. At the same time, the abundant possibilities that new network technologies offer to end users, network operators, and administrators have made network configuration a cumbersome process that must accommodate many different stakeholders and applications. Thus, there is a growing need to simplify the management and configuration of the network, ideally in an autonomic and automatic way. Intent-Based Networking (IBN) is such a paradigm, envisioning flexible, agile, and simplified network configuration with minimal external intervention. This paper provides a detailed survey of how the IBN concept works and of the main components required to guarantee a fully autonomous IBN system (IBNS). Particular emphasis is given to the intent expression, intent translation, intent resolution, intent activation, and intent assurance components, which form the closed-loop automation system of an IBNS. The survey concludes by identifying open challenges and future directions.
{"title":"A Survey on Intent-Based Networking","authors":"Aris Leivadeas;Matthias Falkner","doi":"10.1109/COMST.2022.3215919","DOIUrl":"https://doi.org/10.1109/COMST.2022.3215919","url":null,"abstract":"Current and future network services and applications are expected to revolutionize our society and lifestyle. At the same time, the abundant possibilities that new network technologies offer to end users, network operators and administrators have created a cumbersome network configuration process to accommodate all different stakeholders and applications. Thus, lately, there is a need to simplify the management and configuration of the network, through possibly an autonomic and automatic way. Intent Based Networking (IBN) is such a paradigm that envisions flexible, agile, and simplified network configuration with minimal external intervention. This paper provides a detailed survey of how the IBN concept works and what are the main components to guarantee a fully autonomous IBN system (IBNS). Particular emphasis is given on the intent expression, intent translation, intent resolution, intent activation and intent assurance components, which form the closed loop automation system of an IBNS. The survey concludes with identifying open challenges and future directions of the problem at hand.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 1","pages":"625-655"},"PeriodicalIF":35.6,"publicationDate":"2022-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49963227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Survey on In-Network Computing: Programmable Data Plane and Technology Specific Applications
Pub Date: 2022-10-14, DOI: 10.1109/COMST.2022.3213237 (IEEE Communications Surveys and Tutorials, vol. 25, no. 1, pp. 701-761)
Somayeh Kianpisheh;Tarik Taleb
Compared with cloud computing, edge computing offers processing at locations closer to end devices and reduces user-experienced latency. The recent paradigm of in-network computing goes further, employing programmable network elements to compute on the path, before traffic reaches the edge or cloud servers; it advances conventional edge/cloud server-based computing by offering line-rate processing even closer to the end devices. This paper discusses use cases, enabling technologies, and protocols for in-network computing. According to our study, with the programmable data plane as an enabling technology, potential in-network computing applications include in-network analytics, in-network caching, in-network security, and in-network coordination. There are also technology-specific applications of in-network computing within the scopes of cloud computing, edge computing, 5G/6G, and NFV. In this survey, the state of the art is reviewed within the framework of the proposed categorization. Furthermore, the methods are compared in terms of a set of proposed criteria covering methodology, main results, and application-specific aspects. Finally, we discuss lessons learned and highlight some potential research directions.
{"title":"A Survey on In-Network Computing: Programmable Data Plane and Technology Specific Applications","authors":"Somayeh Kianpisheh;Tarik Taleb","doi":"10.1109/COMST.2022.3213237","DOIUrl":"https://doi.org/10.1109/COMST.2022.3213237","url":null,"abstract":"In comparison with cloud computing, edge computing offers processing at locations closer to end devices and reduces the user experienced latency. The new recent paradigm of in-network computing employs programmable network elements to compute on the path and prior to traffic reaching the edge or cloud servers. It advances common edge/cloud server based computing through proposing line rate processing capabilities at closer locations to the end devices. This paper discusses use cases, enabler technologies and protocols for in-network computing. According to our study, considering programmable data plane as an enabler technology, potential in-network computing applications are in-network analytics, in-network caching, in-network security, and in-network coordination. There are also technology specific applications of in-network computing in the scopes of cloud computing, edge computing, 5G/6G, and NFV. In this survey, the state of the art, in the framework of the proposed categorization, is reviewed. Furthermore, comparisons are provided in terms of a set of proposed criteria which assess the methods from the aspects of methodology, main results, as well as application-specific criteria. Finally, we discuss lessons learned and highlight some potential research directions.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 1","pages":"701-761"},"PeriodicalIF":35.6,"publicationDate":"2022-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/9739/10051138/09919270.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49963229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zero Touch Management: A Survey of Network Automation Solutions for 5G and 6G Networks
Pub Date: 2022-10-06, DOI: 10.1109/COMST.2022.3212586 (IEEE Communications Surveys and Tutorials, vol. 24, no. 4, pp. 2535-2578)
Estefanía Coronado;Rasoul Behravesh;Tejas Subramanya;Adriana Fernàndez-Fernàndez;Muhammad Shuaib Siddiqui;Xavier Costa-Pérez;Roberto Riggio
Mobile networks are facing an unprecedented demand for high-speed connectivity originating from novel mobile applications and services and, in general, from the adoption curve of mobile devices. However, coping with the service requirements imposed by current and future applications and services is very difficult, since mobile networks are becoming progressively more heterogeneous and more complex. In this context, a promising approach is the adoption of novel network automation solutions and, in particular, of zero-touch management techniques. In this work, we refer to zero-touch management as a fully autonomous network management solution with human oversight. This survey sits at the crossroads of zero-touch management and mobile and wireless network research, effectively bridging a literature-review gap between the two domains. In this paper, we first provide a taxonomy of network management solutions. We then discuss the relevant state of the art on autonomous mobile networks. The concept of zero-touch management and the associated standardization efforts are then introduced. The survey continues with a review of the most important technological enablers for zero-touch management. The network automation solutions from the RAN to the core network, including end-to-end aspects such as security, are then surveyed. Finally, we close this article with the current challenges and research directions.
A Survey on Mobility of Edge Computing Networks in IoT: State-of-the-Art, Architectures, and Challenges
Pub Date: 2022-10-04, DOI: 10.1109/COMST.2022.3211462 (IEEE Communications Surveys and Tutorials, vol. 24, no. 4, pp. 2329-2365)
Forough Shirin Abkenar;Parisa Ramezani;Saeid Iranmanesh;Sarumathi Murali;Donpiti Chulerttiyawong;Xinyu Wan;Abbas Jamalipour;Raad Raad
Edge computing leverages computing resources closer to end users at the edge of the network, rather than distant cloud servers as in the centralized IoT architecture. Edge computing nodes (ECNs) experience less transmission latency and usually save energy, while network overheads are mitigated. ECNs can be fixed or mobile; this survey focuses on mobile ECNs. This paper presents a comprehensive survey of mobile ECNs and identifies open research questions. In particular, mobile ECNs are classified into four categories, namely aerial, ground vehicular, spatial, and maritime nodes. For each group, the basic terms shared across the state of the art are described, the different types of nodes employed in the group are reviewed, the general network architecture is introduced, the existing methods and algorithms are studied, and the challenges the group is grappling with are explored. Moreover, integrated architectures are surveyed, wherein two different categories of the aforementioned nodes jointly play the role of ECNs in the network. Finally, the research gaps that are yet to be filled in the area of mobile ECNs are discussed, along with directions for future research and investigation in this promising area.
WiFi Sensing on the Edge: Signal Processing Techniques and Challenges for Real-World Systems
Pub Date: 2022-09-23, DOI: 10.1109/COMST.2022.3209144 (IEEE Communications Surveys and Tutorials, vol. 25, no. 1, pp. 46-76)
Steven M. Hernandez;Eyuphan Bulut
In this work, we evaluate the feasibility of deploying ubiquitous WiFi sensing systems at the edge, consider the applicability of existing techniques on constrained edge devices, and examine what challenges still exist for deploying WiFi sensing devices outside of laboratory environments. Through an extensive survey of the existing literature on WiFi sensing, we identify common signal processing techniques and evaluate their applicability to online edge systems. Based on these techniques, we develop a topology of components required for a low-cost WiFi sensing system, and we build such a system using ESP32 IoT microcontroller edge devices. We perform numerous real-world WiFi sensing experiments to thoroughly evaluate machine learning prediction accuracy, using Tree-structured Parzen Estimator (TPE) hyperparameter optimization to independently identify optimal hyperparameters for each method. Additionally, we evaluate our system directly on board the ESP32 with respect to per-method computation time and overall sample throughput rate. Through this evaluation, we demonstrate how an edge WiFi sensing system enables online machine learning through on-device inference and can thus be used for ubiquitous WiFi sensing deployments.
{"title":"WiFi Sensing on the Edge: Signal Processing Techniques and Challenges for Real-World Systems","authors":"Steven M. Hernandez;Eyuphan Bulut","doi":"10.1109/COMST.2022.3209144","DOIUrl":"https://doi.org/10.1109/COMST.2022.3209144","url":null,"abstract":"In this work, we evaluate the feasibility of deploying ubiquitous WiFi sensing systems at the edge and consider the applicability of existing techniques on constrained edge devices and what challenges still exist for deploying WiFi sensing devices outside of laboratory environments. Through an extensive survey of existing literature in the area of WiFi sensing, we discover common signal processing techniques and evaluate the applicability of these techniques for online edge systems. Based on these techniques, we develop a topology of components required for a low-cost WiFi sensing system and develop a low-cost WiFi sensing system using ESP32 IoT microcontroller edge devices. We perform numerous real world WiFi sensing experiments to thoroughly evaluate machine learning prediction accuracy by performing Tree-structured Parzen Estimator (TPE) hyperparameter optimization to independently identify optimal hyperparameters for each method. Additionally, we evaluate our system directly on-board the ESP32 with respect to computation time per method and overall sample throughput rate. Through this evaluation, we demonstrate how an edge WiFi sensing system enables online machine learning through the use of on-device inference and thus can be used for ubiquitous WiFi sensing system deployments.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"25 1","pages":"46-76"},"PeriodicalIF":35.6,"publicationDate":"2022-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49931813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital Twins: A Survey on Enabling Technologies, Challenges, Trends and Future Prospects
Pub Date: 2022-09-22, DOI: 10.1109/COMST.2022.3208773 (IEEE Communications Surveys and Tutorials, vol. 24, no. 4, pp. 2255-2291)
Stefan Mihai;Mahnoor Yaqoob;Dang V. Hung;William Davis;Praveer Towakel;Mohsin Raza;Mehmet Karamanoglu;Balbir Barn;Dattaprasad Shetve;Raja V. Prasad;Hrishikesh Venkataraman;Ramona Trestian;Huan X. Nguyen
Digital Twin (DT) is an emerging technology surrounded by many promises, with the potential to reshape the future of industries and society overall. A DT is a system of systems that goes far beyond traditional computer-based simulation and analysis: it is a replication of all the elements, processes, dynamics, and firmware of a physical system in a digital counterpart. The two systems (physical and digital) exist side by side, sharing all inputs and operations through real-time data communications and information transfer. With the incorporation of the Internet of Things (IoT), Artificial Intelligence (AI), 3D models, next-generation mobile communications (5G/6G), Augmented Reality (AR), Virtual Reality (VR), distributed computing, Transfer Learning (TL), and electronic sensors, the digital/virtual counterpart of the real-world system is able to provide seamless monitoring, analysis, evaluation, and prediction. The DT offers a platform for testing and analysing complex systems in ways that would be impossible with traditional simulations and modular evaluations. However, the development of this technology faces many challenges, including the complexity of effective communication and data accumulation, the unavailability of data to train Machine Learning (ML) models, the lack of processing power to support high-fidelity twins, the strong need for interdisciplinary collaboration, and the absence of standardized development methodologies and validation measures. Being in the early stages of development, DTs also lack sufficient documentation. In this context, this survey paper covers the important aspects of realizing the technology. The key enabling technologies, challenges, and prospects of DTs are highlighted. The paper provides a deep insight into the technology; lists design goals and objectives; highlights design challenges and limitations across industries; discusses research and commercial developments; presents applications and use cases; offers case studies in industry, infrastructure, and healthcare; lists the main service providers and stakeholders; and covers developments to date, as well as viable research directions for future work on DTs.
{"title":"Digital Twins: A Survey on Enabling Technologies, Challenges, Trends and Future Prospects","authors":"Stefan Mihai;Mahnoor Yaqoob;Dang V. Hung;William Davis;Praveer Towakel;Mohsin Raza;Mehmet Karamanoglu;Balbir Barn;Dattaprasad Shetve;Raja V. Prasad;Hrishikesh Venkataraman;Ramona Trestian;Huan X. Nguyen","doi":"10.1109/COMST.2022.3208773","DOIUrl":"https://doi.org/10.1109/COMST.2022.3208773","url":null,"abstract":"Digital Twin (DT) is an emerging technology surrounded by many promises, and potentials to reshape the future of industries and society overall. A DT is a system-of-systems which goes far beyond the traditional computer-based simulations and analysis. It is a replication of all the elements, processes, dynamics, and firmware of a physical system into a digital counterpart. The two systems (physical and digital) exist side by side, sharing all the inputs and operations using real-time data communications and information transfer. With the incorporation of Internet of Things (IoT), Artificial Intelligence (AI), 3D models, next generation mobile communications (5G/6G), Augmented Reality (AR), Virtual Reality (VR), distributed computing, Transfer Learning (TL), and electronic sensors, the digital/virtual counterpart of the real-world system is able to provide seamless monitoring, analysis, evaluation and predictions. The DT offers a platform for the testing and analysing of complex systems, which would be impossible in traditional simulations and modular evaluations. However, the development of this technology faces many challenges including the complexities in effective communication and data accumulation, data unavailability to train Machine Learning (ML) models, lack of processing power to support high fidelity twins, the high need for interdisciplinary collaboration, and the absence of standardized development methodologies and validation measures. Being in the early stages of development, DTs lack sufficient documentation. In this context, this survey paper aims to cover the important aspects in realization of the technology. The key enabling technologies, challenges and prospects of DTs are highlighted. The paper provides a deep insight into the technology, lists design goals and objectives, highlights design challenges and limitations across industries, discusses research and commercial developments, provides its applications and use cases, offers case studies in industry, infrastructure and healthcare, lists main service providers and stakeholders, and covers developments to date, as well as viable research dimensions for future developments in DTs.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"24 4","pages":"2255-2291"},"PeriodicalIF":35.6,"publicationDate":"2022-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49985929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine Learning-Powered Encrypted Network Traffic Analysis: A Comprehensive Survey
Pub Date: 2022-09-20, DOI: 10.1109/COMST.2022.3208196 (IEEE Communications Surveys and Tutorials, vol. 25, no. 1, pp. 791-824)
Meng Shen;Ke Ye;Xingtong Liu;Liehuang Zhu;Jiawen Kang;Shui Yu;Qi Li;Ke Xu
Traffic analysis is the process of monitoring network activities, discovering specific patterns, and gleaning valuable information from network traffic. It can be applied in various fields such as network asset probing and anomaly detection. With the advent of network traffic encryption, however, traffic analysis becomes an arduous task. Because packet payloads are invisible, traditional traffic analysis methods that rely on extracting valuable information from plaintext payloads are likely to lose efficacy. Machine learning has emerged as a powerful tool for extracting informative features without access to the payload, and is thus widely employed in encrypted traffic analysis. In this paper, we present a comprehensive survey of recent achievements in machine learning-powered encrypted traffic analysis. To begin with, we review the literature in this area and summarize the analysis goals that serve as the basis for literature classification. Then, we abstract the workflow of encrypted traffic analysis with machine learning tools, covering traffic collection, traffic representation, traffic analysis methods, and performance evaluation. Across the surveyed studies, the requirements on classification granularity and information timeliness vary considerably with the analysis goal. Hence, in terms of the goal of traffic analysis, we present a comprehensive review of existing studies in four categories: network asset identification, network characterization, privacy leakage detection, and anomaly detection. Finally, we discuss the challenges and directions for future research on encrypted traffic analysis.
A Comprehensive Survey on Radio Resource Management in 5G HetNets: Current Solutions, Future Trends and Open Issues
Pub Date: 2022-09-20, DOI: 10.1109/COMST.2022.3207967 (IEEE Communications Surveys and Tutorials, vol. 24, no. 4, pp. 2495-2534)
Bharat Agarwal;Mohammed Amine Togou;Marco Marco;Gabriel-Miro Muntean
5G network technologies are intended to accommodate innovative services carrying a large influx of data traffic, with lower energy consumption and increased quality-of-service and user quality-of-experience levels. To meet 5G expectations, heterogeneous networks (HetNets) have been introduced: additional low-power nodes are deployed within the coverage area of conventional high-power nodes and placed closer to users, forming underlay HetNets. Due to the increased density of small-cell networks and radio access technologies, radio resource management (RRM) for prospective 5G HetNets has emerged as a critical research avenue. It plays a pivotal role in enhancing spectrum utilization, load balancing, and network energy efficiency. In this paper, we summarize the key challenges emerging in 5G HetNets, i.e., cross-tier interference, co-tier interference, and user association, resource allocation, and power allocation (UA-RA-PA), and highlight their significance. In addition, we present a comprehensive survey of RRM schemes based on interference management (IM), UA-RA-PA, and combined approaches (UA-RA-PA + IM). We introduce a taxonomy for the individual (IM, UA-RA-PA) and combined approaches as a framework for systematically studying the existing schemes. These schemes are also qualitatively analyzed and compared with each other. Finally, challenges and opportunities for RRM in 5G are outlined, and design guidelines along with possible solutions for advanced mechanisms are presented.