Multicasting in wireless access networks is a functionality that, by leveraging group communications, turns out to be essential for reducing the amount of resources needed to serve users requesting the same content. The support of this functionality in the modern 5G New Radio (NR) and future sub-Terahertz (sub-THz) 6G systems faces critical challenges related to the utilization of massive antenna arrays forming directional radiation patterns, multi-beam functionality, and use of multiple Radio Access Technologies (RATs) having distinctively different coverage and technological specifics. As a result, optimal multicasting in these systems requires novel solutions. This article aims to provide an exhaustive treatment of performance optimization methods for 5G/6G mmWave/sub-THz systems and discuss the associated challenges and opportunities. We start by surveying 3rd Generation Partnership Project (3GPP) mechanisms to support multicasting at the NR radio interface and approaches to modeling the 5G/6G radio segment. Then, we illustrate optimal multicast solutions for different 5G NR deployments and antenna patterns, including single- and multi-beam antenna arrays and single- and multiple-RAT deployments. Further, we survey new advanced functionalities for improving multicasting performance in 5G/6G systems, encompassing Reflective Intelligent Surfaces (RISs), NR-sidelink technology, and mobile edge enhancements, among many others. Finally, we outline perspectives of multicasting in future 6G networks.
Title: "Models, Methods, and Solutions for Multicasting in 5G/6G mmWave and Sub-THz Systems"
Authors: Nadezhda Chukhno; Olga Chukhno; Dmitri Moltchanov; Sara Pizzi; Anna Gaydamaka; Andrey Samuylov; Antonella Molinaro; Yevgeni Koucheryavy; Antonio Iera; Giuseppe Araniti
DOI: 10.1109/COMST.2023.3319354 | Pub Date: 2023-09-26 | Pages: 119-159
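The resource saving that motivates multicasting can be sketched with a toy airtime comparison (all rates, sizes, and variable names below are invented for illustration, not taken from the paper):

```python
# Compare the airtime needed to deliver the same content to a group of users
# via per-user unicast versus a single multicast transmission.
user_se = [4.0, 2.0, 1.0, 3.0]   # hypothetical per-user spectral efficiency (bit/s/Hz)
content_bits = 1e6               # same content requested by every user
bandwidth_hz = 1e6

# Unicast: each user is served separately at its own link rate.
unicast_time = sum(content_bits / (se * bandwidth_hz) for se in user_se)

# Conventional multicast: one transmission, rate limited by the worst user.
multicast_time = content_bits / (min(user_se) * bandwidth_hz)

print(f"unicast airtime:   {unicast_time:.3f} s")
print(f"multicast airtime: {multicast_time:.3f} s")
```

The multicast transmission is rate-limited by the worst user in the group, which is precisely why user grouping, beam configuration, and MCS selection become the optimization levers in directional mmWave/sub-THz deployments.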
Fully leveraging the huge volume of data generated on a large number of user devices to provide intelligent services in the 6G network calls for Ubiquitous Intelligence (UI). A key to developing UI lies in the involvement of the large number of network devices, which contribute their data to collaborative Machine Learning (ML) and provide their computational resources to support the learning process. Federated Learning (FL) is a new ML method that enables data owners to collaborate in model training without exposing private data, which allows user devices to contribute their data to developing UI. Edge computing deploys cloud-like capabilities at the network edge, which enables network devices to offer their computational resources for supporting FL. Therefore, a combination of FL and edge computing may greatly facilitate the development of ubiquitous intelligence in the 6G network. In this article, we present a comprehensive survey of the recent developments in technologies for combining FL and edge computing with a holistic vision across the fields of FL and edge computing. We conduct our survey from both the perspective of an FL framework deployed in an edge computing environment (FL in Edge) and the perspective of an edge computing system providing a platform for FL (Edge for FL). From the FL in Edge perspective, we first identify the main challenges to FL in edge computing and then survey the representative technical strategies for addressing the challenges. From the Edge for FL perspective, we first analyze the key requirements for edge computing to support FL and then review the recent advances in edge computing technologies that may be exploited to meet the requirements. Then we discuss open problems and identify some possible directions for future research on combining FL and edge computing, with the hope of arousing the research community’s interest in this emerging and exciting interdisciplinary field.
Title: "Combining Federated Learning and Edge Computing Toward Ubiquitous Intelligence in 6G Network: Challenges, Recent Advances, and Future Directions"
Authors: Qiang Duan; Jun Huang; Shijing Hu; Ruijun Deng; Zhihui Lu; Shui Yu
DOI: 10.1109/COMST.2023.3316615 | Pub Date: 2023-09-22 | Pages: 2892-2950
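The basic FL loop this survey builds on can be illustrated with a minimal FedAvg-style aggregation sketch (function and variable names are our own, not the survey's notation):

```python
# Minimal federated-averaging sketch: the server aggregates client models
# weighted by local dataset size, without ever seeing the raw data.
def fed_avg(client_weights, client_sizes):
    """Weighted average of flat client weight vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two edge clients with different data volumes (illustrative numbers).
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [100, 300]
global_model = fed_avg(clients, sizes)
print(global_model)  # pulled toward the client with more data
```

In an edge deployment, the round trip of this aggregation (model download, local training, update upload) is exactly where the communication and heterogeneity challenges surveyed under "FL in Edge" arise.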
Full-duplex (FD) communication is a potential game changer for future wireless networks. It allows for simultaneous transmit and receive operations over the same frequency band, potentially doubling the spectral efficiency. FD can also be a catalyst for supercharging other existing/emerging wireless technologies, including cooperative and cognitive communications, cellular networks, multiple-input multiple-output (MIMO), massive MIMO, non-orthogonal multiple access (NOMA), millimeter-wave (mmWave) communications, unmanned aerial vehicle (UAV)-aided communication, backscatter communication (BackCom), and reconfigurable intelligent surfaces (RISs). These integrated technologies can further improve spectral efficiency, enhance security, reduce latency, and boost the energy efficiency of future wireless networks. A comprehensive survey of such integration has thus far been lacking. This paper fills that need. Specifically, we first discuss the fundamentals, highlighting the FD transceiver structure and the self-interference (SI) cancellation techniques. Next, we discuss the coexistence of FD with the above-mentioned wireless technologies. We also provide case studies for some of the integration scenarios mentioned above and future research directions for each case. We further address the potential research directions, open challenges, and applications for future FD-assisted wireless, including cell-free massive MIMO, mmWave communications, UAV, BackCom, and RISs. Finally, potential applications and developments of other miscellaneous technologies, such as mixed radio-frequency/free-space optical, visible light communication, dual-functional radar-communication, underwater wireless communication, multi-user ultra-reliable low-latency communications, vehicle-to-everything communications, rate splitting multiple access, integrated sensing and communication, and age of information, are also highlighted.
Title: "A Comprehensive Survey on Full-Duplex Communication: Current Solutions, Future Trends, and Open Issues"
Authors: Mohammadali Mohammadi; Zahra Mobini; Diluka Galappaththige; Chintha Tellambura
DOI: 10.1109/COMST.2023.3318198 | Pub Date: 2023-09-22 | Pages: 2190-2244
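The claimed (near-)doubling of spectral efficiency, and its sensitivity to residual self-interference, can be illustrated with a toy sum-rate calculation (all SNR/SI figures are assumptions):

```python
import math

# Toy half-duplex vs full-duplex spectral-efficiency comparison under
# residual self-interference (SI); numbers are illustrative only.
def se_half_duplex(snr_db):
    # Two directions share the time/frequency resources, so the sum
    # spectral efficiency is a single log term.
    return math.log2(1 + 10 ** (snr_db / 10))

def se_full_duplex(snr_db, residual_si_db):
    snr = 10 ** (snr_db / 10)
    residual_si = 10 ** (residual_si_db / 10)  # residual SI over the noise floor
    sinr = snr / (1 + residual_si)
    return 2 * math.log2(1 + sinr)             # both directions simultaneously

hd = se_half_duplex(20)
fd_residual = se_full_duplex(20, 0)     # imperfect SI cancellation
fd_perfect = se_full_duplex(20, -100)   # near-ideal SI cancellation
print(f"HD: {hd:.2f}, FD w/ residual SI: {fd_residual:.2f}, FD ideal: {fd_perfect:.2f} bit/s/Hz")
```

With ideal SI cancellation the FD sum rate approaches twice the HD rate; any residual SI eats into the gain, which is why the transceiver structure and SI cancellation techniques are treated first in the survey.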
Pub Date: 2023-09-20 | DOI: 10.1109/COMST.2023.3316283
Malek Khammassi;Abla Kammoun;Mohamed-Slim Alouini
With the expanding demand for high data rates and extensive coverage, high throughput satellite (HTS) communication systems are emerging as a key technology for future communication generations. However, current frequency bands are increasingly congested. Until the maturity of communication systems to operate on higher bands, the solution is to exploit the already existing frequency bands more efficiently. In this context, precoding emerges as one of the prolific approaches to increasing spectral efficiency. This survey presents an overview and a classification of the recent precoding techniques for HTS communication systems from two main perspectives: 1) a problem formulation perspective and 2) a system design perspective. From a problem formulation point of view, precoding techniques are classified according to the precoding optimization problem, group, and level. From a system design standpoint, precoding is categorized based on the system architecture, the precoding implementation, and the type of the provided service. Further, practical system impairments are discussed, and robust precoding techniques are presented. Finally, future trends in precoding for satellites are addressed to spur further research.
Title: "Precoding for High-Throughput Satellite Communication Systems: A Survey" | Pages: 80-118
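As a concrete instance of what a precoder does, a zero-forcing (ZF) design (a standard textbook baseline, not a specific HTS scheme from the paper) can be sketched in pure Python; the channel rows are chosen orthogonal so the Gram matrix H H^T is diagonal and trivial to invert:

```python
# Zero-forcing precoding for a toy 3-user, 4-antenna downlink.
H = [
    [1.0, 0.0, 0.0, 0.0],   # user 1 channel row
    [0.0, 2.0, 0.0, 0.0],   # user 2 channel row
    [0.0, 0.0, 1.0, 1.0],   # user 3 channel row
]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# General ZF: W = H^T (H H^T)^-1. Here H H^T = diag(g) with g_k = ||h_k||^2,
# so the inverse is just element-wise division by the diagonal.
gram = matmul(H, transpose(H))
g = [gram[k][k] for k in range(len(H))]
W = [[H[k][n] / g[k] for k in range(len(H))] for n in range(len(H[0]))]

effective = matmul(H, W)    # ~identity: no inter-user interference
print(effective)
```

ZF inverts the multi-user channel so each user sees an interference-free effective link; the surveyed HTS designs refine this baseline under per-feed power constraints, imperfect CSI, and multi-beam architectures.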
The deployment of the fifth-generation (5G) wireless networks in Internet of Everything (IoE) applications and future networks (e.g., sixth-generation (6G) networks) has raised a number of operational challenges and limitations, for example in terms of security and privacy. Edge learning is an emerging approach to training models across distributed clients while ensuring data privacy. Such an approach, when integrated into future network infrastructures (e.g., 6G), can potentially solve challenging problems such as resource management and behavior prediction. However, edge learning (including distributed deep learning) is known to be susceptible to tampering and manipulation. This survey article provides a holistic review of the extant literature focusing on edge learning-related vulnerabilities and defenses for 6G-enabled Internet of Things (IoT) systems. Existing machine learning approaches for 6G–IoT security and machine learning-associated threats are broadly categorized based on learning modes, namely: centralized, federated, and distributed. Then, we provide an overview of enabling emerging technologies for 6G–IoT intelligence. We also provide a holistic survey of existing research on attacks against machine learning and classify threat models into eight categories, namely: backdoor attacks, adversarial examples, combined attacks, poisoning attacks, Sybil attacks, byzantine attacks, inference attacks, and dropping attacks. In addition, we provide a comprehensive and detailed taxonomy and a comparative summary of the state-of-the-art defense methods against edge learning-related vulnerabilities. Finally, as new attacks and defense technologies are realized, new research and future overall prospects for 6G-enabled IoT are discussed.
Title: "Edge Learning for 6G-Enabled Internet of Things: A Comprehensive Survey of Vulnerabilities, Datasets, and Defenses"
Authors: Mohamed Amine Ferrag; Othmane Friha; Burak Kantarci; Norbert Tihanyi; Lucas Cordeiro; Merouane Debbah; Djallel Hamouda; Muna Al-Hawawreh; Kim-Kwang Raymond Choo
DOI: 10.1109/COMST.2023.3317242 | Pub Date: 2023-09-19 | Pages: 2654-2713
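One representative defense family against poisoning attacks is robust aggregation; a coordinate-wise median sketch (a generic illustration, not a method singled out by the survey) shows why it blunts a single malicious client:

```python
# Coordinate-wise median aggregation: a classic robust alternative to the
# plain mean when some client updates may be poisoned.
def median_aggregate(updates):
    def median(xs):
        s = sorted(xs)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    dim = len(updates[0])
    return [median([u[i] for u in updates]) for i in range(dim)]

honest = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1]]
poisoned = honest + [[100.0, -100.0]]      # one malicious, large update

mean = [sum(u[i] for u in poisoned) / len(poisoned) for i in range(2)]
robust = median_aggregate(poisoned)
print("mean:  ", mean)     # dragged far off by the attacker
print("median:", robust)   # stays near the honest updates
```

The mean is arbitrarily skewed by one outlier, while the median stays within the spread of the honest updates; the survey's taxonomy covers this and many stronger defenses.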
Pub Date: 2023-09-15 | DOI: 10.1109/COMST.2023.3315746
Enrique Tomás Martínez Beltrán;Mario Quiles Pérez;Pedro Miguel Sánchez Sánchez;Sergio López Bernal;Gérôme Bovet;Manuel Gil Pérez;Gregorio Martínez Pérez;Alberto Huertas Celdrán
In recent years, Federated Learning (FL) has gained relevance in training collaborative models without sharing sensitive data. Since its birth, Centralized FL (CFL) has been the most common approach in the literature, where a central entity creates a global model. However, a centralized approach leads to increased latency due to bottlenecks, heightened vulnerability to system failures, and trustworthiness concerns affecting the entity responsible for the global model creation. Decentralized Federated Learning (DFL) emerged to address these concerns by promoting decentralized model aggregation and minimizing reliance on centralized architectures. However, despite the work done in DFL, the literature has not (i) studied the main aspects differentiating DFL and CFL; (ii) analyzed DFL frameworks to create and evaluate new solutions; and (iii) reviewed application scenarios using DFL. Thus, this article identifies and analyzes the main fundamentals of DFL in terms of federation architectures, topologies, communication mechanisms, security approaches, and key performance indicators. Additionally, the paper at hand explores existing mechanisms to optimize critical DFL fundamentals. Then, the most relevant features of the current DFL frameworks are reviewed and compared. After that, it analyzes the most used DFL application scenarios, identifying solutions based on the fundamentals and frameworks previously defined. Finally, the evolution of existing DFL solutions is studied to provide a list of trends, lessons learned, and open challenges.
Finally, the evolution of existing DFL solutions is studied to provide a list of trends, lessons learned, and open challenges.
Title: "Decentralized Federated Learning: Fundamentals, State of the Art, Frameworks, Trends, and Challenges" | Pages: 2983-3013
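The decentralized aggregation that distinguishes DFL from CFL can be illustrated with a toy gossip-averaging round over a small topology (entirely illustrative; real DFL systems layer training, communication schedules, and security on top of this):

```python
# Gossip averaging: each node averages its model with its neighbors',
# with no central server involved.
def gossip_round(models, neighbors):
    return [
        [
            sum(models[j][k] for j in [i] + neighbors[i]) / (1 + len(neighbors[i]))
            for k in range(len(models[i]))
        ]
        for i in range(len(models))
    ]

# Three nodes, each with a 1-parameter "model"; with 3 nodes the ring
# topology is fully connected, so mixing is immediate.
models = [[0.0], [3.0], [6.0]]
ring = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(5):
    models = gossip_round(models, ring)
print(models)   # every node converges to the global average, 3.0
```

Sparser topologies mix more slowly, which is why federation architecture and topology are treated as first-class DFL fundamentals in the article.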
Pub Date: 2023-09-14 | DOI: 10.1109/COMST.2023.3315374
Yiping Zuo;Jiajia Guo;Ning Gao;Yongxu Zhu;Shi Jin;Xiao Li
The research on the sixth-generation (6G) wireless communications for the development of future mobile communication networks has been officially launched around the world. 6G networks face multifarious challenges, such as resource-constrained mobile devices, difficult wireless resource management, high complexity of heterogeneous network architectures, explosive computing and storage requirements, and privacy and security threats. To address these challenges, deploying blockchain and artificial intelligence (AI) in 6G networks may realize new breakthroughs in advancing network performance in terms of security, privacy, efficiency, cost, and more. In this paper, we provide a detailed survey of existing works on the application of blockchain and AI to 6G wireless communications. More specifically, we start with a brief overview of blockchain and AI. Then, we mainly review the recent advances in the fusion of blockchain and AI, and highlight the inevitable trend of deploying both blockchain and AI in wireless communications. Furthermore, we extensively explore integrating blockchain and AI for wireless communication systems, involving secure services and Internet of Things (IoT) smart applications. Particularly, some of the most talked-about key services based on blockchain and AI are introduced, such as spectrum management, computation allocation, content caching, and security and privacy. Moreover, we also focus on some important IoT smart applications supported by blockchain and AI, covering smart healthcare, smart transportation, smart grid, and unmanned aerial vehicles (UAVs). Additionally, we thoroughly discuss operating frequencies, visions, and requirements from the 6G perspective. We also analyze the open issues and research challenges for the joint deployment of blockchain and AI in 6G wireless communications. Lastly, building on a large body of existing work, this paper aims to provide a comprehensive survey of blockchain and AI in 6G networks.
We hope this survey can shed new light on the research of this newly emerging area and serve as a roadmap for future studies.
Title: "A Survey of Blockchain and Artificial Intelligence for 6G Wireless Communications" | Pages: 2494-2528
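The tamper evidence that blockchain can bring to, for example, logged model updates or spectrum transactions can be illustrated with a toy hash-chained ledger (a generic sketch, not a protocol from the survey):

```python
import hashlib
import json

# Each block stores its payload, the previous block's hash, and its own
# hash over (prev, payload); changing any past payload breaks the chain.
def make_block(prev_hash, payload):
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return {"prev": prev_hash, "payload": payload,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    for i, block in enumerate(chain):
        body = json.dumps({"prev": block["prev"], "payload": block["payload"]},
                          sort_keys=True)
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", {"round": 1, "update": [0.1, 0.2]})]
chain.append(make_block(chain[-1]["hash"], {"round": 2, "update": [0.3, 0.1]}))
print(verify(chain))                 # intact chain verifies
chain[0]["payload"]["round"] = 99    # tamper with history
print(verify(chain))                 # verification now fails
```

Production blockchains add consensus, signatures, and incentives on top of this hash chaining; the survey reviews how those pieces combine with AI services in 6G.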
Pub Date: 2023-09-11 | DOI: 10.1109/COMST.2023.3312221
Harrison Kurunathan;Hailong Huang;Kai Li;Wei Ni;Ekram Hossain
Over the past decade, Unmanned Aerial Vehicles (UAVs) have provided pervasive, efficient, and cost-effective solutions for data collection and communications. Their excellent mobility, flexibility, and fast deployment enable UAVs to be extensively utilized in agriculture, medicine, rescue missions, smart cities, and intelligent transportation systems. Machine learning (ML) has increasingly demonstrated its capability to improve the automation and operation precision of UAVs and many UAV-assisted applications, such as communications, sensing, and data collection. The ongoing amalgamation of UAV and ML techniques is creating a significant synergy and empowering UAVs with unprecedented intelligence and autonomy. This survey aims to provide a timely and comprehensive overview of ML techniques used in UAV operations and communications and to identify potential growth areas and research gaps. We emphasize the four key components of UAV operations and communications to which ML can significantly contribute, namely, perception and feature extraction, feature interpretation and regeneration, trajectory and mission planning, and aerodynamic control and operation. We classify the latest popular ML tools based on their applications to the four components and conduct gap analyses. This survey also takes a step forward by pointing out significant challenges in the upcoming realm of ML-aided automated UAV operations and communications. It is revealed that different ML techniques dominate the applications to the four key modules of UAV operations and communications. While there is an increasing trend of cross-module designs, little effort has been devoted to an end-to-end ML framework, from perception and feature extraction to aerodynamic control and operation. 
It is also unveiled that the reliability and trust of ML in UAV operations and applications require significant attention before full automation of UAVs and potential cooperation between UAVs and humans come to fruition.
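Of the four components above, trajectory and mission planning is the easiest to make concrete. A toy baseline, not from the survey itself, is greedy nearest-neighbor waypoint ordering, the kind of classical heuristic that learned planners are typically benchmarked against:

```python
import math


def nearest_neighbor_route(start, waypoints):
    """Order waypoints greedily by proximity to the current position --
    a classical UAV mission-planning baseline, used here for illustration."""
    route, remaining, pos = [start], list(waypoints), start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        remaining.remove(nxt)
        route.append(nxt)
        pos = nxt
    return route


def route_length(route):
    """Total Euclidean length flown along the route."""
    return sum(math.dist(a, b) for a, b in zip(route, route[1:]))


route = nearest_neighbor_route((0, 0), [(5, 5), (1, 0), (2, 2)])
print(route)  # [(0, 0), (1, 0), (2, 2), (5, 5)]
print(round(route_length(route), 2))
```

An ML-based planner would replace the `min(...)` selection rule with a learned policy while keeping the same route/length interface for evaluation.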
{"title":"Machine Learning-Aided Operations and Communications of Unmanned Aerial Vehicles: A Contemporary Survey","authors":"Harrison Kurunathan;Hailong Huang;Kai Li;Wei Ni;Ekram Hossain","doi":"10.1109/COMST.2023.3312221","DOIUrl":"10.1109/COMST.2023.3312221","url":null,"abstract":"Over the past decade, Unmanned Aerial Vehicles (UAVs) have provided pervasive, efficient, and cost-effective solutions for data collection and communications. Their excellent mobility, flexibility, and fast deployment enable UAVs to be extensively utilized in agriculture, medical, rescue missions, smart cities, and intelligent transportation systems. Machine learning (ML) has been increasingly demonstrating its capability of improving the automation and operation precision of UAVs and many UAV-assisted applications, such as communications, sensing, and data collection. The ongoing amalgamation of UAV and ML techniques is creating a significant synergy and empowering UAVs with unprecedented intelligence and autonomy. This survey aims to provide a timely and comprehensive overview of ML techniques used in UAV operations and communications and identify the potential growth areas and research gaps. We emphasize the four key components of UAV operations and communications to which ML can significantly contribute, namely, perception and feature extraction, feature interpretation and regeneration, trajectory and mission planning, and aerodynamic control and operation. We classify the latest popular ML tools based on their applications to the four components and conduct gap analyses. This survey also takes a step forward by pointing out significant challenges in the upcoming realm of ML-aided automated UAV operations and communications. It is revealed that different ML techniques dominate the applications to the four key modules of UAV operations and communications. 
While there is an increasing trend of cross-module designs, little effort has been devoted to an end-to-end ML framework, from perception and feature extraction to aerodynamic control and operation. It is also unveiled that the reliability and trust of ML in UAV operations and applications require significant attention before full automation of UAVs and potential cooperation between UAVs and humans come to fruition.","PeriodicalId":34,"journal":{"name":"IEEE Communications Surveys & Tutorials","volume":"26 1","pages":"496-533"},"PeriodicalIF":35.6,"publicationDate":"2023-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135784733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-06 | DOI: 10.1109/COMST.2023.3312349
Chamitha De Alwis;Pawani Porambage;Kapal Dev;Thippa Reddy Gadekallu;Madhusanka Liyanage
The dawn of softwarized networks enables Network Slicing (NS) as an important technology for allocating end-to-end logical networks that meet the diverse requirements of emerging applications in fifth-generation (5G) mobile networks. However, the emergence of NS also exposes novel security and privacy challenges, primarily related to aspects such as NS life-cycle security, inter-slice security, intra-slice security, slice broker security, zero-touch network and management security, and blockchain security. Hence, enhancing NS security, privacy, and trust has become a key research area toward realizing the true capabilities of 5G. This paper presents a comprehensive and up-to-date survey on NS security. The paper articulates a taxonomy for NS security and privacy, laying the structure for the survey. Accordingly, the paper presents key attack scenarios specific to NS-enabled networks. Furthermore, the paper explores NS security threats, challenges, and issues while elaborating on NS security solutions available in the literature. In addition, NS trust and privacy aspects, along with possible solutions, are explained. The paper also highlights future research directions in NS security and privacy. It is envisaged that this survey will consolidate existing research work, highlight research gaps, and shed light on future research, development, and standardization work to realize secure NS in 5G and beyond mobile communication networks.
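Two of the concerns listed above, slice broker security and inter-slice isolation, can be illustrated with a deliberately simplified admission-control sketch. The class and method names below are invented for this example (real brokers operate on 3GPP-defined slice templates), but the two enforced invariants are the ones at stake:

```python
from dataclasses import dataclass, field


@dataclass
class NetworkSlice:
    name: str
    capacity_mbps: float          # end-to-end bandwidth reserved for the slice
    allocated_mbps: float = 0.0
    tenants: set = field(default_factory=set)


class SliceBroker:
    """Toy broker enforcing two invariants: only registered tenants may use
    a slice (inter-slice isolation), and allocations never exceed the
    slice's reserved capacity (admission control)."""

    def __init__(self):
        self.slices = {}

    def create_slice(self, name, capacity_mbps, tenant):
        s = NetworkSlice(name, capacity_mbps)
        s.tenants.add(tenant)
        self.slices[name] = s

    def request_bandwidth(self, name, tenant, mbps):
        s = self.slices[name]
        if tenant not in s.tenants:
            return False  # isolation: reject access by an unregistered tenant
        if s.allocated_mbps + mbps > s.capacity_mbps:
            return False  # admission control: never over-allocate the slice
        s.allocated_mbps += mbps
        return True


broker = SliceBroker()
broker.create_slice("urllc", capacity_mbps=100.0, tenant="factory-A")
print(broker.request_bandwidth("urllc", "factory-A", 60.0))  # True
print(broker.request_bandwidth("urllc", "mallory", 10.0))    # False (isolation)
print(broker.request_bandwidth("urllc", "factory-A", 50.0))  # False (60+50 > 100)
```

The attack scenarios surveyed in the paper are, in essence, ways these two checks can be bypassed or subverted in a real multi-tenant broker.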
{"title":"A Survey on Network Slicing Security: Attacks, Challenges, Solutions and Research Directions","authors":"Chamitha De Alwis;Pawani Porambage;Kapal Dev;Thippa Reddy Gadekallu;Madhusanka Liyanage","doi":"10.1109/COMST.2023.3312349","DOIUrl":"10.1109/COMST.2023.3312349","url":null,"abstract":"The dawn of softwarized networks enables Network Slicing (NS) as an important technology towards allocating end-to-end logical networks to facilitate diverse requirements of emerging applications in fifth-generation (5G) mobile networks. However, the emergence of NS also exposes novel security and privacy challenges, primarily related to aspects such as NS life-cycle security, inter-slice security, intra-slice security, slice broker security, zero-touch network and management security, and blockchain security. Hence, enhancing NS security, privacy, and trust has become a key research area toward realizing the true capabilities of 5G. This paper presents a comprehensive and up-to-date survey on NS security. The paper articulates a taxonomy for NS security and privacy, laying the structure for the survey. Accordingly, the paper presents key attack scenarios specific to NS-enabled networks. Furthermore, the paper explores NS security threats, challenges, and issues while elaborating on NS security solutions available in the literature. In addition, NS trust and privacy aspects, along with possible solutions, are explained. The paper also highlights future research directions in NS security and privacy. 
It is envisaged that this survey will consolidate existing research work, highlight research gaps, and shed light on future research, development, and standardization work to realize secure NS in 5G and beyond mobile communication networks.","PeriodicalId":34,"journal":{"name":"IEEE Communications Surveys & Tutorials","volume":"26 1","pages":"534-570"},"PeriodicalIF":35.6,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10242032","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134157308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-04 | DOI: 10.1109/COMST.2023.3308717
Jiayuan Chen;Changyan Yi;Samuel D. Okegbile;Jun Cai;Xuemin Shen
Digital twin (DT), a promising technique for digitally and accurately representing actual physical entities, has attracted explosive interest from both academia and industry. One typical advantage of DT is that it can be used not only to virtually replicate a system’s detailed operations but also to analyze the current condition, predict future behavior, and refine control optimization. Although DT has been widely implemented in various fields, such as smart manufacturing and transportation, its conventional paradigm is limited to embodying non-living entities, e.g., robots and vehicles. When adopted in human-centric systems, a novel concept, called the human digital twin (HDT), has thus been proposed. In particular, HDT allows in silico representation of an individual human body with the ability to dynamically reflect molecular, physiological, emotional, and psychological status, as well as lifestyle evolution. These capabilities prompt the expected application of HDT in personalized healthcare (PH), which can facilitate remote monitoring, diagnosis, prescription, surgery, and rehabilitation, and hence significantly alleviate the heavy burden on the traditional healthcare system. However, despite its large potential, HDT faces substantial research challenges in different aspects and has recently become an increasingly popular research topic. In this survey, with a specific focus on the networking architecture and key technologies for HDT in PH applications, we first discuss the differences between HDT and conventional DTs, followed by the universal framework and essential functions of HDT. We then analyze its design requirements and challenges in PH applications. After that, we provide an overview of the networking architecture of HDT, including the data acquisition, data communication, computation, data management, and data analysis and decision-making layers. 
Besides reviewing in detail the key technologies for implementing such a networking architecture, we conclude this survey by presenting future research directions for HDT.
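The five-layer HDT architecture enumerated above can be made concrete as a pass-through pipeline in which each function stands in for one layer. The heart-rate sample and the 100 bpm threshold below are invented for illustration only:

```python
def acquire(heart_rate_bpm):
    """Data acquisition layer: wrap a raw wearable-sensor reading."""
    return {"heart_rate_bpm": heart_rate_bpm}


def communicate(sample):
    """Data communication layer: stand-in for serializing and
    transporting the sample from the body to the twin."""
    return dict(sample)


def compute(sample):
    """Computation layer: derive a physiological indicator."""
    sample["tachycardia"] = sample["heart_rate_bpm"] > 100
    return sample


def manage(sample, store):
    """Data management layer: persist the processed sample."""
    store.append(sample)
    return sample


def decide(sample):
    """Data analysis and decision-making layer: act on the indicator."""
    return "alert clinician" if sample["tachycardia"] else "no action"


def hdt_pipeline(reading, store):
    """Thread one reading through all five layers, returning the decision."""
    return decide(manage(compute(communicate(acquire(reading))), store))


history = []
print(hdt_pipeline(128, history))  # alert clinician
print(hdt_pipeline(72, history))   # no action
print(len(history))                # 2
```

In a real PH deployment each function would be a networked subsystem; the sketch only shows how a sample and its derived decision flow across the layer boundaries.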
{"title":"Networking Architecture and Key Supporting Technologies for Human Digital Twin in Personalized Healthcare: A Comprehensive Survey","authors":"Jiayuan Chen;Changyan Yi;Samuel D. Okegbile;Jun Cai;Xuemin Shen","doi":"10.1109/COMST.2023.3308717","DOIUrl":"10.1109/COMST.2023.3308717","url":null,"abstract":"Digital twin (DT), referring to a promising technique to digitally and accurately represent actual physical entities, has attracted explosive interests from both academia and industry. One typical advantage of DT is that it can be used to not only virtually replicate a system’s detailed operations but also analyze the current condition, predict the future behavior, and refine the control optimization. Although DT has been widely implemented in various fields, such as smart manufacturing and transportation, its conventional paradigm is limited to embody non-living entities, e.g., robots and vehicles. When adopted in human-centric systems, a novel concept, called human digital twin (HDT) has thus been proposed. Particularly, HDT allows in silico representation of individual human body with the ability to dynamically reflect molecular status, physiological status, emotional and psychological status, as well as lifestyle evolutions. These prompt the expected application of HDT in personalized healthcare (PH), which can facilitate the remote monitoring, diagnosis, prescription, surgery and rehabilitation, and hence significantly alleviate the heavy burden on the traditional healthcare system. However, despite the large potential, HDT faces substantial research challenges in different aspects, and becomes an increasingly popular topic recently. In this survey, with a specific focus on the networking architecture and key technologies for HDT in PH applications, we first discuss the differences between HDT and the conventional DTs, followed by the universal framework and essential functions of HDT. 
We then analyze its design requirements and challenges in PH applications. After that, we provide an overview of the networking architecture of HDT, including data acquisition layer, data communication layer, computation layer, data management layer and data analysis and decision making layer. Besides reviewing the key technologies for implementing such networking architecture in detail, we conclude this survey by presenting future research directions of HDT.","PeriodicalId":34,"journal":{"name":"IEEE Communications Surveys & Tutorials","volume":"26 1","pages":"706-746"},"PeriodicalIF":35.6,"publicationDate":"2023-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131573664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}