The Metaverse is an evolving orchestrator of the next-generation Internet architecture that produces an immersive and self-adapting virtual world in which humans perform activities similar to those in the real world, such as playing sports, working, and socializing. It is becoming a reality, driven by ever-evolving advanced technologies such as extended reality, artificial intelligence, and blockchain. In this context, the Metaverse will play an essential role in developing smart cities, which has become more evident in the post-COVID-19-pandemic metropolitan setting. However, the new paradigm imposes new challenges, such as the novel privacy and security threats that can emerge in the digital Metaverse ecosystem. Moreover, it requires the convergence of several media types with the capability to quickly process massive amounts of data to keep residents safe and well-informed, which can raise issues related to scalability and interoperability. In light of this, this study reviews the literature on the state of the art of integrating Metaverse architecture concepts into smart cities. First, the paper presents the theoretical architecture of the Metaverse and discusses international companies’ interest in this emerging technology. It also examines the notion of the Metaverse in relation to virtual reality, identifies the prevalent threats, and establishes the importance of communication infrastructure in information gathering for efficient Metaverse operation. Next, blockchain technologies are discussed with regard to privacy preservation and how they can provide tamper-proof content sharing among Metaverse users. Finally, the application of the distributed Metaverse for social good is highlighted. Most importantly, the paper explores the reflections of this cutting-edge technology on the smart city, discusses the role and impact of the Metaverse in the production of urban policies, and identifies the research gaps and future research directions in this domain.
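To make the tamper-proof content-sharing idea mentioned above concrete, here is a minimal hash-chain sketch in Python. It illustrates only the generic blockchain principle (each record commits to the hash of the previous one), not any specific design from the surveyed literature; all record fields and names are hypothetical.

```python
import hashlib
import json

def block_hash(body: dict) -> str:
    """Hash the block body's canonical JSON serialization."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, content: str, author: str) -> None:
    """Append a content-sharing record linked to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"index": len(chain), "author": author, "content": content, "prev_hash": prev}
    chain.append({**body, "hash": block_hash(body)})

def verify_chain(chain: list) -> bool:
    """Recompute every hash and link; any edited block breaks verification."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain: list = []
append_block(chain, "3D asset: plaza_model_v1", author="user_a")
append_block(chain, "3D asset: avatar_skin_v2", author="user_b")
print(verify_chain(chain))            # True
chain[0]["content"] = "tampered"      # simulate tampering with shared content
print(verify_chain(chain))            # False
```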
{"title":"Metaverse Communications, Networking, Security, and Applications: Research Issues, State-of-the-Art, and Future Directions","authors":"Mansoor Ali;Faisal Naeem;Georges Kaddoum;Ekram Hossain","doi":"10.1109/COMST.2023.3347172","DOIUrl":"https://doi.org/10.1109/COMST.2023.3347172","url":null,"abstract":"Metaverse is an evolving orchestrator of the next-generation Internet architecture that produces an immersive and self-adapting virtual world in which humans perform activities similar to those in the real world, such as playing sports, doing work, and socializing. It is becoming a reality and is driven by ever-evolving advanced technologies such as extended reality, artificial intelligence, and blockchain. In this context, Metaverse will play an essential role in developing smart cities, which becomes more evident in the post-COVID-19-pandemic metropolitan setting. However, the new paradigm imposes new challenges, such as developing novel privacy and security threats that can emerge in the digital Metaverse ecosystem. Moreover, it requires the convergence of several media types with the capability to quickly process massive amounts of data to keep the residents safe and well-informed, which can raise issues related to scalability and interoperability. In light of this, this research study aims to review the literature on the state of the art of integrating the Metaverse architecture concepts in smart cities. First, this paper presents the theoretical architecture of Metaverse and discusses international companies’ interest in this emerging technology. It also examines the notion of Metaverse relevant to virtual reality, identifies the prevalent threats, and determines the importance of communication infrastructure in information gathering for efficient Metaverse operation. Next, the notion of blockchain technologies is discussed regarding privacy preservation and how it can provide tamper-proof content sharing among Metaverse users. Finally, the application of distributed Metaverse for social good is highlighted. Most importantly, the paper explores the reflections of this cutting-edge technology on the smart city, talks about the role and impact of the Metaverse in the production of urban policies, and eventually identifies the research gaps and the future research directions in this domain.","PeriodicalId":34,"journal":{"name":"Crystal Growth & Design","volume":"26 2","pages":"1238-1278"},"PeriodicalIF":35.6,"publicationDate":"2023-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141084816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-21 | DOI: 10.1109/COMST.2023.3344671
Hitesh Poddar;Shihao Ju;Dipankar Shakya;Theodore S. Rappaport
With the advancement of wireless communication to sub-terahertz (THz) and millimeter-wave (mmWave) bands, accurate channel models and simulation tools are becoming increasingly important for modeling a wide range of frequencies and scenarios. This paper provides a comprehensive tutorial on generating drop-based and spatial consistency-based channels using the open-source MATLAB-based NYU Channel Model Simulator (NYUSIM). NYUSIM is built on extensive real-world radio propagation measurements for the frequency range of 0.5–150 GHz, covering a variety of scenarios such as Urban Microcell (UMi), Urban Macrocell (UMa), Rural Macrocell (RMa), Indoor Hotspot (InH), and Indoor Factory (InF). Additionally, an overview of the evolution of simulators used to design and analyze wireless systems since the early days of cellular communication is provided. We introduce the most popular types of simulators used in academia and industry, such as Channel Simulators (CSs), Link Level Simulators (LLSs), System Level Simulators (SLSs), and Network Simulators (NSs), to study wireless communication systems for 5G and beyond. Owing to the widespread adoption of the 3rd Generation Partnership Project (3GPP) Stochastic Channel Model (SCM) for channel generation in various simulators, we conduct a comparative analysis between the 3GPP SCM and NYUSIM channel model to highlight their differences. Moreover, NYUSIM’s versatility extends beyond its MATLAB implementation, as it can be implemented in various LLSs, SLSs, and NSs, enabling researchers to incorporate real-world measurement-based channels into their simulations. To illustrate this capability, we showcase NYUSIM’s implementation in ns-3, a widely used open-source discrete event network simulator. Finally, we present several applications of NYUSIM to highlight its potential uses.
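As a point of reference for what "drop-based" channel generation means, below is a minimal Python sketch of one drop: an independent realization of close-in (CI) free-space-reference path loss plus clustered multipath. This is not NYUSIM code and does not use NYUSIM's measured parameters; the path-loss exponent, shadowing standard deviation, and cluster statistics are placeholder values chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def ci_path_loss_db(f_ghz: float, d_m: float, ple: float, sigma_sf_db: float) -> float:
    """Close-in (CI) free-space-reference path-loss model with lognormal shadowing.
    The free-space path loss at the 1 m reference distance is 32.4 + 20*log10(f_GHz) dB."""
    fspl_1m = 32.4 + 20.0 * np.log10(f_ghz)
    shadowing = rng.normal(0.0, sigma_sf_db)
    return fspl_1m + 10.0 * ple * np.log10(d_m) + shadowing

def drop(f_ghz: float, d_m: float, n_clusters: int = 4, rays_per_cluster: int = 5):
    """One 'drop': an independent realization of path loss and clustered multipath.
    Cluster/ray statistics here are placeholders, not NYUSIM's measured values."""
    pl_db = ci_path_loss_db(f_ghz, d_m, ple=2.0, sigma_sf_db=4.0)
    delays_ns, powers_lin = [], []
    for _ in range(n_clusters):
        cluster_delay = rng.exponential(scale=50.0)            # cluster excess delay (ns)
        for _ in range(rays_per_cluster):
            delays_ns.append(cluster_delay + rng.exponential(scale=5.0))
            powers_lin.append(rng.exponential(scale=1.0))
    powers_lin = np.array(powers_lin) / np.sum(powers_lin)     # normalize PDP to unit power
    return pl_db, np.array(delays_ns), powers_lin

pl, delays, powers = drop(f_ghz=28.0, d_m=100.0)
print(f"omnidirectional path loss: {pl:.1f} dB, {len(delays)} multipath components")
```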
{"title":"A Tutorial on NYUSIM: Sub-Terahertz and Millimeter-Wave Channel Simulator for 5G, 6G, and Beyond","authors":"Hitesh Poddar;Shihao Ju;Dipankar Shakya;Theodore S. Rappaport","doi":"10.1109/COMST.2023.3344671","DOIUrl":"https://doi.org/10.1109/COMST.2023.3344671","url":null,"abstract":"With the advancement of wireless communication to sub-terahertz (THz) and millimeter-wave (mmWave) bands, accurate channel models and simulation tools are becoming increasingly important for modeling a wide range of frequencies and scenarios. This paper provides a comprehensive tutorial on generating drop-based and spatial consistency-based channels using the open-source MATLAB-based NYU Channel Model Simulator (NYUSIM). NYUSIM is built on extensive real-world radio propagation measurements for the frequency range of 0.5–150 GHz, covering a variety of scenarios such as Urban Microcell (UMi), Urban Macrocell (UMa), Rural Macrocell (RMa), Indoor Hotspot (InH), and Indoor Factory (InF). Additionally, an overview of the evolution of simulators used to design and analyze wireless systems since the early days of cellular communication is also provided. We introduce the most popular types of simulators used in academia and industry, such as Channel Simulators (CSs), Link Level Simulators (LLSs), System Level Simulators (SLSs), and Network Simulators (NSs), to study wireless communication systems for 5G and beyond. Owing to the widespread adoption of the 3rd Generation Partnership Project (3GPP) Stochastic Channel Model (SCM) for channel generation in various simulators, we conduct a comparative analysis between the 3GPP SCM and NYUSIM channel model to highlight their differences. Moreover, NYUSIM’s versatility extends beyond its MATLAB implementation, as it can be implemented in various LLSs, SLSs, and NSs, enabling researchers to incorporate real-world measurement-based channels into their simulations. To illustrate this capability, we showcase NYUSIM’s implementation in ns-3, a widely used open-source discrete event network simulator. Additionally, we provide several applications of NYUSIM to highlight its potential uses.","PeriodicalId":34,"journal":{"name":"Crystal Growth & Design","volume":"26 2","pages":"824-857"},"PeriodicalIF":35.6,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10367974","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141084809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-21 | DOI: 10.1109/COMST.2023.3345796
Nada Abdel Khalek;Deemah H. Tashman;Walaa Hamouda
The next frontier in wireless connectivity lies at the intersection of cognitive radio (CR) technology and machine learning (ML), where intelligent networks can provide pervasive connectivity for an ever-expanding range of applications. In this regard, this survey provides an in-depth examination of the integration of ML-based CR in a wide range of emerging wireless networks, including the Internet of Things (IoT), mobile communications (vehicular and railway), and unmanned aerial vehicle (UAV) communications. By combining ML-based CR and emerging wireless networks, we can create intelligent, efficient, and ubiquitous wireless communication systems that satisfy spectrum-hungry applications and services of next-generation networks. For each type of wireless network, we highlight the key motivation for using intelligent CR and present a full review of the existing state-of-the-art ML approaches that address pressing challenges, including energy efficiency, interference, throughput, latency, and security. Our goal is to provide researchers and newcomers with a clear understanding of the motivation and methodology behind applying intelligent CR to emerging wireless networks. Moreover, problems and prospective research avenues are outlined, and a future roadmap is offered that explores possibilities for overcoming challenges through trending concepts.
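As an illustration of the kind of ML-based CR technique the survey covers, the following toy Python sketch uses stateless Q-learning (effectively a multi-armed bandit) to let a secondary user learn the least-occupied channel. The channel occupancy probabilities, reward definition, and hyperparameters are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels = 4
busy_prob = np.array([0.9, 0.6, 0.3, 0.8])   # hypothetical primary-user occupancy per channel
q = np.zeros(n_channels)                      # value of "transmit on channel k" (stateless Q-learning)
epsilon, alpha = 0.1, 0.05                    # exploration rate and learning rate

for t in range(5000):
    # epsilon-greedy channel selection
    if rng.random() < epsilon:
        k = rng.integers(n_channels)
    else:
        k = int(np.argmax(q))
    # reward: 1 for a successful (idle-channel) transmission, 0 on collision with the primary user
    reward = 0.0 if rng.random() < busy_prob[k] else 1.0
    q[k] += alpha * (reward - q[k])

print("learned Q-values:", np.round(q, 2))
print("preferred channel:", int(np.argmax(q)))   # should converge to channel 2, the least busy
```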
{"title":"Advances in Machine Learning-Driven Cognitive Radio for Wireless Networks: A Survey","authors":"Nada Abdel Khalek;Deemah H. Tashman;Walaa Hamouda","doi":"10.1109/COMST.2023.3345796","DOIUrl":"https://doi.org/10.1109/COMST.2023.3345796","url":null,"abstract":"The next frontier in wireless connectivity lies at the intersection of cognitive radio (CR) technology and machine learning (ML), where intelligent networks can provide pervasive connectivity for an ever-expanding range of applications. In this regard, this survey provides an in-depth examination of the integration of ML-based CR in a wide range of emerging wireless networks, including the Internet of Things (IoT), mobile communications (vehicular and railway), and unmanned aerial vehicle (UAV) communications. By combining ML-based CR and emerging wireless networks, we can create intelligent, efficient, and ubiquitous wireless communication systems that satisfy spectrum-hungry applications and services of next-generation networks. For each type of wireless network, we highlight the key motivation for using intelligent CR and present a full review of the existing state-of-the-art ML approaches that address pressing challenges, including energy efficiency, interference, throughput, latency, and security. Our goal is to provide researchers and newcomers with a clear understanding of the motivation and methodology behind applying intelligent CR to emerging wireless networks. Moreover, problems and prospective research avenues are outlined, and a future roadmap is offered that explores possibilities for overcoming challenges through trending concepts.","PeriodicalId":34,"journal":{"name":"Crystal Growth & Design","volume":"26 2","pages":"1201-1237"},"PeriodicalIF":35.6,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141084854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-21 | DOI: 10.1109/COMST.2023.3345746
Domenico Ficara;Rosario G. Garroppo;Jerome Henry
The proliferation of Wi-Fi devices has led to the rise of privacy concerns related to MAC Address-based systems used for people tracking and localization across various applications, such as smart cities, intelligent transportation systems, and marketing. These systems have highlighted the necessity for mobile device manufacturers to implement Randomized And Changing MAC address (RCM) techniques as a countermeasure against device identification. In response to the challenges posed by diverse RCM implementations, the IEEE has taken steps to standardize RCM operations through the 802.11aq Task Group (TG). However, while RCM implementation addresses some concerns, it can disrupt services that span both Layer 2 and upper layers, which were originally designed assuming static MAC addresses. To address these challenges, the IEEE has established the 802.11bh TG, focusing on defining new device identification methods, particularly for Layer 2 services that require pre-association identification. Simultaneously, the IETF launched the MAC Address Device Identification for Network and Application Services (MADINAS) Working Group to investigate the repercussions of RCM on upper-layer services, including the Dynamic Host Configuration Protocol (DHCP). Concurrently, derandomization techniques have emerged to counteract RCM defense mechanisms. The exploration of these techniques has suggested the need for a broader privacy enhancement framework for WLANs that goes beyond simple MAC address randomization. These findings have prompted the inception of the 802.11bi TG, which aims to compile an exhaustive list of potential privacy vulnerabilities and prerequisites for a more private IEEE 802.11 standard. In this context, this tutorial aims to provide insights into the motivations behind RCM, its implementation, and its evolution over the years. It elucidates the influence of RCM on network processes and services. Furthermore, the tutorial delves into the recent progress made within the domains of 802.11bh, 802.11bi, and MADINAS. It offers a thorough analysis of the initial work undertaken by these groups, along with an overview of the relevant research challenges. The tutorial’s objective is to inspire the research community to explore innovative approaches and solutions that contribute to the ongoing efforts to enhance WLAN privacy through standardization initiatives.
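For concreteness, the snippet below shows what an RCM implementation does at the address level: it draws a random MAC, sets the locally-administered bit, and clears the multicast bit of the first octet, following the general IEEE MAC addressing convention. The rotation policy hinted at in the comment (per SSID or per association) is only an example, not a statement about any particular vendor's scheme.

```python
import secrets

def random_private_mac() -> str:
    """Generate a randomized MAC address of the kind used by RCM:
    locally administered (bit 1 of the first octet set) and unicast
    (bit 0 of the first octet cleared)."""
    octets = bytearray(secrets.token_bytes(6))
    octets[0] = (octets[0] | 0x02) & 0xFE   # set locally-administered bit, clear multicast bit
    return ":".join(f"{b:02x}" for b in octets)

# A device applying RCM might, for example, draw a fresh address per SSID or per association.
for _ in range(3):
    print(random_private_mac())
```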
{"title":"A Tutorial on Privacy, RCM and Its Implications in WLAN","authors":"Domenico Ficara;Rosario G. Garroppo;Jerome Henry","doi":"10.1109/COMST.2023.3345746","DOIUrl":"https://doi.org/10.1109/COMST.2023.3345746","url":null,"abstract":"The proliferation of Wi-Fi devices has led to the rise of privacy concerns related to MAC Address-based systems used for people tracking and localization across various applications, such as smart cities, intelligent transportation systems, and marketing. These systems have highlighted the necessity for mobile device manufacturers to implement Randomized And Changing MAC address (RCM) techniques as a countermeasure for device identification. In response to the challenges posed by diverse RCM implementations, the IEEE has taken steps to standardize RCM operations through the 802.11aq Task Group (TG). However, while RCM implementation addresses some concerns, it can disrupt services that span both Layer 2 and upper-layers, which were originally designed assuming static MAC addresses. To address these challenges, the IEEE has established the 802.11bh TG, focusing on defining new device identification methods, particularly for Layer 2 services that require pre-association identification. Simultaneously, the IETF launched the MAC Address Device Identification for Network and Application Services (MADINAS) Working Group to investigate the repercussions of RCM on upper-layer services, including the Dynamic Host Configuration Protocol (DHCP). Concurrently, derandomization techniques have emerged to counteract RCM defense mechanisms. The exploration of these techniques has suggested the need for a broader privacy enhancement framework for WLANs that goes beyond simple MAC address randomization. These findings have prompted the inception of the 802.11bi TG, which aims to compile an exhaustive list of potential privacy vulnerabilities and prerequisites for a more private IEEE 802.11 standard. In this context, this tutorial aims to provide insights into the motivations behind RCM, its implementation, and its evolution over the years. It elucidates the influence of RCM on network processes and services. Furthermore, the tutorial delves into the recent progress made within the domains of 802.11bh, 802.11bi, and MADINAS. It offers a thorough analysis of the initial work undertaken by these groups, along with an overview of the relevant research challenges. The tutorial objective is to inspire the research community to explore innovative approaches and solutions that contribute to the ongoing efforts to enhance WLAN privacy through standardization initiatives.","PeriodicalId":34,"journal":{"name":"Crystal Growth & Design","volume":"26 2","pages":"1003-1040"},"PeriodicalIF":35.6,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10368019","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141084808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-20 | DOI: 10.1109/COMST.2023.3344808
Shunyao Wang;Ryan K. L. Ko;Guangdong Bai;Naipeng Dong;Taejun Choi;Yanjun Zhang
Cyber-physical systems (CPS) are increasingly relying on machine learning (ML) techniques to reduce labor costs and improve efficiency. However, the adoption of ML also exposes CPS to the adversarial ML attacks documented in the literature. Specifically, the increased Internet connectivity in CPS has resulted in a surge in the volume of data generation and communication frequency among devices, thereby expanding the attack surface and attack opportunities for ML adversaries. Among the various adversarial ML attacks, evasion attacks are among the best known. Therefore, this survey focuses on summarizing the latest research on evasion attack and defense techniques to understand state-of-the-art ML model security in CPS. To assess attack effectiveness, this survey proposes an attack taxonomy by introducing quantitative measures such as perturbation level and the number of modified features. Similarly, a defense taxonomy is introduced based on four perspectives covering defensive techniques from models’ inputs to their outputs. Furthermore, the survey identifies gaps and promising directions that researchers and practitioners can explore to address the challenges and threats posed by evasion attacks, and it lays the groundwork for understanding and mitigating these attacks in CPS.
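As an example of the evasion attacks and the "perturbation level" measure discussed above, here is a minimal fast gradient sign method (FGSM) sketch against a toy logistic-regression model. The weights and data are synthetic, and FGSM is used only as one representative attack, not as a summary of the survey's taxonomy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear classifier: p(y=1|x) = sigmoid(w.x + b); trained weights assumed given.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def fgsm(x: np.ndarray, y: int, epsilon: float) -> np.ndarray:
    """Fast Gradient Sign Method: x_adv = x + epsilon * sign(dL/dx), i.e. perturb x
    by epsilon (the perturbation level) in the direction that increases the loss.
    For the logistic loss, dL/dx = (p - y) * w."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

x = rng.normal(size=3)
y = int(sigmoid(w @ x + b) > 0.5)            # treat the clean prediction as the true label
x_adv = fgsm(x, y, epsilon=0.6)
print("clean prediction :", sigmoid(w @ x + b).round(3))
print("adversarial pred.:", sigmoid(w @ x_adv + b).round(3))   # pushed toward the wrong class
```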
{"title":"Evasion Attack and Defense on Machine Learning Models in Cyber-Physical Systems: A Survey","authors":"Shunyao Wang;Ryan K. L. Ko;Guangdong Bai;Naipeng Dong;Taejun Choi;Yanjun Zhang","doi":"10.1109/COMST.2023.3344808","DOIUrl":"https://doi.org/10.1109/COMST.2023.3344808","url":null,"abstract":"Cyber-physical systems (CPS) are increasingly relying on machine learning (ML) techniques to reduce labor costs and improve efficiency. However, the adoption of ML also exposes CPS to potential adversarial ML attacks witnessed in the literature. Specifically, the increased Internet connectivity in CPS has resulted in a surge in the volume of data generation and communication frequency among devices, thereby expanding the attack surface and attack opportunities for ML adversaries. Among various adversarial ML attacks, evasion attacks are one of the most well-known ones. Therefore, this survey focuses on summarizing the latest research on evasion attack and defense techniques, to understand state-of-the-art ML model security in CPS. To assess the attack effectiveness, this survey proposes an attack taxonomy by introducing quantitative measures such as perturbation level and the number of modified features. Similarly, a defense taxonomy is introduced based on four perspectives demonstrating the defensive techniques from models’ inputs to their outputs. Furthermore, the survey identifies gaps and promising directions that researchers and practitioners can explore to address potential challenges and threats caused by evasion attacks and lays the groundwork for understanding and mitigating the attacks in CPS.","PeriodicalId":34,"journal":{"name":"Crystal Growth & Design","volume":"26 2","pages":"930-966"},"PeriodicalIF":35.6,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141084807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine learning is widely used to solve networking challenges, ranging from traffic classification and anomaly detection to network configuration. However, machine learning also requires significant processing and often increases the load on both networks and servers. The introduction of in-network computing, enabled by programmable network devices, has made it possible to run applications within the network, providing higher throughput and lower latency. Soon after, in-network machine learning solutions started to emerge, enabling machine learning functionality within the network itself. This survey introduces the concept of in-network machine learning and provides a comprehensive taxonomy. It provides an introduction to the technology and explains the different types of machine learning solutions built upon programmable network devices. It explores the different types of machine learning models implemented within the network, and discusses related challenges and solutions. In-network machine learning can significantly benefit cloud computing and next-generation networks, and this survey concludes with a discussion of future trends.
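One recurring pattern in in-network ML is compiling a trained model, often a decision tree, into match-action table rules that a programmable switch can apply per packet. The Python sketch below emulates that idea with a hand-written two-feature tree; the features, thresholds, and labels are hypothetical, and no specific switch target or P4 program is implied.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """A range-match rule of the kind installable in a match-action table:
    match a feature range, then 'act' by assigning a class label."""
    pkt_len_range: tuple      # (lo, hi) packet length in bytes
    iat_us_range: tuple       # (lo, hi) inter-arrival time in microseconds
    label: str

# Hand-written rules equivalent to a tiny depth-2 decision tree
# (split on packet length at 200 B, then on inter-arrival time at 1000 us).
rules = [
    Rule((0, 200),     (0, 1_000),      "voice"),
    Rule((0, 200),     (1_000, 10**9),  "background"),
    Rule((200, 10**9), (0, 1_000),      "video"),
    Rule((200, 10**9), (1_000, 10**9),  "bulk"),
]

def classify(pkt_len: int, iat_us: int) -> str:
    """Emulate the per-packet table lookup a switch pipeline would perform."""
    for r in rules:
        if (r.pkt_len_range[0] <= pkt_len < r.pkt_len_range[1]
                and r.iat_us_range[0] <= iat_us < r.iat_us_range[1]):
            return r.label
    return "default"

print(classify(pkt_len=120, iat_us=300))       # voice
print(classify(pkt_len=1400, iat_us=50_000))   # bulk
```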
{"title":"In-Network Machine Learning Using Programmable Network Devices: A Survey","authors":"Changgang Zheng;Xinpeng Hong;Damu Ding;Shay Vargaftik;Yaniv Ben-Itzhak;Noa Zilberman","doi":"10.1109/COMST.2023.3344351","DOIUrl":"https://doi.org/10.1109/COMST.2023.3344351","url":null,"abstract":"Machine learning is widely used to solve networking challenges, ranging from traffic classification and anomaly detection to network configuration. However, machine learning also requires significant processing and often increases the load on both networks and servers. The introduction of in-network computing, enabled by programmable network devices, has allowed to run applications within the network, providing higher throughput and lower latency. Soon after, in-network machine learning solutions started to emerge, enabling machine learning functionality within the network itself. This survey introduces the concept of in-network machine learning and provides a comprehensive taxonomy. The survey provides an introduction to the technology and explains the different types of machine learning solutions built upon programmable network devices. It explores the different types of machine learning models implemented within the network, and discusses related challenges and solutions. In-network machine learning can significantly benefit cloud computing and next-generation networks, and this survey concludes with a discussion of future trends.","PeriodicalId":34,"journal":{"name":"Crystal Growth & Design","volume":"26 2","pages":"1171-1200"},"PeriodicalIF":35.6,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141084803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-19 | DOI: 10.1109/COMST.2023.3344167
Shikhar Verma;Tiago Koketsu Rodrigues;Yuichi Kawamoto;Mostafa M. Fouda;Nei Kato
Recent advancements in wireless local area network (WLAN) technology include IEEE 802.11be and 802.11ay, often known as Wi-Fi 7 and WiGig, respectively. The goal of these developments is to provide Extremely High Throughput (EHT) and low latency to meet the demands of future applications such as 8K video, augmented and virtual reality, the Internet of Things, telesurgery, and other developing technologies. IEEE 802.11be includes new features such as 320 MHz bandwidth, multi-link operation, Multi-user Multi-Input Multi-Output, orthogonal frequency-division multiple access, and Multiple-Access Point (multi-AP) coordination (MAP-Co) to achieve EHT. With the increase in the number of overlapping APs and inter-AP interference, researchers have focused on studying MAP-Co approaches for coordinated transmission in IEEE 802.11be, making MAP-Co a key feature of future WLANs. Moreover, similar issues may arise in EHF-band WLANs, particularly for standards beyond IEEE 802.11ay. This has prompted researchers to investigate the implementation of MAP-Co over future 802.11ay WLANs. Thus, in this article, we provide a comprehensive review of the state-of-the-art MAP-Co features and their shortcomings concerning emerging WLANs. Finally, we discuss several novel future directions and open challenges for MAP-Co.
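To give a feel for what MAP-Co coordination can look like, the sketch below implements a toy greedy coordinated-scheduling rule: a controller admits APs into the same transmission slot only if their mutual interference stays below a threshold. The interference matrix, traffic demands, and threshold are invented; real MAP-Co schemes (coordinated OFDMA, coordinated spatial reuse, coordinated beamforming) are considerably richer.

```python
import numpy as np

rng = np.random.default_rng(3)

n_aps = 4
# Hypothetical pairwise interference (dB above noise) that AP i causes at AP j's scheduled station.
interference_db = rng.uniform(0, 20, size=(n_aps, n_aps))
np.fill_diagonal(interference_db, 0.0)
demand = rng.uniform(1, 10, size=n_aps)       # per-AP queued traffic (arbitrary units)
threshold_db = 8.0                            # max tolerable interference at any scheduled station

def coordinated_schedule():
    """Greedy coordinated scheduling: admit APs in order of demand, keeping the
    admitted set mutually compatible (all cross-interference below the threshold)."""
    scheduled = []
    for ap in np.argsort(-demand):            # highest demand first
        ok = all(interference_db[ap, other] < threshold_db and
                 interference_db[other, ap] < threshold_db for other in scheduled)
        if ok:
            scheduled.append(int(ap))
    return scheduled

print("APs transmitting concurrently this slot:", coordinated_schedule())
```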
{"title":"A Survey on Multi-AP Coordination Approaches Over Emerging WLANs: Future Directions and Open Challenges","authors":"Shikhar Verma;Tiago Koketsu Rodrigues;Yuichi Kawamoto;Mostafa M. Fouda;Nei Kato","doi":"10.1109/COMST.2023.3344167","DOIUrl":"10.1109/COMST.2023.3344167","url":null,"abstract":"Recent advancements in wireless local area network (WLAN) technology include IEEE 802.11be and 802.11ay, often known as Wi-Fi 7 and WiGig, respectively. The goal of these developments is to provide Extremely High Throughput (EHT) and low latency to meet the demands of future applications like as 8K videos, augmented and virtual reality, the Internet of Things, telesurgery, and other developing technologies. IEEE 802.11be includes new features such as 320 MHz bandwidth, multi-link operation, Multi-user Multi-Input Multi-Output, orthogonal frequency-division multiple access, and Multiple-Access Point (multi-AP) coordination (MAP-Co) to achieve EHT. With the increase in the number of overlapping APs and inter-AP interference, researchers have focused on studying MAP-Co approaches for coordinated transmission in IEEE 802.11be, making MAP-Co a key feature of future WLANs. Moreover, similar issues may arise in EHF bands WLAN, particularly for standards beyond IEEE 802.11ay. This has prompted researchers to investigate the implementation of MAP-Co over future 802.11ay WLANs. Thus, in this article, we provide a comprehensive review of the state-of-the-art MAP-Co features and their shortcomings concerning emerging WLAN. Finally, we discuss several novel future directions and open challenges for MAP-Co.","PeriodicalId":34,"journal":{"name":"Crystal Growth & Design","volume":"26 2","pages":"858-889"},"PeriodicalIF":35.6,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139370646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-15 | DOI: 10.1109/COMST.2023.3340099
Hao Zhou;Melike Erol-Kantarci;Yuanwei Liu;H. Vincent Poor
Reconfigurable intelligent surfaces (RISs) have received considerable attention as a key enabler for envisioned 6G networks, for the purpose of improving the network capacity, coverage, efficiency, and security with low energy consumption and low hardware cost. However, integrating RISs into the existing infrastructure greatly increases the network management complexity, especially for controlling a significant number of RIS elements. To realize the full potential of RISs, efficient optimization approaches are of great importance. This work provides a comprehensive survey of optimization techniques for RIS-aided wireless communications, including model-based, heuristic, and machine learning (ML) algorithms. In particular, we first summarize the problem formulations in the literature with diverse objectives and constraints, e.g., sum-rate maximization, power minimization, and imperfect channel state information constraints. Then, we introduce model-based algorithms that have been used in the literature, such as alternating optimization, the majorization-minimization method, and successive convex approximation. Next, heuristic optimization is discussed, which applies heuristic rules for obtaining low-complexity solutions. Moreover, we present state-of-the-art ML algorithms and applications towards RISs, i.e., supervised and unsupervised learning, reinforcement learning, federated learning, graph learning, transfer learning, and hierarchical learning-based approaches. Model-based, heuristic, and ML approaches are compared in terms of stability, robustness, optimality, and so on, providing a systematic understanding of these techniques. Finally, we highlight RIS-aided applications towards 6G networks and identify future challenges.
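As a worked example of the simplest model-based result in this area, consider a single-antenna link assisted by an N-element RIS with no direct path: the received power |sum_n g_n e^{j theta_n} h_n|^2 is maximized by co-phasing every cascaded path, theta_n = -(arg h_n + arg g_n). The numpy sketch below checks this against random phases for randomly drawn channels; the channel model and N are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 64                                                             # number of RIS elements
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)    # Tx -> RIS channel
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)    # RIS -> Rx channel

def received_power(theta: np.ndarray) -> float:
    """|sum_n g_n * e^{j theta_n} * h_n|^2 for unit transmit power (no direct link)."""
    return float(np.abs(np.sum(g * np.exp(1j * theta) * h)) ** 2)

theta_opt = -(np.angle(h) + np.angle(g))            # co-phase every cascaded path
theta_rand = rng.uniform(0, 2 * np.pi, size=N)

print(f"random phases : {received_power(theta_rand):8.2f}")
print(f"aligned phases: {received_power(theta_opt):8.2f}")          # equals (sum_n |h_n||g_n|)^2
print(f"upper bound   : {np.sum(np.abs(h) * np.abs(g)) ** 2:8.2f}")
```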
{"title":"A Survey on Model-Based, Heuristic, and Machine Learning Optimization Approaches in RIS-Aided Wireless Networks","authors":"Hao Zhou;Melike Erol-Kantarci;Yuanwei Liu;H. Vincent Poor","doi":"10.1109/COMST.2023.3340099","DOIUrl":"https://doi.org/10.1109/COMST.2023.3340099","url":null,"abstract":"Reconfigurable intelligent surfaces (RISs) have received considerable attention as a key enabler for envisioned 6G networks, for the purpose of improving the network capacity, coverage, efficiency, and security with low energy consumption and low hardware cost. However, integrating RISs into the existing infrastructure greatly increases the network management complexity, especially for controlling a significant number of RIS elements. To realize the full potential of RISs, efficient optimization approaches are of great importance. This work provides a comprehensive survey of optimization techniques for RIS-aided wireless communications, including model-based, heuristic, and machine learning (ML) algorithms. In particular, we first summarize the problem formulations in the literature with diverse objectives and constraints, e.g., sumrate maximization, power minimization, and imperfect channel state information constraints. Then, we introduce model-based algorithms that have been used in the literature, such as alternating optimization, the majorization-minimization method, and successive convex approximation. Next, heuristic optimization is discussed, which applies heuristic rules for obtaining lowcomplexity solutions. Moreover, we present state-of-the-art ML algorithms and applications towards RISs, i.e., supervised and unsupervised learning, reinforcement learning, federated learning, graph learning, transfer learning, and hierarchical learning-based approaches. Model-based, heuristic, and ML approaches are compared in terms of stability, robustness, optimality and so on, providing a systematic understanding of these techniques. Finally, we highlight RIS-aided applications towards 6G networks and identify future challenges.","PeriodicalId":34,"journal":{"name":"Crystal Growth & Design","volume":"26 2","pages":"781-823"},"PeriodicalIF":35.6,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141084862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploiting ultra-wide bandwidths is a promising approach to achieve the terabits per second (Tbps) data rates required to unlock emerging mobile applications like mobile extended reality and holographic telepresence. However, conventional digital systems are unable to exploit such bandwidths efficiently. In particular, the power consumption of ultra-fast, high-precision digital-to-analogue and analogue-to-digital converters (DACs/ADCs) for ultra-wide bandwidths becomes impractical. At the same time, achieving ultra-fast digital signal processing becomes extremely challenging in terms of power consumption and processing latency due to the complexity of state-of-the-art processing algorithms (e.g., “soft” detection/decoding) and the fact that the increased sampling rates challenge the speed capabilities of modern digital processors. To overcome these bottlenecks, there is a need for signal processing solutions that can, ideally, avoid DACs/ADCs while minimizing both the power consumption and processing latency. One potential approach in this direction is to design digital systems that do not require DACs/ADCs and perform all the corresponding processing directly in the analogue domain. Despite existing attempts to develop individual components of the transceiver chain in the analogue domain, as we discuss in detail in this work, the feasibility of complete analogue processing in ultra-fast wireless systems is still an open research topic. In addition, existing analogue-based approaches have inferior spectrum utilization compared to digital approaches, partly due to their inability to exploit the recent advances in digital systems such as “soft” detection/decoding. In this context, we also discuss the challenges related to performing “soft” detection/decoding directly in the analogue domain, as has been recently proposed by the DigiLogue processing concept, and we show with a simple example that analogue-based “soft” detection/decoding is feasible and can achieve the same error performance as digital approaches with more than $37\times$