Pub Date: 2023-12-21 | DOI: 10.1109/COMST.2023.3345746
Domenico Ficara;Rosario G. Garroppo;Jerome Henry
The proliferation of Wi-Fi devices has led to rising privacy concerns about MAC address-based systems used for people tracking and localization across various applications, such as smart cities, intelligent transportation systems, and marketing. These systems have highlighted the necessity for mobile device manufacturers to implement Randomized and Changing MAC address (RCM) techniques as a countermeasure against device identification. In response to the challenges posed by diverse RCM implementations, the IEEE has taken steps to standardize RCM operations through the 802.11aq Task Group (TG). However, while RCM addresses some concerns, it can disrupt services spanning both Layer 2 and the upper layers that were originally designed assuming static MAC addresses. To address these challenges, the IEEE established the 802.11bh TG, which focuses on defining new device identification methods, particularly for Layer 2 services that require pre-association identification. Simultaneously, the IETF launched the MAC Address Device Identification for Network and Application Services (MADINAS) Working Group to investigate the repercussions of RCM on upper-layer services, including the Dynamic Host Configuration Protocol (DHCP). Concurrently, derandomization techniques have emerged to counteract RCM defense mechanisms. The study of these techniques suggests the need for a broader privacy-enhancement framework for WLANs that goes beyond simple MAC address randomization. These findings prompted the inception of the 802.11bi TG, which aims to compile an exhaustive list of potential privacy vulnerabilities and requirements for a more private IEEE 802.11 standard. In this context, this tutorial provides insight into the motivations behind RCM, its implementation, and its evolution over the years. It elucidates the influence of RCM on network processes and services. Furthermore, the tutorial delves into recent progress within 802.11bh, 802.11bi, and MADINAS, offering a thorough analysis of the initial work undertaken by these groups along with an overview of the relevant research challenges. The tutorial's objective is to inspire the research community to explore innovative approaches and solutions that contribute to the ongoing efforts to enhance WLAN privacy through standardization initiatives.
{"title":"A Tutorial on Privacy, RCM and Its Implications in WLAN","authors":"Domenico Ficara;Rosario G. Garroppo;Jerome Henry","doi":"10.1109/COMST.2023.3345746","DOIUrl":"https://doi.org/10.1109/COMST.2023.3345746","url":null,"abstract":"The proliferation of Wi-Fi devices has led to the rise of privacy concerns related to MAC Address-based systems used for people tracking and localization across various applications, such as smart cities, intelligent transportation systems, and marketing. These systems have highlighted the necessity for mobile device manufacturers to implement Randomized And Changing MAC address (RCM) techniques as a countermeasure for device identification. In response to the challenges posed by diverse RCM implementations, the IEEE has taken steps to standardize RCM operations through the 802.11aq Task Group (TG). However, while RCM implementation addresses some concerns, it can disrupt services that span both Layer 2 and upper-layers, which were originally designed assuming static MAC addresses. To address these challenges, the IEEE has established the 802.11bh TG, focusing on defining new device identification methods, particularly for Layer 2 services that require pre-association identification. Simultaneously, the IETF launched the MAC Address Device Identification for Network and Application Services (MADINAS) Working Group to investigate the repercussions of RCM on upper-layer services, including the Dynamic Host Configuration Protocol (DHCP). Concurrently, derandomization techniques have emerged to counteract RCM defense mechanisms. The exploration of these techniques has suggested the need for a broader privacy enhancement framework for WLANs that goes beyond simple MAC address randomization. These findings have prompted the inception of the 802.11bi TG, which aims to compile an exhaustive list of potential privacy vulnerabilities and prerequisites for a more private IEEE 802.11 standard. In this context, this tutorial aims to provide insights into the motivations behind RCM, its implementation, and its evolution over the years. It elucidates the influence of RCM on network processes and services. Furthermore, the tutorial delves into the recent progress made within the domains of 802.11bh, 802.11bi, and MADINAS. It offers a thorough analysis of the initial work undertaken by these groups, along with an overview of the relevant research challenges. The tutorial objective is to inspire the research community to explore innovative approaches and solutions that contribute to the ongoing efforts to enhance WLAN privacy through standardization initiatives.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"26 2","pages":"1003-1040"},"PeriodicalIF":35.6,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10368019","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141084808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-20 | DOI: 10.1109/COMST.2023.3344808
Shunyao Wang;Ryan K. L. Ko;Guangdong Bai;Naipeng Dong;Taejun Choi;Yanjun Zhang
Cyber-physical systems (CPS) increasingly rely on machine learning (ML) techniques to reduce labor costs and improve efficiency. However, the adoption of ML also exposes CPS to the adversarial ML attacks documented in the literature. Specifically, the increased Internet connectivity in CPS has led to a surge in the volume of generated data and in the communication frequency among devices, thereby expanding the attack surface and the opportunities available to ML adversaries. Among the various adversarial ML attacks, evasion attacks are among the best known. This survey therefore focuses on summarizing the latest research on evasion attack and defense techniques in order to understand the state of the art in ML model security for CPS. To assess attack effectiveness, the survey proposes an attack taxonomy built on quantitative measures such as the perturbation level and the number of modified features. Similarly, a defense taxonomy is introduced around four perspectives that cover defensive techniques from a model's inputs to its outputs. Furthermore, the survey identifies gaps and promising directions that researchers and practitioners can explore to address the challenges and threats posed by evasion attacks, and it lays the groundwork for understanding and mitigating such attacks in CPS.
{"title":"Evasion Attack and Defense on Machine Learning Models in Cyber-Physical Systems: A Survey","authors":"Shunyao Wang;Ryan K. L. Ko;Guangdong Bai;Naipeng Dong;Taejun Choi;Yanjun Zhang","doi":"10.1109/COMST.2023.3344808","DOIUrl":"https://doi.org/10.1109/COMST.2023.3344808","url":null,"abstract":"Cyber-physical systems (CPS) are increasingly relying on machine learning (ML) techniques to reduce labor costs and improve efficiency. However, the adoption of ML also exposes CPS to potential adversarial ML attacks witnessed in the literature. Specifically, the increased Internet connectivity in CPS has resulted in a surge in the volume of data generation and communication frequency among devices, thereby expanding the attack surface and attack opportunities for ML adversaries. Among various adversarial ML attacks, evasion attacks are one of the most well-known ones. Therefore, this survey focuses on summarizing the latest research on evasion attack and defense techniques, to understand state-of-the-art ML model security in CPS. To assess the attack effectiveness, this survey proposes an attack taxonomy by introducing quantitative measures such as perturbation level and the number of modified features. Similarly, a defense taxonomy is introduced based on four perspectives demonstrating the defensive techniques from models’ inputs to their outputs. Furthermore, the survey identifies gaps and promising directions that researchers and practitioners can explore to address potential challenges and threats caused by evasion attacks and lays the groundwork for understanding and mitigating the attacks in CPS.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"26 2","pages":"930-966"},"PeriodicalIF":35.6,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141084807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-19 | DOI: 10.1109/COMST.2023.3344351
Changgang Zheng;Xinpeng Hong;Damu Ding;Shay Vargaftik;Yaniv Ben-Itzhak;Noa Zilberman
Machine learning is widely used to solve networking challenges, ranging from traffic classification and anomaly detection to network configuration. However, machine learning also requires significant processing and often increases the load on both networks and servers. The introduction of in-network computing, enabled by programmable network devices, has made it possible to run applications within the network, providing higher throughput and lower latency. Soon after, in-network machine learning solutions started to emerge, enabling machine learning functionality within the network itself. This survey introduces the concept of in-network machine learning and provides a comprehensive taxonomy. It gives an introduction to the technology and explains the different types of machine learning solutions built upon programmable network devices. It explores the different types of machine learning models implemented within the network and discusses the related challenges and solutions. In-network machine learning can significantly benefit cloud computing and next-generation networks, and the survey concludes with a discussion of future trends.
{"title":"In-Network Machine Learning Using Programmable Network Devices: A Survey","authors":"Changgang Zheng;Xinpeng Hong;Damu Ding;Shay Vargaftik;Yaniv Ben-Itzhak;Noa Zilberman","doi":"10.1109/COMST.2023.3344351","DOIUrl":"https://doi.org/10.1109/COMST.2023.3344351","url":null,"abstract":"Machine learning is widely used to solve networking challenges, ranging from traffic classification and anomaly detection to network configuration. However, machine learning also requires significant processing and often increases the load on both networks and servers. The introduction of in-network computing, enabled by programmable network devices, has allowed to run applications within the network, providing higher throughput and lower latency. Soon after, in-network machine learning solutions started to emerge, enabling machine learning functionality within the network itself. This survey introduces the concept of in-network machine learning and provides a comprehensive taxonomy. The survey provides an introduction to the technology and explains the different types of machine learning solutions built upon programmable network devices. It explores the different types of machine learning models implemented within the network, and discusses related challenges and solutions. In-network machine learning can significantly benefit cloud computing and next-generation networks, and this survey concludes with a discussion of future trends.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"26 2","pages":"1171-1200"},"PeriodicalIF":35.6,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141084803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-19 | DOI: 10.1109/COMST.2023.3344167
Shikhar Verma;Tiago Koketsu Rodrigues;Yuichi Kawamoto;Mostafa M. Fouda;Nei Kato
Recent advancements in wireless local area network (WLAN) technology include IEEE 802.11be and 802.11ay, often known as Wi-Fi 7 and WiGig, respectively. The goal of these developments is to provide Extremely High Throughput (EHT) and low latency to meet the demands of future applications such as 8K video, augmented and virtual reality, the Internet of Things, telesurgery, and other emerging technologies. IEEE 802.11be includes new features such as 320 MHz bandwidth, multi-link operation, multi-user multiple-input multiple-output, orthogonal frequency-division multiple access, and multiple-access point (multi-AP) coordination (MAP-Co) to achieve EHT. With the growing number of overlapping APs and the resulting inter-AP interference, researchers have focused on MAP-Co approaches for coordinated transmission in IEEE 802.11be, making MAP-Co a key feature of future WLANs. Similar issues may also arise in EHF-band WLANs, particularly in standards beyond IEEE 802.11ay, which has prompted researchers to investigate MAP-Co over future 802.11ay WLANs. In this article, we therefore provide a comprehensive review of state-of-the-art MAP-Co features and their shortcomings with respect to emerging WLANs. Finally, we discuss several novel future directions and open challenges for MAP-Co.
{"title":"A Survey on Multi-AP Coordination Approaches Over Emerging WLANs: Future Directions and Open Challenges","authors":"Shikhar Verma;Tiago Koketsu Rodrigues;Yuichi Kawamoto;Mostafa M. Fouda;Nei Kato","doi":"10.1109/COMST.2023.3344167","DOIUrl":"10.1109/COMST.2023.3344167","url":null,"abstract":"Recent advancements in wireless local area network (WLAN) technology include IEEE 802.11be and 802.11ay, often known as Wi-Fi 7 and WiGig, respectively. The goal of these developments is to provide Extremely High Throughput (EHT) and low latency to meet the demands of future applications like as 8K videos, augmented and virtual reality, the Internet of Things, telesurgery, and other developing technologies. IEEE 802.11be includes new features such as 320 MHz bandwidth, multi-link operation, Multi-user Multi-Input Multi-Output, orthogonal frequency-division multiple access, and Multiple-Access Point (multi-AP) coordination (MAP-Co) to achieve EHT. With the increase in the number of overlapping APs and inter-AP interference, researchers have focused on studying MAP-Co approaches for coordinated transmission in IEEE 802.11be, making MAP-Co a key feature of future WLANs. Moreover, similar issues may arise in EHF bands WLAN, particularly for standards beyond IEEE 802.11ay. This has prompted researchers to investigate the implementation of MAP-Co over future 802.11ay WLANs. Thus, in this article, we provide a comprehensive review of the state-of-the-art MAP-Co features and their shortcomings concerning emerging WLAN. Finally, we discuss several novel future directions and open challenges for MAP-Co.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"26 2","pages":"858-889"},"PeriodicalIF":35.6,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139370646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-15 | DOI: 10.1109/COMST.2023.3340099
Hao Zhou;Melike Erol-Kantarci;Yuanwei Liu;H. Vincent Poor
Reconfigurable intelligent surfaces (RISs) have received considerable attention as a key enabler for envisioned 6G networks, with the aim of improving network capacity, coverage, efficiency, and security at low energy consumption and low hardware cost. However, integrating RISs into the existing infrastructure greatly increases network management complexity, especially when controlling a large number of RIS elements. To realize the full potential of RISs, efficient optimization approaches are of great importance. This work provides a comprehensive survey of optimization techniques for RIS-aided wireless communications, including model-based, heuristic, and machine learning (ML) algorithms. In particular, we first summarize the problem formulations in the literature with their diverse objectives and constraints, e.g., sum-rate maximization, power minimization, and imperfect channel state information constraints. Then, we introduce model-based algorithms that have been used in the literature, such as alternating optimization, the majorization-minimization method, and successive convex approximation. Next, heuristic optimization is discussed, which applies heuristic rules to obtain low-complexity solutions. Moreover, we present state-of-the-art ML algorithms and their applications to RISs, i.e., supervised and unsupervised learning, reinforcement learning, federated learning, graph learning, transfer learning, and hierarchical learning-based approaches. Model-based, heuristic, and ML approaches are compared in terms of stability, robustness, optimality, and so on, providing a systematic understanding of these techniques. Finally, we highlight RIS-aided applications toward 6G networks and identify future challenges.
{"title":"A Survey on Model-Based, Heuristic, and Machine Learning Optimization Approaches in RIS-Aided Wireless Networks","authors":"Hao Zhou;Melike Erol-Kantarci;Yuanwei Liu;H. Vincent Poor","doi":"10.1109/COMST.2023.3340099","DOIUrl":"https://doi.org/10.1109/COMST.2023.3340099","url":null,"abstract":"Reconfigurable intelligent surfaces (RISs) have received considerable attention as a key enabler for envisioned 6G networks, for the purpose of improving the network capacity, coverage, efficiency, and security with low energy consumption and low hardware cost. However, integrating RISs into the existing infrastructure greatly increases the network management complexity, especially for controlling a significant number of RIS elements. To realize the full potential of RISs, efficient optimization approaches are of great importance. This work provides a comprehensive survey of optimization techniques for RIS-aided wireless communications, including model-based, heuristic, and machine learning (ML) algorithms. In particular, we first summarize the problem formulations in the literature with diverse objectives and constraints, e.g., sumrate maximization, power minimization, and imperfect channel state information constraints. Then, we introduce model-based algorithms that have been used in the literature, such as alternating optimization, the majorization-minimization method, and successive convex approximation. Next, heuristic optimization is discussed, which applies heuristic rules for obtaining lowcomplexity solutions. Moreover, we present state-of-the-art ML algorithms and applications towards RISs, i.e., supervised and unsupervised learning, reinforcement learning, federated learning, graph learning, transfer learning, and hierarchical learning-based approaches. Model-based, heuristic, and ML approaches are compared in terms of stability, robustness, optimality and so on, providing a systematic understanding of these techniques. Finally, we highlight RIS-aided applications towards 6G networks and identify future challenges.","PeriodicalId":55029,"journal":{"name":"IEEE Communications Surveys and Tutorials","volume":"26 2","pages":"781-823"},"PeriodicalIF":35.6,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141084862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploiting ultra-wide bandwidths is a promising approach to achieving the terabits-per-second (Tbps) data rates required to unlock emerging mobile applications such as mobile extended reality and holographic telepresence. However, conventional digital systems are unable to exploit such bandwidths efficiently. In particular, the power consumption of ultra-fast, high-precision digital-to-analogue and analogue-to-digital converters (DACs/ADCs) for ultra-wide bandwidths becomes impractical. At the same time, ultra-fast digital signal processing becomes extremely challenging in terms of power consumption and processing latency, owing to the complexity of state-of-the-art processing algorithms (e.g., “soft” detection/decoding) and the fact that the increased sampling rates challenge the speed capabilities of modern digital processors. To overcome these bottlenecks, there is a need for signal processing solutions that can, ideally, avoid DACs/ADCs while minimizing both power consumption and processing latency. One potential approach in this direction is to design systems that do not require DACs/ADCs and perform all the corresponding processing directly in the analogue domain. Despite existing attempts to develop individual components of the transceiver chain in the analogue domain, as we discuss in detail in this work, the feasibility of complete analogue processing in ultra-fast wireless systems remains an open research topic. In addition, existing analogue-based approaches have inferior spectrum utilization compared with digital approaches, partly because they cannot exploit recent advances in digital systems such as “soft” detection/decoding. In this context, we also discuss the challenges of performing “soft” detection/decoding directly in the analogue domain, as recently proposed by the DigiLogue processing concept, and we show with a simple example that analogue-based “soft” detection/decoding is feasible and can achieve the same error performance as digital approaches with more than $37\times$
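For readers unfamiliar with the “soft” detection/decoding mentioned above, the sketch below shows the conventional digital-domain version for the simplest case: computing log-likelihood ratios (LLRs) for BPSK over an AWGN channel, where the LLR is 2y/sigma^2. The point of the analogue-processing line of work is to obtain this kind of soft information without DACs/ADCs; the sketch only illustrates what a soft detector outputs, not how to realize it in the analogue domain.

```python
import numpy as np

def bpsk_soft_llr(y, noise_var):
    """Soft detection for BPSK over AWGN: the log-likelihood ratio
    log p(y|x=+1)/p(y|x=-1) = 2y/noise_var. A 'soft' decoder consumes
    these LLRs instead of hard 0/1 decisions."""
    return 2.0 * y / noise_var

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 8)
x = 1.0 - 2.0 * bits                      # map bit 0 -> +1, bit 1 -> -1
noise_var = 0.5
y = x + rng.normal(0.0, np.sqrt(noise_var), x.shape)

llrs = bpsk_soft_llr(y, noise_var)
hard = (llrs < 0).astype(int)             # sign gives the hard decision,
print(bits, hard, np.round(llrs, 2))      # magnitude gives the confidence
```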