
Latest Publications in IEEE Communications Surveys and Tutorials

A Tutorial on Privacy, RCM and Its Implications in WLAN
IF 35.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-12-21 | DOI: 10.1109/COMST.2023.3345746
Domenico Ficara;Rosario G. Garroppo;Jerome Henry
The proliferation of Wi-Fi devices has led to the rise of privacy concerns related to MAC address-based systems used for people tracking and localization across various applications, such as smart cities, intelligent transportation systems, and marketing. These systems have highlighted the necessity for mobile device manufacturers to implement Randomized And Changing MAC address (RCM) techniques as a countermeasure against device identification. In response to the challenges posed by diverse RCM implementations, the IEEE has taken steps to standardize RCM operations through the 802.11aq Task Group (TG). However, while RCM implementation addresses some concerns, it can disrupt services that span both Layer 2 and the upper layers, which were originally designed assuming static MAC addresses. To address these challenges, the IEEE has established the 802.11bh TG, focusing on defining new device identification methods, particularly for Layer 2 services that require pre-association identification. Simultaneously, the IETF launched the MAC Address Device Identification for Network and Application Services (MADINAS) Working Group to investigate the repercussions of RCM on upper-layer services, including the Dynamic Host Configuration Protocol (DHCP). Concurrently, derandomization techniques have emerged to counteract RCM defense mechanisms. The exploration of these techniques has suggested the need for a broader privacy enhancement framework for WLANs that goes beyond simple MAC address randomization. These findings have prompted the inception of the 802.11bi TG, which aims to compile an exhaustive list of potential privacy vulnerabilities and prerequisites for a more private IEEE 802.11 standard. In this context, this tutorial aims to provide insights into the motivations behind RCM, its implementation, and its evolution over the years. It elucidates the influence of RCM on network processes and services. Furthermore, the tutorial delves into the recent progress made within the domains of 802.11bh, 802.11bi, and MADINAS. It offers a thorough analysis of the initial work undertaken by these groups, along with an overview of the relevant research challenges. The tutorial's objective is to inspire the research community to explore innovative approaches and solutions that contribute to the ongoing efforts to enhance WLAN privacy through standardization initiatives.
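To make the mechanism concrete, the sketch below shows the address-generation step at the heart of RCM: drawing a random unicast MAC address and marking it as locally administered (U/L bit set, I/G bit cleared), so it carries no vendor OUI that trackers could exploit. This is an illustrative sketch of typical OS behavior, not the exact procedure defined by the 802.11 task groups.

```python
import secrets

def random_rcm_mac() -> str:
    """Generate a random, locally administered, unicast MAC address.

    Illustrative sketch of the address-randomization step in RCM: the
    I/G bit (LSB of the first octet) is cleared so the address is
    unicast, and the U/L bit (second LSB) is set so the address is
    locally administered rather than derived from a vendor OUI.
    """
    octets = bytearray(secrets.token_bytes(6))
    octets[0] = (octets[0] & 0b11111100) | 0b00000010  # clear I/G, set U/L
    return ":".join(f"{b:02x}" for b in octets)

# A device applying RCM would draw a fresh address, e.g., per network
# or per association, rather than reusing its burned-in MAC.
print(random_rcm_mac())
```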
Citations: 0
Evasion Attack and Defense on Machine Learning Models in Cyber-Physical Systems: A Survey
IF 35.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-12-20 | DOI: 10.1109/COMST.2023.3344808
Shunyao Wang;Ryan K. L. Ko;Guangdong Bai;Naipeng Dong;Taejun Choi;Yanjun Zhang
Cyber-physical systems (CPS) are increasingly relying on machine learning (ML) techniques to reduce labor costs and improve efficiency. However, the adoption of ML also exposes CPS to the potential adversarial ML attacks witnessed in the literature. Specifically, the increased Internet connectivity in CPS has resulted in a surge in the volume of data generation and communication frequency among devices, thereby expanding the attack surface and attack opportunities for ML adversaries. Among the various adversarial ML attacks, evasion attacks are among the best known. Therefore, this survey focuses on summarizing the latest research on evasion attack and defense techniques to understand the state of the art in ML model security in CPS. To assess attack effectiveness, this survey proposes an attack taxonomy by introducing quantitative measures such as perturbation level and the number of modified features. Similarly, a defense taxonomy is introduced based on four perspectives covering defensive techniques from models' inputs to their outputs. Furthermore, the survey identifies gaps and promising directions that researchers and practitioners can explore to address potential challenges and threats caused by evasion attacks, and it lays the groundwork for understanding and mitigating such attacks in CPS.
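As a concrete instance of the evasion attacks surveyed, the sketch below implements the fast gradient sign method (FGSM), one classical evasion technique, against a toy logistic-regression classifier; the weights, sample, and perturbation budget are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Fast Gradient Sign Method: a classic evasion attack.

    Moves input x by eps in the direction that increases the model's
    loss, keeping the per-feature perturbation bounded by eps (the
    'perturbation level' used as a quantitative measure above).
    """
    p = sigmoid(w @ x + b)            # model confidence for class 1
    grad_x = (p - y_true) * w         # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad_x)  # L-infinity bounded adversarial step

# Toy demonstration with made-up weights and a sample of class 1.
w, b = np.array([2.0, -3.0, 1.0]), 0.0
x, y = np.array([0.8, -0.5, 0.4]), 1.0
x_adv = fgsm_perturb(x, w, b, y, eps=0.7)
print("clean score:", sigmoid(w @ x + b))      # high: classified as 1
print("adv score:  ", sigmoid(w @ x_adv + b))  # below 0.5: prediction flipped
```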
Citations: 0
In-Network Machine Learning Using Programmable Network Devices: A Survey
IF 35.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-12-19 | DOI: 10.1109/COMST.2023.3344351
Changgang Zheng;Xinpeng Hong;Damu Ding;Shay Vargaftik;Yaniv Ben-Itzhak;Noa Zilberman
Machine learning is widely used to solve networking challenges, ranging from traffic classification and anomaly detection to network configuration. However, machine learning also requires significant processing and often increases the load on both networks and servers. The introduction of in-network computing, enabled by programmable network devices, has made it possible to run applications within the network, providing higher throughput and lower latency. Soon after, in-network machine learning solutions started to emerge, enabling machine learning functionality within the network itself. This survey introduces the concept of in-network machine learning and provides a comprehensive taxonomy. The survey provides an introduction to the technology and explains the different types of machine learning solutions built upon programmable network devices. It explores the different types of machine learning models implemented within the network, and discusses related challenges and solutions. In-network machine learning can significantly benefit cloud computing and next-generation networks, and this survey concludes with a discussion of future trends.
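To illustrate the kind of mapping such solutions perform, the sketch below emulates, in plain Python, how a depth-limited decision tree over packet features can be compiled into prioritized range-match rules of the sort a programmable switch pipeline applies at line rate. The feature names, thresholds, and labels are hypothetical; real systems would emit entries for P4 match-action tables.

```python
# Offline step: compile a tiny decision tree on packet features into
# prioritized range-match rules, the representation a programmable
# switch can evaluate per packet. All values below are placeholders.

RULES = [
    # (priority, (pkt_len_min, pkt_len_max), (iat_us_min, iat_us_max), label)
    (10, (0, 128),    (0, 50),      "attack"),   # small, bursty packets
    (20, (0, 128),    (51, 10_000), "benign"),
    (30, (129, 1500), (0, 10_000),  "benign"),
]

def classify(pkt_len: int, iat_us: int) -> str:
    """Emulate the switch's match-action lookup: first matching rule wins."""
    for _prio, (lo_l, hi_l), (lo_i, hi_i), label in sorted(RULES):
        if lo_l <= pkt_len <= hi_l and lo_i <= iat_us <= hi_i:
            return label
    return "default"  # table miss falls through to the default action

print(classify(pkt_len=96, iat_us=20))   # -> attack
print(classify(pkt_len=512, iat_us=20))  # -> benign
```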
Citations: 0
A Survey on Multi-AP Coordination Approaches Over Emerging WLANs: Future Directions and Open Challenges
IF 35.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-12-19 | DOI: 10.1109/COMST.2023.3344167
Shikhar Verma;Tiago Koketsu Rodrigues;Yuichi Kawamoto;Mostafa M. Fouda;Nei Kato
Recent advancements in wireless local area network (WLAN) technology include IEEE 802.11be and 802.11ay, often known as Wi-Fi 7 and WiGig, respectively. The goal of these developments is to provide Extremely High Throughput (EHT) and low latency to meet the demands of future applications such as 8K video, augmented and virtual reality, the Internet of Things, telesurgery, and other emerging technologies. IEEE 802.11be includes new features such as 320 MHz bandwidth, multi-link operation, Multi-user Multi-Input Multi-Output, orthogonal frequency-division multiple access, and Multiple-Access Point (multi-AP) coordination (MAP-Co) to achieve EHT. With the increase in the number of overlapping APs and in inter-AP interference, researchers have focused on studying MAP-Co approaches for coordinated transmission in IEEE 802.11be, making MAP-Co a key feature of future WLANs. Moreover, similar issues may arise in EHF-band WLANs, particularly for standards beyond IEEE 802.11ay. This has prompted researchers to investigate the implementation of MAP-Co over future 802.11ay WLANs. Thus, in this article, we provide a comprehensive review of state-of-the-art MAP-Co features and their shortcomings with respect to emerging WLANs. Finally, we discuss several novel future directions and open challenges for MAP-Co.
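To give a feel for what coordination buys, here is a toy greedy scheduler in the spirit of MAP-Co coordinated spatial reuse: it admits overlapping APs into the same transmission slot only while pairwise cross-interference stays below a threshold. The gain matrix and threshold are invented, and the heuristic is a stand-in, not an 802.11be-specified algorithm.

```python
import numpy as np

def greedy_coordinated_set(gain_db: np.ndarray, max_interference_db: float):
    """Toy coordinated-spatial-reuse scheduler.

    Greedily admits APs (strongest serving link first) into a
    simultaneous-transmission set as long as the cross-AP interference
    between the candidate and every already-admitted AP stays below a
    threshold. gain_db[i, i] is AP i's link gain to its own client;
    off-diagonal entries are cross-AP interference gains.
    """
    order = np.argsort(-np.diag(gain_db))     # strongest serving links first
    admitted = []
    for ap in order:
        ok = all(gain_db[ap, other] < max_interference_db and
                 gain_db[other, ap] < max_interference_db
                 for other in admitted)
        if ok:
            admitted.append(int(ap))
    return admitted

rng = np.random.default_rng(0)
G = rng.uniform(-90, -40, size=(4, 4))            # invented interference gains
np.fill_diagonal(G, rng.uniform(-50, -30, size=4))  # invented serving-link gains
print(greedy_coordinated_set(G, max_interference_db=-70.0))
```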
Citations: 0
A Survey on Model-Based, Heuristic, and Machine Learning Optimization Approaches in RIS-Aided Wireless Networks
IF 35.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-12-15 | DOI: 10.1109/COMST.2023.3340099
Hao Zhou;Melike Erol-Kantarci;Yuanwei Liu;H. Vincent Poor
Reconfigurable intelligent surfaces (RISs) have received considerable attention as a key enabler for envisioned 6G networks, for the purpose of improving the network capacity, coverage, efficiency, and security with low energy consumption and low hardware cost. However, integrating RISs into the existing infrastructure greatly increases the network management complexity, especially for controlling a significant number of RIS elements. To realize the full potential of RISs, efficient optimization approaches are of great importance. This work provides a comprehensive survey of optimization techniques for RIS-aided wireless communications, including model-based, heuristic, and machine learning (ML) algorithms. In particular, we first summarize the problem formulations in the literature with diverse objectives and constraints, e.g., sum-rate maximization, power minimization, and imperfect channel state information constraints. Then, we introduce model-based algorithms that have been used in the literature, such as alternating optimization, the majorization-minimization method, and successive convex approximation. Next, heuristic optimization is discussed, which applies heuristic rules for obtaining low-complexity solutions. Moreover, we present state-of-the-art ML algorithms and applications towards RISs, i.e., supervised and unsupervised learning, reinforcement learning, federated learning, graph learning, transfer learning, and hierarchical learning-based approaches. Model-based, heuristic, and ML approaches are compared in terms of stability, robustness, optimality and so on, providing a systematic understanding of these techniques. Finally, we highlight RIS-aided applications towards 6G networks and identify future challenges.
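As a minimal model-based example of the formulations the survey summarizes, the single-user case admits a closed-form optimum: with direct channel h_d and cascaded per-element channels h_n g_n, setting the RIS phase shifts to theta_n = arg(h_d) - arg(h_n g_n) aligns every reflected path with the direct one and maximizes the receive SNR. The snippet below verifies this on randomly drawn channels (all realizations are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                    # number of RIS elements
h_d = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)            # direct link
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # BS -> RIS
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # RIS -> user

# Closed-form optimum for the single-user case: rotate each cascaded path
# h_n * g_n onto the phase of the direct path so all terms add coherently.
theta = np.angle(h_d) - np.angle(h * g)
optimized = h_d + np.sum(h * g * np.exp(1j * theta))

random_theta = rng.uniform(0, 2 * np.pi, size=N)
baseline = h_d + np.sum(h * g * np.exp(1j * random_theta))

print("optimized |channel|^2:", abs(optimized) ** 2)
print("random    |channel|^2:", abs(baseline) ** 2)
```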
Citations: 0
Toward Ultra-Power-Efficient, Tbps Wireless Systems via Analogue Processing: Existing Approaches, Challenges and Way Forward
IF 35.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-12-13 | DOI: 10.1109/COMST.2023.3342775
Mahmoud Mojarrad Kiasaraei;Konstantinos Nikitopoulos;Rahim Tafazolli
Exploiting ultra-wide bandwidths is a promising approach to achieve the terabits per second (Tbps) data rates required to unlock emerging mobile applications like mobile extended reality and holographic telepresence. However, conventional digital systems are unable to exploit such bandwidths efficiently. In particular, the power consumption of ultra-fast, high-precision digital-to-analogue and analogue-to-digital converters (DACs/ADCs) for ultra-wide bandwidths becomes impractical. At the same time, achieving ultra-fast digital signal processing becomes extremely challenging in terms of power consumption and processing latency due to the complexity of state-of-the-art processing algorithms (e.g., “soft” detection/decoding) and the fact that the increased sampling rates challenge the speed capabilities of modern digital processors. To overcome these bottlenecks, there is a need for signal processing solutions that can, ideally, avoid DACs/ADCs while minimizing both the power consumption and the processing latency. One potential approach in this direction is to design systems that do not require DACs/ADCs and perform all the corresponding processing directly in the analogue domain. Despite existing attempts to develop individual components of the transceiver chain in the analogue domain, as we discuss in detail in this work, the feasibility of complete analogue processing in ultra-fast wireless systems is still an open research topic. In addition, existing analogue-based approaches achieve lower spectrum utilization than digital approaches, partly due to their inability to exploit recent advances in digital systems such as “soft” detection/decoding. In this context, we also discuss the challenges related to performing “soft” detection/decoding directly in the analogue domain, as recently proposed by the DigiLogue processing concept, and we show with a simple example that analogue-based “soft” detection/decoding is feasible and can achieve the same error performance as digital approaches with more than 37× power savings. In addition, we discuss several challenges related to the design of ultra-fast, fully analogue wireless receivers that can perform “soft” processing directly in the analogue domain, and we suggest research directions to overcome these challenges.
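To ground the notion of “soft” detection referenced above, the snippet below computes textbook bit log-likelihood ratios (LLRs) for BPSK over an AWGN channel, LLR = 2y/σ², and derives hard decisions from their signs; it illustrates what soft information is, not the DigiLogue analogue implementation itself.

```python
import numpy as np

def bpsk_soft_llr(y, noise_var):
    """Soft detection for BPSK over AWGN: LLR(b) = 2*y / sigma^2.

    The sign of the LLR gives the hard decision; its magnitude tells a
    downstream decoder how reliable each bit is, the extra information
    that "soft" decoding exploits for better error performance.
    """
    return 2.0 * y / noise_var

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, size=8)
symbols = 1.0 - 2.0 * bits                     # bit 0 -> +1, bit 1 -> -1
noise_var = 0.5
y = symbols + rng.normal(scale=np.sqrt(noise_var), size=bits.size)

llr = bpsk_soft_llr(y, noise_var)
hard = (llr < 0).astype(int)                   # negative LLR -> bit 1
print("tx bits  :", bits)
print("hard dec :", hard)
print("LLRs     :", np.round(llr, 2))
```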
Citations: 0
A Comprehensive Survey on Optical Scattering Communications: Current Research, New Trends, and Future Vision
IF 35.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-12-05 | DOI: 10.1109/COMST.2023.3339371
Sudhanshu Arya;Yeon Ho Chung
To meet the high data rate requirements of future wireless communication systems, there is a need for advanced communication technologies that can be used in combination with existing wireless RF technologies. Recently, optical wireless communication (OWC) has been extensively investigated as an attractive alternative technology to RF. OWC uses an optical carrier to convey data, with wavelengths ranging from ultraviolet (UV) through visible light to infrared (IR). In the past few years, there has been a spike in interest in optical scattering communications (OSCs) employing UV wavelengths, thanks to recent advances and rapid developments in deep-UV light-emitting diodes (LEDs), laser diodes, and solar-blind UV filters and detectors. The unique atmospheric scattering and absorption properties of the deep-UV band, which is solar-blind at ground level, are the motivation for the recent development of OSC systems. However, there is a clear gap in the existing literature: OSC systems have yet to be systematically surveyed for their applicability to future wireless communications. In this context, this paper bridges the gap by providing the first contemporary and comprehensive survey of recent advancements in OSCs, commonly known as UV communications. In summary, this survey is expected to provide a largely missing articulation between the various aspects of UV communications. To make the material easy to follow, we commence our discourse by surveying the propagation concepts and historic evolution of UV communication systems. Next, we provide a detailed survey of UV channel modeling, because accurate channel characterization is important for efficient system design and performance optimization of UV communication systems. We discuss the various UV channel characterization efforts made thus far. Then, we present a classification to analyze current OSC system designs. Importantly, we survey recent advancements in NLOS UV communication systems, including the application of artificial intelligence, artificial neural networks, game theory, orbital angular momentum, etc. Moreover, we conduct a comprehensive survey of recently documented UV-based indoor communication systems. Finally, we point out several key issues yet to be addressed and collate potentially interesting and challenging topics for future research. The survey is distinguished by in-depth discussion and analysis of UV communication systems in various aspects, many of which, to the best of the authors' knowledge, are presented for the first time in this field.
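For a taste of the channel modeling the survey covers, the snippet below evaluates the simple empirical NLOS UV path-loss form L(r) = ξ·r^α that appears widely in the UV literature, where the path-loss factor ξ and exponent α absorb the geometry (Tx/Rx elevation angles) and atmospheric scattering/absorption. The parameter values here are placeholders, not fitted measurements.

```python
import numpy as np

def uv_nlos_path_loss_db(r_m: np.ndarray, xi: float, alpha: float) -> np.ndarray:
    """Empirical NLOS UV path-loss model L(r) = xi * r**alpha (linear scale).

    xi (path-loss factor) and alpha (path-loss exponent) are constants
    fitted to the Tx/Rx elevation angles and atmospheric conditions;
    the values used below are illustrative only.
    """
    return 10.0 * np.log10(xi * r_m ** alpha)

ranges = np.array([50.0, 100.0, 200.0, 400.0])   # link ranges in meters
print(uv_nlos_path_loss_db(ranges, xi=1e9, alpha=1.6))
```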
Citations: 0
AI-Enhanced Cloud-Edge-Terminal Collaborative Network: Survey, Applications, and Future Directions
IF 35.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-12-01 | DOI: 10.1109/COMST.2023.3338153
Huixian Gu;Liqiang Zhao;Zhu Han;Gan Zheng;Shenghui Song
The cloud-edge-terminal collaborative network (CETCN) is considered a novel paradigm for emerging applications owing to its huge potential to provide low-latency and ultra-reliable computing services. However, achieving such benefits is very challenging due to the heterogeneous computing power of terminal devices and the complex environment faced by the CETCN. In particular, the high-dimensional and dynamic environment states make it difficult for the CETCN to make efficient decisions in terms of task offloading, collaborative caching, and mobility management. To this end, artificial intelligence (AI), especially deep reinforcement learning (DRL), has been proven effective in solving sequential decision-making problems in various domains, and it offers a promising solution to the above-mentioned issues for several reasons. Firstly, DRL-based methods do not require an accurate model of the CETCN, which is difficult to obtain for real-world applications. Secondly, DRL can effectively respond to high-dimensional and dynamic tasks through iterative interactions with the environment. Thirdly, due to the complexity of tasks and the differences in resource supply among vendors, collaboration between different vendors is required to complete tasks. Multi-agent DRL (MADRL) methods are very effective in solving such collaborative tasks, which can be jointly completed by cloud, edge, and terminal devices provided by different vendors. This survey provides a comprehensive overview of the applications of DRL and MADRL in the context of the CETCN. The first part of the survey provides an in-depth overview of the key concepts of the CETCN and the mathematical underpinnings of both DRL and MADRL. Then, we highlight the applications of RL algorithms in solving various challenges within the CETCN, such as task offloading, resource allocation, caching, and mobility management. In addition, we extend the discussion to explore how DRL and MADRL are making inroads into emerging CETCN scenarios such as intelligent transportation systems (ITS), the industrial Internet of Things (IIoT), smart health, and digital agriculture. Furthermore, security considerations related to the application of DRL within the CETCN are addressed, along with an overview of existing standards that pertain to edge intelligence. Finally, we list several lessons learned in this evolving field and outline future research opportunities and challenges that are critical for the development of the CETCN. We hope this survey will attract more researchers to investigate scalable and decentralized AI algorithms for the design of the CETCN.
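To make the RL framing tangible, the toy below uses tabular Q-learning (the simplest RL baseline, rather than the deep MADRL methods the survey reviews) to learn a one-step offloading policy: given a coarse edge-load state, run the task on the terminal, the edge, or the cloud. The latency model and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
ACTIONS = ["terminal", "edge", "cloud"]
N_STATES = 3                  # coarse edge-load level: low / medium / high
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, eps = 0.1, 0.1         # one-step episodes, so no discounting is needed

def latency_ms(state, action):
    """Invented latency model: the edge is fastest unless heavily loaded."""
    table = {0: [9.0, 3.0, 6.0], 1: [9.0, 5.0, 6.0], 2: [9.0, 12.0, 6.0]}
    return table[state][action] + rng.normal(scale=0.5)

for _ in range(5000):
    s = int(rng.integers(N_STATES))
    a = int(rng.integers(len(ACTIONS))) if rng.random() < eps \
        else int(np.argmax(Q[s]))
    r = -latency_ms(s, a)                 # reward = negative latency
    Q[s, a] += alpha * (r - Q[s, a])      # one-step Q-learning update

for s in range(N_STATES):
    print(f"edge load {s}: best action -> {ACTIONS[int(np.argmax(Q[s]))]}")
```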
Citations: 0
AI-Empowered Fog/Edge Resource Management for IoT Applications: A Comprehensive Review, Research Challenges, and Future Perspectives
IF 35.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-11-30 | DOI: 10.1109/COMST.2023.3338015
Guneet Kaur Walia;Mohit Kumar;Sukhpal Singh Gill
The proliferation of ubiquitous Internet of Things (IoT) sensors and smart devices in several domains, including healthcare, Industry 4.0, transportation, and agriculture, is giving rise to a prodigious amount of data requiring ever-increasing computation and services from the cloud to the edge of the network. Fog/Edge computing is a promising distributed computing paradigm that has drawn extensive attention from both industry and academia. The infrastructural efficiency of these computing paradigms necessitates adaptive resource management mechanisms for offloading decisions and efficient scheduling. Resource Management (RM) is a non-trivial issue whose complexity results from handling heterogeneous resources, incoming transactional workloads, edge node discovery, and Quality of Service (QoS) parameters simultaneously, which makes efficient use of resources even more challenging. Hence, researchers have adopted Artificial Intelligence (AI)-based techniques to resolve the above-mentioned issues. This paper offers a comprehensive review of resource management issues and challenges in the Fog/Edge paradigm by categorizing them into provisioning of computing resources, task offloading, resource scheduling, service placement, and load balancing. In addition, existing AI- and non-AI-based state-of-the-art solutions are discussed, along with their QoS metrics, the datasets analysed, and their limitations and challenges. The survey provides a mathematical formulation corresponding to each categorized resource management issue. Our work sheds light on promising research directions involving cutting-edge technologies such as serverless computing, 5G, the Industrial IoT (IIoT), blockchain, digital twins, quantum computing, and Software-Defined Networking (SDN), which can be integrated with existing fog/edge-of-things frameworks to improve business intelligence and analytics in IoT-based applications.
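In its simplest delay-only form, the task-offloading decision this survey catalogues reduces to comparing local execution time C/f_local against upload-plus-remote time S/B + C/f_edge; the sketch below encodes exactly that comparison with made-up task and link parameters (energy terms omitted for brevity).

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # CPU cycles the task requires
    size_bits: float   # input data to upload if offloaded

def offload_decision(task: Task, f_local: float, f_edge: float,
                     uplink_bps: float) -> str:
    """Classic delay-only offloading test.

    local delay   = cycles / f_local
    offload delay = size / uplink + cycles / f_edge
    All parameter values in the demo below are illustrative.
    """
    t_local = task.cycles / f_local
    t_offload = task.size_bits / uplink_bps + task.cycles / f_edge
    return "offload" if t_offload < t_local else "local"

heavy = Task(cycles=5e9, size_bits=2e6)    # compute-heavy, small input
bulky = Task(cycles=2e8, size_bits=4e8)    # light compute, huge input
for t in (heavy, bulky):
    print(offload_decision(t, f_local=1e9, f_edge=2e10, uplink_bps=5e7))
```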
Citations: 0
A Tutorial-Cum-Survey on Percolation Theory With Applications in Large-Scale Wireless Networks
IF 35.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-11-28 | DOI: 10.1109/COMST.2023.3336194
Hesham ElSawy;Ainur Zhaikhan;Mustafa A. Kishk;Mohamed-Slim Alouini
Connectivity is an important key performance indicator and a focal point of research in large-scale wireless networks. Due to the path-loss attenuation of electromagnetic waves, direct wireless connectivity is limited to proximate devices. Nevertheless, connectivity among distant devices can still be attained through a sequence of consecutive multi-hop communication links, which enables routing and disseminating legitimate information across wireless ad hoc networks. Multi-hop connectivity is also foundational for data aggregation in the Internet of Things (IoT) and cyber-physical systems (CPS). On the downside, multi-hop wireless transmissions increase susceptibility to eavesdropping and enable malicious network attacks. Hence, security-aware network connectivity is required to maintain communication privacy, detect and isolate malicious devices, and thwart the spreading of illegitimate traffic (e.g., viruses, worms, falsified data, illegitimate control, etc.). In 5G and beyond networks, an intricate balance between connectivity, privacy, and security is a necessity due to the proliferating IoT and CPS, which feature massive numbers of wireless devices that can communicate directly with one another (e.g., device-to-device, machine-to-machine, and vehicle-to-vehicle communication). In this regard, graph theory represents a foundational mathematical tool to model the network's physical topology. In particular, random geometric graphs (RGGs) capture the inherently random locations of, and wireless interconnections among, spatially distributed devices. Percolation theory is then utilized to characterize and control distant multi-hop connectivity on network graphs. Recently, percolation theory over RGGs has been widely utilized to study the connectivity, privacy, and security of several types of wireless networks. The impact and utilization of percolation theory are expected to increase further in the IoT/CPS era, which motivates this tutorial. Towards this end, we first introduce the preliminaries of graph and percolation theories in the context of wireless networks. Next, we overview and explain their application to various types of wireless networks.
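The core computation behind these percolation analyses is easy to simulate: scatter nodes at random, connect pairs within a communication radius (a random geometric graph), and watch the fraction of nodes in the largest connected component jump from near zero to near one as density grows. A minimal sketch using networkx follows; the node counts and radius are arbitrary.

```python
import networkx as nx

def largest_component_fraction(n_nodes: int, radius: float, seed: int = 0) -> float:
    """Fraction of nodes in the giant component of a random geometric graph.

    Nodes are placed uniformly in the unit square and linked when within
    `radius` of each other; percolation theory studies when this fraction
    jumps from near zero to order one as density (here n_nodes) grows.
    """
    g = nx.random_geometric_graph(n_nodes, radius, seed=seed)
    giant = max(nx.connected_components(g), key=len)
    return len(giant) / n_nodes

# Increasing spatial density at a fixed communication radius: the giant
# component fraction rises sharply once the percolation threshold is crossed.
for n in (50, 150, 400, 1000):
    print(n, round(largest_component_fraction(n, radius=0.08), 3))
```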
Citations: 0