
Latest Publications in IEEE Transactions on Machine Learning in Communications and Networking

Attention-Aided Outdoor Localization in Commercial 5G NR Systems
Pub Date : 2024-11-01 DOI: 10.1109/TMLCN.2024.3490496
Guoda Tian;Dino Pjanić;Xuesong Cai;Bo Bernhardsson;Fredrik Tufvesson
The integration of high-precision cellular localization and machine learning (ML) is considered a cornerstone technique in future cellular navigation systems, offering unparalleled accuracy and functionality. This study focuses on localization based on uplink channel measurements in a fifth-generation (5G) new radio (NR) system. An attention-aided ML-based single-snapshot localization pipeline is presented, which consists of several cascaded blocks, namely a signal processing block, an attention-aided block, and an uncertainty estimation block. Specifically, the signal processing block generates an impulse response beam matrix for all beams. The attention-aided block trains on the channel impulse responses using an attention-aided network, which captures the correlation between impulse responses for different beams. The uncertainty estimation block predicts the probability density function of the user equipment (UE) position, thereby also indicating the confidence level of the localization result. Two representative uncertainty estimation techniques, negative log-likelihood and regression-by-classification, are applied and compared. Furthermore, for dynamic measurements with multiple snapshots available, we combine the proposed pipeline with a Kalman filter to enhance localization accuracy. To evaluate our approach, we extract channel impulse responses for different beams from a commercial base station. The outdoor measurement campaign covers line-of-sight (LoS), non-line-of-sight (NLoS), and mixed LoS/NLoS scenarios. The results show that sub-meter localization accuracy can be achieved.
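As a rough illustration of the uncertainty-aware regression described above, the sketch below pairs an attention layer over per-beam channel impulse responses with a Gaussian negative log-likelihood head. All tensor shapes, layer sizes, and the softplus variance parametrization are our assumptions for illustration, not the authors' architecture.

```python
# Illustrative sketch (not the authors' code): attention over per-beam CIRs
# plus a Gaussian NLL head predicting a UE position and its uncertainty.
import torch
import torch.nn as nn

class AttentionLocalizer(nn.Module):
    def __init__(self, n_taps=64, d_model=128, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(n_taps, d_model)          # per-beam CIR -> token
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mean_head = nn.Linear(d_model, 2)           # (x, y) position
        self.var_head = nn.Linear(d_model, 2)            # per-coordinate variance

    def forward(self, cir):                              # cir: (batch, beams, taps)
        tokens = self.embed(cir)
        ctx, _ = self.attn(tokens, tokens, tokens)       # correlation across beams
        pooled = ctx.mean(dim=1)
        mean = self.mean_head(pooled)
        var = torch.nn.functional.softplus(self.var_head(pooled)) + 1e-6
        return mean, var

model = AttentionLocalizer()
nll = nn.GaussianNLLLoss()                               # negative log-likelihood option
cir = torch.randn(8, 32, 64)                             # 8 snapshots, 32 beams (assumed)
target = torch.randn(8, 2)
mean, var = model(cir)
loss = nll(mean, target, var)
loss.backward()
```

The regression-by-classification alternative mentioned in the abstract would instead discretize the service area into a grid and predict a softmax distribution over cells, from which a density and confidence can likewise be read off.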
Citations: 0
Information Bottleneck-Based Domain Adaptation for Hybrid Deep Learning in Scalable Network Slicing
Pub Date : 2024-10-24 DOI: 10.1109/TMLCN.2024.3485520
Tianlun Hu;Qi Liao;Qiang Liu;Georg Carle
Network slicing enables operators to efficiently support diverse applications on a shared infrastructure. However, the evolving complexity of networks, compounded by inter-cell interference, necessitates agile and adaptable resource management. While deep learning offers solutions for coping with complexity, its adaptability to dynamic configurations remains limited. In this paper, we propose a novel hybrid deep learning algorithm called IDLA (integrated deep learning with the Lagrangian method). This integrated approach aims to enhance the scalability, flexibility, and robustness of slicing resource allocation solutions by harnessing the high approximation capability of deep learning and the strong generalization of classical non-linear optimization methods. We then introduce a variational information bottleneck (VIB)-assisted domain adaptation (DA) approach to enhance IDLA's adaptability across diverse network environments and conditions. We propose pre-training a VIB-based Quality of Service (QoS) estimator using slice-specific inputs shared across all source-domain slices. Each target-domain slice can deploy this estimator to predict its QoS and optimize slice resource allocation using the IDLA algorithm. The VIB-based estimator is continuously fine-tuned with a mixture of samples from both the source and target domains until convergence. Evaluated on a multi-cell network with time-varying slice configurations, the VIB-enhanced IDLA algorithm outperforms baselines such as heuristic and deep reinforcement learning-based solutions, achieving twice the convergence speed and 16.52% higher asymptotic performance after slicing configuration changes. A transferability assessment demonstrates a 25.66% improvement in estimation accuracy with VIB, especially in scenarios with significant domain gaps, highlighting its robustness and effectiveness across diverse domains.
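For readers unfamiliar with the VIB ingredient, here is a minimal sketch of a VIB regressor of the kind the abstract describes for QoS estimation: an encoder produces a stochastic bottleneck code, and a KL term compresses it toward a standard Gaussian prior. The layer sizes, input dimension, and the beta weight are illustrative assumptions.

```python
# Hedged sketch of a variational information bottleneck (VIB) QoS regressor.
import torch
import torch.nn as nn

class VIBQoSEstimator(nn.Module):
    def __init__(self, in_dim=16, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, z_dim)
        self.logvar = nn.Linear(64, z_dim)
        self.dec = nn.Linear(z_dim, 1)                   # predicted QoS (e.g., throughput)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def vib_loss(pred, target, mu, logvar, beta=1e-3):
    # fit term + beta-weighted KL(q(z|x) || N(0, I)) bottleneck term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    return nn.functional.mse_loss(pred.squeeze(-1), target) + beta * kl
```

The bottleneck is what supports the transfer step: because z is forced to keep only task-relevant information, fine-tuning on mixed source/target samples has less domain-specific detail to unlearn.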
Citations: 0
Polarization-Aware Channel State Prediction Using Phasor Quaternion Neural Networks
Pub Date : 2024-10-23 DOI: 10.1109/TMLCN.2024.3485521
Anzhe Ye;Haotian Chen;Ryo Natsuaki;Akira Hirose
The performance of a wireless communication system depends to a large extent on the wireless channel. Due to the multipath fading environment during radio wave propagation, channel prediction plays a vital role in enabling adaptive transmission for wireless communication systems. Predicting various channel characteristics with neural networks can help address more complex communication environments. However, achieving this goal typically requires the simultaneous use of multiple distinct neural models, which is undoubtedly unaffordable for mobile communications. Therefore, it is necessary to enable a simpler structure that simultaneously predicts multiple channel characteristics. In this paper, we propose a fading channel prediction method using phasor quaternion neural networks (PQNNs) to predict polarization states, incorporating phase information to enhance the channel compensation ability. We evaluate the performance of the proposed PQNN method in two different fading situations in an actual environment and find that the proposed scheme provides 2.8 dB and 4.0 dB improvements at a bit error rate (BER) of $10^{-4}$, showing better BER performance in light and severe fading situations, respectively. This work also reveals that by treating polarization information and phase information as a single entity, the model can leverage their physical correlation to achieve improved performance.
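The quaternion machinery behind PQNNs can be stated concretely. Below is a minimal numpy sketch of the textbook Hamilton product, the operation that lets a quaternion-valued weight act on a four-component signal (e.g., a polarization/phase state) as one coupled multiplication; this is standard quaternion algebra, not the authors' PQNN implementation, and the example values are arbitrary.

```python
# Minimal sketch of the quaternion algebra underlying quaternion neural networks.
import numpy as np

def hamilton(p, q):
    """Hamilton product of quaternions p = (w, x, y, z) and q = (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# A quaternion "weight" transforms a quaternion-valued input in one multiplication,
# preserving the coupling between the four components (unlike four independent
# real-valued weights), which is how such networks keep polarization and phase
# information tied together.
w = np.array([0.7, 0.1, -0.2, 0.3])   # hypothetical learned weight
s = np.array([0.0, 1.0, 0.5, -0.5])   # hypothetical polarization/phase state
print(hamilton(w, s))
```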
Citations: 0
TWIRLD: Transformer Generated Terahertz Waveform for Improved Radio Link Distance
Pub Date : 2024-10-17 DOI: 10.1109/TMLCN.2024.3483111
Shuvam Chakraborty;Claire Parisi;Dola Saha;Ngwe Thawdar
Terahertz (THz) band communication is envisioned as one of the leading technologies to meet the exponentially growing data rate requirements of emerging and future wireless communication networks. Utilizing the contiguous bandwidth available at THz frequencies requires a transceiver design tailored to tackle issues existing at these frequencies, such as strong propagation and absorption loss, small-scale fading (e.g., scattering, reflection, refraction), hardware non-linearity, etc. In prior works, multicarrier waveforms (e.g., Orthogonal Frequency Division Multiplexing (OFDM)) have been shown to be efficient in tackling some of these issues. However, OFDM introduces a drawback in the form of a high peak-to-average power ratio (PAPR) which, compounded with strong propagation and absorption loss and high noise power due to the large bandwidth at THz and sub-THz frequencies, severely limits link distances and, in turn, capacity, preventing efficient bandwidth usage. In this work, we propose TWIRLD, a deep learning (DL)-based joint optimization method, modeled and implemented as components of an end-to-end transceiver chain. TWIRLD performs a symbol remapping at the baseband of OFDM signals, which increases average transmit power while also optimizing the bit error rate (BER). We provide theoretical analysis, statistical equivalence of TWIRLD to the ideal receiver, and comprehensive complexity and footprint estimates. We validate TWIRLD in simulation, showing link distance improvement of up to 91%, and compare the results with legacy and state-of-the-art methods and their enhanced versions. Finally, TWIRLD is validated with over-the-air (OTA) communication using a state-of-the-art testbed at 140 GHz with up to 5 GHz of bandwidth, where we observe an improvement of up to 79% in link distance, accounting for practical channel and other transmission losses.
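To make the PAPR drawback concrete, the following self-contained example computes the PAPR of a random QPSK-modulated OFDM symbol; the subcarrier count and constellation mapping are arbitrary choices for illustration, not parameters from the paper.

```python
# Worked example (ours, not from the paper): PAPR of one OFDM symbol,
# the quantity that TWIRLD's symbol remapping is designed to tame.
import numpy as np

rng = np.random.default_rng(0)
n_sc = 1024                                        # subcarriers (assumed)
bits = rng.integers(0, 2, size=(2, n_sc))
qpsk = ((2*bits[0] - 1) + 1j*(2*bits[1] - 1)) / np.sqrt(2)

x = np.fft.ifft(qpsk) * np.sqrt(n_sc)              # time-domain OFDM symbol
papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))
print(f"PAPR = {papr_db:.2f} dB")                  # typically around 10-12 dB here
```

A symbol that occasionally peaks ~10 dB above its average forces the power amplifier to back off by that margin, which is exactly the transmit-power headroom the remapping tries to recover.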
Citations: 0
Recursive GNNs for Learning Precoding Policies With Size-Generalizability
Pub Date : 2024-10-14 DOI: 10.1109/TMLCN.2024.3480044
Jia Guo;Chenyang Yang
Graph neural networks (GNNs) have shown promise in optimizing power allocation and link scheduling with good size generalizability and low training complexity. These merits are important for learning wireless policies under dynamic environments, and they partially come from the permutation equivariance (PE) properties of the GNNs being matched to the policies to be learned. Nonetheless, it has been noticed in the literature that merely satisfying the PE property of a precoding policy in multi-antenna systems cannot ensure that a GNN for learning precoding generalizes to unseen problem scales. Incorporating models with GNNs helps improve size generalizability, but this is only applicable to specific problems, settings, and algorithms. In this paper, we propose a framework of size-generalizable GNNs for learning precoding policies that are purely data-driven and can learn wireless policies including but not limited to baseband and hybrid precoding in multi-user multi-antenna systems. To this end, we first find a special structure in each iteration of several numerical algorithms for optimizing precoding, from which we identify the key characteristics of a GNN that affect its size generalizability. Then, we design size-generalizable GNNs that have these key characteristics and satisfy the PE properties of precoding policies in a recursive manner. Simulation results show that the proposed GNNs generalize well to the number of users when learning baseband and hybrid precoding policies, require far fewer samples than existing GNNs, and need shorter inference time than numerical algorithms to achieve the same performance.
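To see what permutation equivariance over users means in code, here is a minimal sketch of a "deep sets"-style layer and a numerical check of the PE property. It illustrates only the property itself, the necessary-but-not-sufficient condition the abstract discusses, not the recursive architecture proposed in the paper; the dimensions are arbitrary.

```python
# Sketch: a one-layer permutation-equivariant update over users.
import torch

torch.manual_seed(0)
K, d = 4, 8                                   # users, feature dim (assumed)
W1, W2 = torch.randn(d, d), torch.randn(d, d)

def pe_layer(X):                              # X: (K, d), one row per user
    # self term + permutation-invariant aggregated term
    return X @ W1 + X.mean(dim=0, keepdim=True) @ W2

X = torch.randn(K, d)
perm = torch.randperm(K)
# Permuting users then applying the layer == applying the layer then permuting:
assert torch.allclose(pe_layer(X[perm]), pe_layer(X)[perm], atol=1e-5)
```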
Citations: 0
NeIL: Intelligent Replica Selection for Distributed Applications
Pub Date : 2024-10-11 DOI: 10.1109/TMLCN.2024.3479109
Faraz Ahmed;Lianjie Cao;Ayush Goel;Puneet Sharma
Distributed applications such as cloud gaming, streaming, etc., are increasingly using edge-to-cloud infrastructure for high availability and performance. While edge infrastructure brings services closer to the end-user, the number of sites on which the services need to be replicated has also increased. This makes replica selection challenging for clients of the replicated services. Traditional replica selection methods, including anycast-based methods and DNS redirections, are performance-agnostic, and clients experience degraded network performance when network performance dynamics are not considered in replica selection. In this work, we present a client-side replica selection framework, NeIL, that enables network-performance-aware replica selection. We propose to use bandits-with-experts-based Multi-Armed Bandit (MAB) algorithms and adapt them for replica selection at individual clients without centralized coordination. We evaluate our approach using three different setups: a distributed Mininet setup where we use publicly available network performance data from the Measurement Lab (M-Lab) to emulate network conditions, a setup where we deploy replica servers on AWS, and finally a global enterprise deployment. Our experimental results show that, in comparison to greedy selection, NeIL performs better than greedy 45% of the time and better than or equal to greedy 80% of the time, resulting in a net gain in end-to-end network performance. On AWS, we see similar results, where NeIL performs better than or equal to greedy 75% of the time. We have successfully deployed NeIL in a global enterprise remote device management service with over 4000 client devices, and our analysis shows that NeIL achieves significantly better tail service quality by cutting the 99th-percentile tail latency from 5.6 seconds to 1.7 seconds.
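As a concrete reference point for the bandits-with-experts family that NeIL builds on, here is a sketch of the classic EXP4 algorithm; the experts, the inverse-latency reward model, and all parameters below are placeholders, and the paper's actual algorithm and tuning may differ.

```python
# Sketch of EXP4 ("bandits with experts") for picking among K replicas.
import numpy as np

rng = np.random.default_rng(1)
K, N, T, gamma = 3, 2, 1000, 0.1             # replicas, experts, rounds, exploration

w = np.ones(N)                               # expert weights
for t in range(T):
    # Each expert emits a probability distribution over replicas (stand-in advice).
    advice = np.stack([rng.dirichlet(np.ones(K)) for _ in range(N)])   # (N, K)
    p = (1 - gamma) * (w @ advice) / w.sum() + gamma / K
    a = rng.choice(K, p=p)                   # pick a replica
    latency = rng.exponential([0.20, 0.05, 0.10][a])   # hypothetical RTT feedback
    reward = 1.0 / (1.0 + latency)           # reward in (0, 1]
    xhat = np.zeros(K)
    xhat[a] = reward / p[a]                  # importance-weighted reward estimate
    w *= np.exp(gamma * (advice @ xhat) / K) # upweight experts that advised well
```

Because each client runs its own instance on its own observed rewards, no centralized coordination is needed, matching the per-client design described above.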
Citations: 0
An Intelligent and Programmable Data Plane for QoS-Aware Packet Processing
Pub Date : 2024-10-07 DOI: 10.1109/TMLCN.2024.3475968
Muhammad Saqib;Halime Elbiaze;Roch H. Glitho;Yacine Ghamri-Doudane
One of the main features of data plane programmability is that it allows the easy deployment of a programmable network traffic management framework. One can build an early-stage Internet traffic classifier to facilitate effective Quality of Service (QoS) provisioning. However, maintaining accuracy and efficiency (i.e., processing delay/pipeline latency) in early-stage traffic classification is challenging due to memory and operational constraints in the network data plane. Additionally, deploying network-wide flow-specific rules for QoS leads to significant memory usage and overheads. To address these challenges, we propose new architectural components that embed efficient processing logic into the programmable traffic management framework. In particular, we propose a single-feature traffic classification algorithm and a stateless QoS-aware packet scheduling mechanism. Our approach first focuses on maintaining accuracy and processing efficiency in early-stage traffic classification by leveraging a single input feature: sequential packet size information. We then use the classifier to embed the Service Level Objective (SLO) into the header of the packets. Carrying SLOs inside the packet allows QoS-aware packet processing through admission-control-enabled priority queuing. The results show that most flows are properly classified with the first four packets. Furthermore, using the SLO-enabled admission control mechanism on top of the priority queues enables stateless QoS provisioning. Our approach outperforms classical and objective-based priority queuing in managing heterogeneous traffic demands by improving network resource utilization.
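The following sketch illustrates the control-plane flavor of such early-stage classification: a shallow decision tree trained on the sizes of a flow's first four packets. The synthetic data, class labels, and model choice are invented for illustration; an in-switch deployment would compile such a model into match-action tables (e.g., in P4) rather than run Python.

```python
# Hedged sketch: classify a flow from its first four packet sizes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
# Synthetic flows: rows are [size_pkt1..size_pkt4]; labels 0=video, 1=web, 2=voip
video = rng.normal([1400, 1400, 1400, 1400], 60, size=(100, 4))
web   = rng.normal([ 500, 1200,  300,  900], 80, size=(100, 4))
voip  = rng.normal([ 160,  160,  160,  160], 20, size=(100, 4))
X = np.vstack([video, web, voip])
y = np.repeat([0, 1, 2], 100)

clf = DecisionTreeClassifier(max_depth=4).fit(X, y)   # shallow: cheap to deploy
print(clf.predict([[1350, 1420, 1390, 1405]]))        # -> [0] (video-like flow)
```

A depth-4 tree over a single feature type keeps the per-packet decision to a handful of comparisons, which is what makes the early (four-packet) classification compatible with data-plane constraints.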
Citations: 0
Learning Radio Environments by Differentiable Ray Tracing
Pub Date : 2024-10-04 DOI: 10.1109/TMLCN.2024.3474639
Jakob Hoydis;Fayçal Aït Aoudia;Sebastian Cammerer;Florian Euchner;Merlin Nimier-David;Stephan Ten Brink;Alexander Keller
Ray tracing (RT) is instrumental in 6G research for generating spatially-consistent and environment-specific channel impulse responses (CIRs). While acquiring accurate scene geometries is now relatively straightforward, determining material characteristics requires precise calibration using channel measurements. We therefore introduce a novel gradient-based calibration method, complemented by differentiable parametrizations of material properties, scattering, and antenna patterns. Our method seamlessly integrates with differentiable ray tracers that enable the computation of derivatives of CIRs with respect to these parameters. Essentially, we approach field computation as a large computational graph wherein parameters are trainable akin to the weights of a neural network (NN). We have validated our method using both synthetic data and real-world indoor channel measurements, employing a distributed multiple-input multiple-output (MIMO) channel sounder.
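The calibration idea is easy to demonstrate at toy scale: below, a single trainable reflection coefficient in a hand-written two-tap channel model is fit to a "measured" CIR by automatic differentiation. The tap delays, the target value, and the model itself are made up for illustration; an actual differentiable ray tracer differentiates through the full scene geometry and material models.

```python
# Toy gradient-based calibration in the spirit described above.
import torch

def cir(refl, n_taps=64):
    h = torch.zeros(n_taps)
    h[10] = 1.0        # LoS tap (fixed gain, arbitrary delay)
    h[25] = refl       # reflected tap; gain depends on the material parameter
    return h

measured = cir(torch.tensor(0.6))             # pretend 0.6 is the true reflectivity
refl = torch.tensor(0.1, requires_grad=True)  # trainable material parameter
opt = torch.optim.Adam([refl], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = torch.mean((cir(refl) - measured) ** 2)
    loss.backward()                           # gradient of CIR mismatch w.r.t. material
    opt.step()
print(float(refl))                            # converges to ~0.6
```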
Citations: 0
Smart Jamming Attack and Mitigation on Deep Transfer Reinforcement Learning Enabled Resource Allocation for Network Slicing
Pub Date : 2024-09-30 DOI: 10.1109/TMLCN.2024.3470760
Shavbo Salehi;Hao Zhou;Medhat Elsayed;Majid Bavand;Raimundas Gaigalas;Yigit Ozcan;Melike Erol-Kantarci
Network slicing is a pivotal paradigm in wireless networks, enabling customized services for users and applications. Yet, intelligent jamming attacks threaten the performance of network slicing. In this paper, we focus on the security aspect of network slicing in a deep transfer reinforcement learning (DTRL)-enabled scenario. We first demonstrate how a deep reinforcement learning (DRL)-enabled jamming attack exposes potential risks. In particular, the attacker can intelligently jam resource blocks (RBs) reserved for slices by monitoring transmission signals and perturbing the assigned resources. We then propose a DRL-driven mitigation model to counter the intelligent attacker. Specifically, the defense mechanism generates interference on unallocated RBs, where another antenna is used to transmit powerful signals. This causes the jammer to treat these RBs as allocated and to direct its interference at them instead of the truly allocated RBs. The analysis revealed that the intelligent DRL-enabled jamming attack caused a significant 50% degradation in network throughput and a 60% increase in latency in comparison with the no-attack scenario. However, with the implemented mitigation measures, we observed an 80% improvement in network throughput and a 70% reduction in latency in comparison to the under-attack scenario.
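A toy simulation makes the decoy intuition concrete: a power-sensing jammer attacks the strongest-looking RBs, so radiating decoy power on idle RBs draws the attack away from real traffic. The power levels, RB counts, and the hand-coded jammer rule below are our simplifications; in the paper, both attacker and defender are DRL agents rather than fixed heuristics.

```python
# Toy numpy illustration of the decoy-based mitigation idea.
import numpy as np

rng = np.random.default_rng(3)
n_rb = 12
allocated = np.zeros(n_rb, dtype=bool)
allocated[[1, 4, 7]] = True                        # RBs carrying real slice traffic

tx_power = np.where(allocated, 1.0, 0.0)
decoy_power = np.where(~allocated, 1.5, 0.0)       # defense: strong signals on idle RBs

def jammer_picks(observed, k=3):
    return np.argsort(observed)[-k:]               # jam the k strongest-looking RBs

noise = 0.05 * rng.random(n_rb)
hit_without = np.isin(jammer_picks(tx_power + noise), np.where(allocated)[0])
hit_with = np.isin(jammer_picks(tx_power + decoy_power + noise), np.where(allocated)[0])
print("real RBs jammed without decoys:", hit_without.sum())  # 3
print("real RBs jammed with decoys:   ", hit_with.sum())     # 0
```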
Citations: 0
Optimizing Resource Fragmentation in Virtual Network Function Placement Using Deep Reinforcement Learning
Pub Date : 2024-09-26 DOI: 10.1109/TMLCN.2024.3469131
Ramy Mohamed;Marios Avgeris;Aris Leivadeas;Ioannis Lambadaris
In the 6G wireless era, the strategic deployment of Virtual Network Functions (VNFs) within a network infrastructure, optimizing resource utilization while fulfilling performance criteria, is critical for successfully implementing the Network Function Virtualization (NFV) paradigm across the Edge-to-Cloud continuum. This is especially prominent when resource fragmentation, where available resources become isolated and underutilized, becomes an issue due to the frequent reallocations of VNFs. However, traditional optimization methods often struggle to deal with the dynamic and complex nature of the VNF placement problem when fragmentation is considered. This study proposes a novel online VNF placement approach for Edge/Cloud infrastructures that utilizes Deep Reinforcement Learning (DRL) and Reward Constrained Policy Optimization (RCPO) to address this problem. We combine DRL's adaptability with RCPO's constraint incorporation capabilities to ensure that the learned policies satisfy the performance and resource constraints while minimizing resource fragmentation. Specifically, the VNF placement problem is first formulated as an offline constrained optimization problem, and then we devise an online solver using Neural Combinatorial Optimization (NCO). Our method incorporates a metric called the Resource Fragmentation Degree (RFD) to quantify fragmentation in the network. Using this metric and RCPO, our NCO agent is trained to make intelligent placement decisions that reduce fragmentation and optimize resource utilization. An error-correction heuristic complements the robustness of the proposed framework. Through extensive testing in a simulated environment, the proposed approach is shown to outperform state-of-the-art VNF placement techniques in minimizing resource fragmentation under constraint-satisfaction guarantees.
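The paper's exact RFD formula is not reproduced in this abstract, so the sketch below shows one plausible formalization of such a fragmentation score, under the assumption that each node exposes its contiguous free-capacity blocks: fragmentation is low when free capacity sits in one large block and high when the same capacity is scattered into pieces too small to host a VNF.

```python
# Hedged sketch of a fragmentation metric in the spirit of an RFD score.
import numpy as np

def rfd(free_blocks_per_node):
    """free_blocks_per_node: list of arrays of contiguous free-capacity chunks."""
    scores = []
    for blocks in free_blocks_per_node:
        total = np.sum(blocks)
        if total == 0:
            continue                               # fully packed node: nothing to score
        scores.append(1.0 - np.max(blocks) / total)
    return float(np.mean(scores)) if scores else 0.0

# Same total free capacity, very different usability for large VNFs:
print(rfd([np.array([8.0])]))                      # 0.0   -> one big block, no fragmentation
print(rfd([np.array([1.0] * 8)]))                  # 0.875 -> badly fragmented
```

A reward term built from such a score is the kind of signal an RCPO-constrained agent can trade off against placement cost while still respecting hard resource constraints.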
Citations: 0