
Latest articles in IEEE Transactions on Machine Learning in Communications and Networking

Semi-Supervised Learning via Cross-Prediction-Powered Inference for Wireless Systems
Pub Date : 2024-11-20 DOI: 10.1109/TMLCN.2024.3503543
Houssem Sifaou;Osvaldo Simeone
In many wireless application scenarios, acquiring labeled data can be prohibitively costly, requiring complex optimization processes or measurement campaigns. Semi-supervised learning leverages unlabeled samples to augment the available dataset by assigning synthetic labels obtained via machine learning (ML)-based predictions. However, treating the synthetic labels as true labels may yield worse-performing models as compared to models trained using only labeled data. Inspired by the recently developed prediction-powered inference (PPI) framework, this work investigates how to leverage the synthetic labels produced by an ML model, while accounting for the inherent bias concerning true labels. To this end, we first review PPI and its recent extensions, namely tuned PPI and cross-prediction-powered inference (CPPI). Then, we introduce two novel variants of PPI. The first, referred to as tuned CPPI, provides CPPI with an additional degree of freedom in adapting to the quality of the ML-based labels. The second, meta-CPPI (MCPPI), extends tuned CPPI via the joint optimization of the ML labeling models and of the parameters of interest. Finally, we showcase two applications of PPI-based techniques in wireless systems, namely beam alignment based on channel knowledge maps in millimeter-wave systems and received signal strength information-based indoor localization. Simulation results show the advantages of PPI-based techniques over conventional approaches that rely solely on labeled data or that apply standard pseudo-labeling strategies from semi-supervised learning. Furthermore, the proposed tuned CPPI method is observed to guarantee the best performance among all benchmark schemes, especially in the regime of limited labeled data.
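The core PPI idea, correcting a statistic computed on ML-generated synthetic labels with a "rectifier" estimated from the small labeled set, can be sketched in a few lines. This is a generic mean-estimation toy with made-up data and a deliberately biased predictor, not the authors' wireless-specific code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground truth: labels are a noisy linear function of the feature.
def true_label(x, rng):
    return 2.0 * x + rng.normal(0.0, 0.5, size=x.shape)

# An imperfect ML labeling model (systematic +0.3 bias, on purpose).
def predictor(x):
    return 2.0 * x + 0.3

n_labeled, n_unlabeled = 50, 5000
x_l = rng.uniform(0.0, 1.0, n_labeled)
y_l = true_label(x_l, rng)
x_u = rng.uniform(0.0, 1.0, n_unlabeled)

# Naive pseudo-labeling: treat predictions as true labels -> inherits the bias.
theta_naive = predictor(x_u).mean()

# PPI: debias the synthetic-label estimate with a rectifier from labeled data.
rectifier = (y_l - predictor(x_l)).mean()
theta_ppi = predictor(x_u).mean() + rectifier
```

Tuned PPI adds a weight on the synthetic-label term to trade off bias and variance, and CPPI cross-fits the labeling model so all labeled samples contribute to both the predictor and the rectifier.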
Vol. 3, pp. 30-44. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10758826
Citations: 0
Reinforcement-Learning-Based Trajectory Design and Phase-Shift Control in UAV-Mounted-RIS Communications
Pub Date : 2024-11-19 DOI: 10.1109/TMLCN.2024.3502576
Tianjiao Sun;Sixing Yin;Li Deng;F. Richard Yu
Taking advantage of both unmanned aerial vehicles (UAVs) and reconfigurable intelligent surfaces (RISs), UAV-mounted-RIS systems are expected to enhance transmission performance in complicated wireless environments. In this paper, we focus on system design for a UAV-mounted-RIS system and investigate joint optimization of the RIS’s phase shift and the UAV’s trajectory. To cope with the practical issue of inaccessible information on the user terminals’ (UTs) location and channel state, a reinforcement learning (RL)-based solution is proposed to find the optimal policy with finite steps of “trial-and-error”. As the action space is continuous, the deep deterministic policy gradient (DDPG) algorithm is applied to train the RL model. However, the online interaction between the agent and environment may lead to instability during training, and the assumption of (first-order) Markovian state transitions could be impractical in real-world problems. Therefore, the decision transformer (DT) algorithm is employed as an alternative for RL model training to adapt to more general situations of state transition. Experimental results demonstrate that the proposed RL solutions are highly efficient in model training, with acceptable performance close to the benchmark, which relies on conventional optimization algorithms with the UTs’ locations and channel parameters explicitly known beforehand.
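One concrete ingredient of DDPG training is the soft (Polyak) update that keeps the target actor and critic slowly tracking the online networks, which stabilizes bootstrapped value targets. A minimal sketch with toy weight lists; the rate tau = 0.005 is an assumed illustrative value, not taken from the paper:

```python
import numpy as np

TAU = 0.005  # soft-update rate (illustrative assumption)

def soft_update(target_params, online_params, tau=TAU):
    """Polyak-average the online network into the target network,
    as done for both the actor and critic targets in DDPG."""
    return [tau * w + (1.0 - tau) * wt
            for w, wt in zip(online_params, target_params)]

# Toy "networks": lists of weight arrays.
online = [np.ones((2, 2)), np.zeros(2)]
target = [np.zeros((2, 2)), np.zeros(2)]

for _ in range(1000):
    target = soft_update(target, online)
# After many updates the target weights closely track the online weights.
```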
Vol. 3, pp. 163-175. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10758222
Citations: 0
A2PC: Augmented Advantage Pointer-Critic Model for Low Latency on Mobile IoT With Edge Computing
Pub Date : 2024-11-18 DOI: 10.1109/TMLCN.2024.3501217
Rodrigo Carvalho;Faroq Al-Tam;Noélia Correia
As a growing trend, edge computing infrastructures are starting to be integrated with Internet of Things (IoT) systems to facilitate time-critical applications. These systems often require the processing of data with limited usefulness in time, so the edge becomes vital in the development of such reactive IoT applications with real-time requirements. Although different architectural designs will always have advantages and disadvantages, mobile gateways appear to be particularly relevant in enabling this integration with the edge, particularly in the context of wide area networks with occasional data generation. In these scenarios, mobility planning is necessary, as aspects of the technology need to be aligned with the temporal needs of an application. The nature of this planning problem makes cutting-edge deep reinforcement learning (DRL) techniques useful in solving pertinent issues, such as having to deal with multiple dimensions in the action space while aiming for optimum levels of system performance. This article presents a novel scalable DRL model that incorporates a pointer-network (Ptr-Net) and an actor-critic algorithm to handle complex action spaces. The model synchronously determines the gateway location and visit time. Ultimately, the gateways are able to attain high-quality trajectory planning with reduced latency.
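The pointing mechanism of a Ptr-Net can be illustrated with additive (Bahdanau-style) attention over a candidate set, producing a probability of selecting each candidate, e.g. each possible gateway stop. All weights and dimensions below are made up for illustration:

```python
import numpy as np

def pointer_attention(query, candidates, W_q, W_c, v):
    """Score each candidate against the decoder state (query) and
    return a softmax distribution over which candidate to 'point' at."""
    scores = np.array([v @ np.tanh(W_q @ query + W_c @ c) for c in candidates])
    exp = np.exp(scores - scores.max())  # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(1)
d = 4
query = rng.normal(size=d)             # decoder state
candidates = rng.normal(size=(5, d))   # e.g. 5 candidate gateway locations
W_q = rng.normal(size=(d, d))
W_c = rng.normal(size=(d, d))
v = rng.normal(size=d)

probs = pointer_attention(query, candidates, W_q, W_c, v)
choice = int(np.argmax(probs))  # greedy choice of the next location
```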
Vol. 3, pp. 1-16. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10755120
Citations: 0
Optimizing Power Allocation in HAPs Assisted LEO Satellite Communications
Pub Date : 2024-11-04 DOI: 10.1109/TMLCN.2024.3491054
Zain Ali;Zouheir Rezki;Mohamed-Slim Alouini
The next generation of communication devices will require robust connectivity for millions of ground devices such as sensors or mobile devices in remote or disaster-stricken areas to be connected to the network. Non-terrestrial network (NTN) nodes can play a vital role in fulfilling these requirements. Specifically, low-earth orbiting (LEO) satellites have emerged as an efficient and cost-effective technique to connect devices over long distances through space. However, due to their low power and environmental limitations, LEO satellites may require assistance from aerial devices such as high-altitude platforms (HAPs) or unmanned aerial vehicles to forward their data to the ground devices. Moreover, the limited power available at the NTNs makes it crucial to utilize available resources efficiently. In this paper, we present a model in which a LEO satellite communicates with multiple ground devices with the help of HAPs that relay LEO data to the ground devices. We formulate the problem of optimizing power allocation at the LEO satellite and all the HAPs to maximize the sum-rate of the system. To take advantage of the benefits of free-space optical (FSO) communication in satellites, we consider the LEO transmitting data to the HAPs on FSO links, which are then broadcast to the connected ground devices on radio frequency channels. We transform the complex non-convex problem into a convex form and compute the Karush-Kuhn-Tucker (KKT) conditions-based solution of the problem for power allocation at the LEO satellite and HAPs. Then, to reduce computation time, we propose a soft actor-critic (SAC) reinforcement learning (RL) framework that provides the solution in significantly less time while delivering comparable performance to the KKT scheme. Our simulation results demonstrate that the proposed solutions provide excellent performance and are scalable to any number of HAPs and ground devices in the system.
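For a single transmitter over parallel channels, the KKT conditions of sum-rate maximization under a power budget reduce to the classic water-filling solution. The bisection sketch below is a generic illustration of that KKT structure, not the paper's multi-node LEO/HAP formulation:

```python
import numpy as np

def water_filling(gains, p_total, iters=100):
    """KKT solution of max sum_k log(1 + g_k * p_k) s.t. sum_k p_k <= p_total:
    p_k = max(0, mu - 1/g_k), with the water level mu found by bisection."""
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + 1.0 / gains.min()  # mu can never exceed this
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / gains)
        if p.sum() > p_total:
            hi = mu  # water level too high
        else:
            lo = mu  # water level too low
    return p

# Stronger channels receive more power; very weak ones may get none.
p = water_filling([2.0, 1.0, 0.25], p_total=10.0)
```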
Vol. 2, pp. 1661-1677. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10741546
Citations: 0
Attention-Aided Outdoor Localization in Commercial 5G NR Systems
Pub Date : 2024-11-01 DOI: 10.1109/TMLCN.2024.3490496
Guoda Tian;Dino Pjanić;Xuesong Cai;Bo Bernhardsson;Fredrik Tufvesson
The integration of high-precision cellular localization and machine learning (ML) is considered a cornerstone technique in future cellular navigation systems, offering unparalleled accuracy and functionality. This study focuses on localization based on uplink channel measurements in a fifth-generation (5G) new radio (NR) system. An attention-aided ML-based single-snapshot localization pipeline is presented, which consists of several cascaded blocks, namely a signal processing block, an attention-aided block, and an uncertainty estimation block. Specifically, the signal processing block generates an impulse response beam matrix for all beams. The attention-aided block trains on the channel impulse responses using an attention-aided network, which captures the correlation between impulse responses for different beams. The uncertainty estimation block predicts the probability density function of the user equipment (UE) position, thereby also indicating the confidence level of the localization result. Two representative uncertainty estimation techniques, the negative log-likelihood and the regression-by-classification techniques, are applied and compared. Furthermore, for dynamic measurements with multiple snapshots available, we combine the proposed pipeline with a Kalman filter to enhance localization accuracy. To evaluate our approach, we extract channel impulse responses for different beams from a commercial base station. The outdoor measurement campaign covers Line-of-Sight (LoS), Non Line-of-Sight (NLoS), and a mix of LoS and NLoS scenarios. The results show that sub-meter localization accuracy can be achieved.
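The multi-snapshot stage can be illustrated with a one-dimensional constant-velocity Kalman filter that smooths a stream of noisy per-snapshot position estimates; the motion and noise parameters below are assumed for illustration, not the paper's settings:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=0.01, r=1.0):
    """Constant-velocity Kalman filter over noisy 1-D position estimates.
    State x = [position, velocity]; only position is observed."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
    H = np.array([[1.0, 0.0]])             # observation model
    Q = q * np.eye(2)                      # process noise covariance
    R = np.array([[r]])                    # measurement noise covariance
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    filtered = []
    for z in measurements:
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new snapshot estimate.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        filtered.append(x[0])
    return np.array(filtered)

rng = np.random.default_rng(2)
truth = np.arange(50) * 0.5                # UE moving 0.5 m per snapshot
noisy = truth + rng.normal(0.0, 1.0, 50)   # raw single-snapshot estimates
smoothed = kalman_track(noisy)
```

Once the filter has converged, the smoothed track has a visibly lower error than the raw per-snapshot estimates.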
Vol. 2, pp. 1678-1692. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10741343
Citations: 0
Information Bottleneck-Based Domain Adaptation for Hybrid Deep Learning in Scalable Network Slicing
Pub Date : 2024-10-24 DOI: 10.1109/TMLCN.2024.3485520
Tianlun Hu;Qi Liao;Qiang Liu;Georg Carle
Network slicing enables operators to efficiently support diverse applications on a shared infrastructure. However, the evolving complexity of networks, compounded by inter-cell interference, necessitates agile and adaptable resource management. While deep learning offers solutions for coping with complexity, its adaptability to dynamic configurations remains limited. In this paper, we propose a novel hybrid deep learning algorithm called IDLA (integrated deep learning with the Lagrangian method). This integrated approach aims to enhance the scalability, flexibility, and robustness of slicing resource allocation solutions by harnessing the high approximation capability of deep learning and the strong generalization of classical non-linear optimization methods. Then, we introduce a variational information bottleneck (VIB)-assisted domain adaptation (DA) approach to enhance IDLA’s adaptability across diverse network environments and conditions. We propose pre-training a VIB-based Quality of Service (QoS) estimator using slice-specific inputs shared across all source domain slices. Each target domain slice can deploy this estimator to predict its QoS and optimize slice resource allocation using the IDLA algorithm. This VIB-based estimator is continuously fine-tuned with a mixture of samples from both the source and target domains until convergence. Evaluating on a multi-cell network with time-varying slice configurations, the VIB-enhanced IDLA algorithm outperforms baselines such as heuristic and deep reinforcement learning-based solutions, achieving twice the convergence speed and 16.52% higher asymptotic performance after slicing configuration changes. Transferability assessment demonstrates a 25.66% improvement in estimation accuracy with VIB, especially in scenarios with significant domain gaps, highlighting its robustness and effectiveness across diverse domains.
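The compression term of a VIB objective has a closed form when the encoder outputs a diagonal Gaussian and the prior is standard normal. A minimal generic sketch of that term (not the paper's estimator), which would be added to the task loss with a weight beta:

```python
import numpy as np

def vib_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ): the compression term
    in the VIB objective  L = task_loss + beta * KL.  Computed in closed
    form per sample over the last (latent) axis."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)

# A latent distribution equal to the prior incurs zero compression cost;
# pushing the mean away from zero is penalized quadratically.
kl_at_prior = vib_kl(np.zeros(8), np.zeros(8))
kl_shifted = vib_kl(np.ones(8), np.zeros(8))
```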
Vol. 2, pp. 1642-1660. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10734592
Citations: 0
Polarization-Aware Channel State Prediction Using Phasor Quaternion Neural Networks
Pub Date : 2024-10-23 DOI: 10.1109/TMLCN.2024.3485521
Anzhe Ye;Haotian Chen;Ryo Natsuaki;Akira Hirose
The performance of a wireless communication system depends to a large extent on the wireless channel. Owing to the multipath fading environment encountered during radio wave propagation, channel prediction plays a vital role in enabling adaptive transmission for wireless communication systems. Predicting various channel characteristics with neural networks can help address more complex communication environments. However, achieving this goal typically requires the simultaneous use of multiple distinct neural models, which is undoubtedly unaffordable for mobile communications. It is therefore desirable to have a simpler structure that predicts multiple channel characteristics simultaneously. In this paper, we propose a fading channel prediction method using phasor quaternion neural networks (PQNNs) to predict the polarization states, incorporating phase information to enhance the channel compensation ability. We evaluate the performance of the proposed PQNN method in two different fading situations in an actual environment, and find that the proposed scheme provides 2.8 dB and 4.0 dB improvements at a bit error rate (BER) of $10^{-4}$, showing better BER performance in light and severe fading situations, respectively. This work also reveals that, by treating polarization information and phase information as a single entity, the model can leverage their physical correlation to achieve improved performance.
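The algebraic ingredient that lets a quaternion-valued network treat four signal components as a single entity is the Hamilton product, which replaces the real multiply-accumulate and couples all four components through each weight. A plain NumPy sketch (generic quaternion algebra, not the paper's PQNN layer):

```python
import numpy as np

def qmul(q, p):
    """Hamilton product of two quaternions given as (w, x, y, z) arrays.
    Note it is non-commutative: qmul(q, p) != qmul(p, q) in general."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

# Basis elements satisfy i * j = k (and j * i = -k).
i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
k = qmul(i, j)
```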
IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 1628–1641, 2024. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10731896
Citations: 0
TWIRLD: Transformer Generated Terahertz Waveform for Improved Radio Link Distance
Pub Date : 2024-10-17 DOI: 10.1109/TMLCN.2024.3483111
Shuvam Chakraborty;Claire Parisi;Dola Saha;Ngwe Thawdar
Terahertz (THz) band communication is envisioned as one of the leading technologies for meeting the exponentially growing data rate requirements of emerging and future wireless communication networks. Utilizing the contiguous bandwidth available at THz frequencies requires a transceiver design tailored to the issues present at these frequencies, such as strong propagation and absorption loss, small-scale fading (e.g., scattering, reflection, refraction), and hardware non-linearity. In prior works, multicarrier waveforms such as Orthogonal Frequency Division Multiplexing (OFDM) have been shown to be effective against some of these issues. However, OFDM introduces a drawback in the form of a high peak-to-average power ratio (PAPR), which, compounded with strong propagation and absorption loss and the high noise power that comes with the large bandwidths at THz and sub-THz frequencies, severely limits link distances and, in turn, capacity, preventing efficient bandwidth usage. In this work, we propose TWIRLD, a deep learning (DL)-based joint optimization method, modeled and implemented as components of an end-to-end transceiver chain. TWIRLD performs a symbol remapping at the baseband of OFDM signals, which increases average transmit power while also optimizing the bit error rate (BER). We provide theoretical analysis, statistical equivalence of TWIRLD to the ideal receiver, and comprehensive complexity and footprint estimates. We validate TWIRLD in simulation, showing link distance improvements of up to 91%, and compare the results with legacy and state-of-the-art methods and their enhanced versions. Finally, TWIRLD is validated with over-the-air (OTA) communication on a state-of-the-art testbed at 140 GHz with bandwidths up to 5 GHz, where we observe an improvement of up to 79% in link distance while accounting for practical channel and other transmission losses.
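The PAPR drawback of OFDM mentioned above is easy to quantify: it is the ratio of the peak instantaneous power of the time-domain symbol to its average power. A minimal sketch of the standard computation (a textbook illustration, not TWIRLD itself), with frequency-domain zero-padding to approximate the continuous-time peak:

```python
import numpy as np

def papr_db(freq_symbols, oversample=4):
    """Peak-to-average power ratio (dB) of one OFDM symbol.

    freq_symbols: complex constellation points on the subcarriers.
    Zero-padding in frequency (oversampling) better approximates the
    continuous-time peak of the transmitted waveform.
    """
    n = len(freq_symbols)
    padded = np.zeros(n * oversample, dtype=complex)
    padded[:n] = freq_symbols          # simple padding; real systems center the band
    x = np.fft.ifft(padded)            # time-domain OFDM symbol
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

# A single active subcarrier is a constant-envelope tone: PAPR = 0 dB.
single_tone = np.zeros(64, dtype=complex)
single_tone[1] = 1.0
print(papr_db(single_tone))

# Random QPSK on 64 subcarriers: the peaks add up and PAPR reaches several dB.
rng = np.random.default_rng(1)
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
print(f"PAPR = {papr_db(qpsk):.1f} dB")
```

The gap between the 0 dB single-tone case and the multi-dB random-symbol case is exactly the headroom a THz power amplifier must reserve, which is why PAPR reduction translates into link distance.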
IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 1595–1614, 2024. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10720922
Citations: 0
Recursive GNNs for Learning Precoding Policies With Size-Generalizability
Pub Date : 2024-10-14 DOI: 10.1109/TMLCN.2024.3480044
Jia Guo;Chenyang Yang
Graph neural networks (GNNs) have shown promise in optimizing power allocation and link scheduling, offering good size generalizability and low training complexity. These merits are important for learning wireless policies in dynamic environments, and they stem partly from the permutation equivariance (PE) properties of the GNNs being matched to the policies to be learned. Nonetheless, it has been noted in the literature that merely satisfying the PE property of a precoding policy in multi-antenna systems cannot ensure that a GNN for learning precoding generalizes to unseen problem scales. Incorporating models into GNNs helps improve size generalizability, but this is only applicable to specific problems, settings, and algorithms. In this paper, we propose a framework of size-generalizable GNNs for learning precoding policies that is purely data-driven and can learn wireless policies including, but not limited to, baseband and hybrid precoding in multi-user multi-antenna systems. To this end, we first identify a special structure shared by each iteration of several numerical algorithms for optimizing precoding, from which we distill the key characteristics of a GNN that affect its size generalizability. Then, we design size-generalizable GNNs that have these key characteristics and satisfy the PE properties of precoding policies in a recursive manner. Simulation results show that the proposed GNNs generalize well to the number of users when learning baseband and hybrid precoding policies, require far fewer samples than existing GNNs, and need shorter inference time than numerical algorithms to achieve the same performance.
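The permutation equivariance (PE) property discussed in the abstract can be illustrated with a minimal DeepSets-style layer: each user's output depends on its own features plus an order-invariant aggregate over all users, so permuting the users permutes the outputs identically. This is a generic PE layer for intuition only, not the recursive GNN proposed in the paper:

```python
import numpy as np

def pe_layer(X, W_self, W_agg):
    """One permutation-equivariant layer over a set of users.

    X: (n_users, d_in). Each user's output mixes its own features with an
    order-invariant aggregate (the mean over users), so permuting the rows
    of X permutes the rows of the output in exactly the same way.
    """
    agg = X.mean(axis=0, keepdims=True)               # (1, d_in), order-invariant
    return np.maximum(X @ W_self + agg @ W_agg, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                   # 5 users, 3 features each
W_self, W_agg = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))

perm = rng.permutation(5)
out = pe_layer(X, W_self, W_agg)
out_perm = pe_layer(X[perm], W_self, W_agg)
assert np.allclose(out[perm], out_perm)       # equivariance: permute in = permute out
```

The paper's point is that this property alone is not enough for size generalizability: the layer above works for any number of users, but nothing guarantees that weights trained on 5 users remain good for 50.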
IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 1558–1579, 2024. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10716720
Citations: 0
NeIL: Intelligent Replica Selection for Distributed Applications
Pub Date : 2024-10-11 DOI: 10.1109/TMLCN.2024.3479109
Faraz Ahmed;Lianjie Cao;Ayush Goel;Puneet Sharma
Distributed applications such as cloud gaming and streaming increasingly use edge-to-cloud infrastructure for high availability and performance. While edge infrastructure brings services closer to the end user, the number of sites on which the services must be replicated has also increased, which makes replica selection challenging for clients of the replicated services. Traditional replica selection methods, including anycast-based methods and DNS redirections, are performance-agnostic, and clients experience degraded network performance when network performance dynamics are not considered in replica selection. In this work, we present a client-side replica selection framework, NeIL, that enables network-performance-aware replica selection. We propose to use bandits-with-experts Multi-Armed Bandit (MAB) algorithms and adapt them for replica selection at individual clients without centralized coordination. We evaluate our approach in three different setups: a distributed Mininet setup in which we use publicly available network performance data from the Measurement Lab (M-Lab) to emulate network conditions, a setup in which we deploy replica servers on AWS, and finally a global enterprise deployment. Our experimental results show that, compared to greedy selection, NeIL performs better 45% of the time and better than or equal to greedy 80% of the time, resulting in a net gain in end-to-end network performance. On AWS, we see similar results, with NeIL performing better than or equal to greedy 75% of the time. We have successfully deployed NeIL in a global enterprise remote-device-management service with over 4000 client devices, and our analysis shows that NeIL achieves significantly better tail service quality, cutting the 99th-percentile tail latency from 5.6 seconds to 1.7 seconds.
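A client-side bandit selector of the kind described above can be sketched with the classic EXP3 update, treating each replica as an arm and mapping observed latency to a bounded reward. This is a generic illustration of the bandit approach, not NeIL's actual algorithm (which builds on bandits with experts); the replica names and latency scale are made up:

```python
import math
import random

class Exp3ReplicaSelector:
    """Client-side replica selection as an adversarial bandit (EXP3-style).

    Each replica is an arm; the reward of a request is its latency mapped
    into [0, 1] (lower latency -> higher reward). Every client runs its
    own instance, so no centralized coordination is needed.
    """

    def __init__(self, replicas, gamma=0.1):
        self.replicas = list(replicas)
        self.gamma = gamma                       # exploration rate
        self.weights = [1.0] * len(self.replicas)
        self._last = None

    def _probs(self):
        total = sum(self.weights)
        k = len(self.replicas)
        return [(1 - self.gamma) * w / total + self.gamma / k
                for w in self.weights]

    def choose(self):
        self._last = random.choices(range(len(self.replicas)),
                                    weights=self._probs())[0]
        return self.replicas[self._last]

    def update(self, latency_ms, worst_ms=1000.0):
        """Report the observed latency for the last chosen replica."""
        reward = max(0.0, 1.0 - latency_ms / worst_ms)   # clip into [0, 1]
        p = self._probs()[self._last]
        k = len(self.replicas)
        self.weights[self._last] *= math.exp(self.gamma * reward / (p * k))
        top = max(self.weights)                  # renormalize for stability
        self.weights = [w / top for w in self.weights]

random.seed(0)
selector = Exp3ReplicaSelector(["edge-a", "edge-b", "edge-c"])
latency = {"edge-a": 50.0, "edge-b": 400.0, "edge-c": 800.0}
for _ in range(1000):
    replica = selector.choose()
    selector.update(latency[replica])
# After enough requests the fastest replica carries the largest weight.
assert selector.weights[0] == max(selector.weights)
```

The exploration floor `gamma / k` keeps every replica sampled occasionally, which is what lets the selector track network dynamics instead of locking onto a stale "greedy" choice.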
IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 1580–1594, 2024. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10714467
Citations: 0