
Latest Publications from IEEE Transactions on Machine Learning in Communications and Networking

QoS Prediction for Satellite-Based Avionic Communication Using Transformers
Pub Date : 2026-01-13 DOI: 10.1109/TMLCN.2026.3653719
Hind Mukhtar;Raymond Schaub;Melike Erol-Kantarci
Satellite-based communication systems are crucial for providing high-speed data services in aviation, particularly for business aviation operations that demand global connectivity. These systems face challenges from numerous interdependent factors, such as satellite handovers, congestion, flight maneuvers, and seasonal variations, making accurate Quality of Service (QoS) prediction complex. Currently, there is no established methodology for predicting QoS in avionic communication systems. This paper addresses this gap by proposing machine learning-based approaches for pre-flight QoS prediction. Specifically, we leverage transformer models to predict QoS along a given flight path using real-world data. The model takes as input a variety of positional and network-related features, such as aircraft location, satellite information, historical QoS, and handover probabilities, and outputs a predicted performance score for each position along the flight. This approach allows for proactive decision-making, enabling flight crews to select optimal flight paths before departure, improving overall operational efficiency in business aviation. Our proposed encoder-decoder transformer model achieved an overall prediction accuracy of 65% and an RMSE of 1.91, representing a significant improvement over traditional baseline methods. While these metrics are notable, our model’s key contribution is a substantial improvement in prediction accuracy for underrepresented classes, which were a major limitation of prior approaches. Additionally, the model significantly reduces inference time, achieving predictions in 40 seconds compared to 6,353 seconds for a traditional KNN model.
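The abstract describes an encoder-decoder transformer that maps per-waypoint positional and network features to a per-position performance score. Below is a minimal sketch of a model with that shape; the feature count, sequence lengths, model width, and number of QoS classes are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of an encoder-decoder transformer for per-waypoint QoS
# prediction. Dimensions and class count are illustrative assumptions.
import torch
import torch.nn as nn

class QoSTransformer(nn.Module):
    def __init__(self, n_features=8, d_model=64, n_classes=10):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)   # project features to model dim
        self.core = nn.Transformer(d_model=d_model, nhead=4,
                                   num_encoder_layers=2, num_decoder_layers=2,
                                   batch_first=True)
        self.head = nn.Linear(d_model, n_classes)     # per-position QoS score

    def forward(self, history, flight_path):
        # history: (batch, T_hist, n_features) — past track with QoS context
        # flight_path: (batch, T_future, n_features) — planned waypoints
        src = self.embed(history)
        tgt = self.embed(flight_path)
        return self.head(self.core(src, tgt))         # (batch, T_future, n_classes)

model = QoSTransformer()
hist = torch.randn(2, 32, 8)       # assumed past-track features
path = torch.randn(2, 16, 8)       # candidate flight path to score
print(model(hist, path).shape)     # torch.Size([2, 16, 10])
```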
Citations: 0
Dictionary Learning for Phase-Less Beam Alignment Codebook Design in Multipath Channels
Pub Date : 2026-01-12 DOI: 10.1109/TMLCN.2026.3653010
Benjamin W. Domae;Danijela Cabric
Large antenna arrays are critical for reliability and high data rates in wireless networks at millimeter-wave and sub-terahertz bands. While traditional initial beam alignment methods for analog phased arrays scale the alignment overhead linearly with the array size, compressive sensing (CS) and machine learning (ML) algorithms can scale it logarithmically. CS and ML methods typically utilize pseudo-random or heuristic beam designs as compressive codebooks. However, these codebooks may not be optimal for scenarios with uncertain array impairments or multipath, particularly when measurements are phase-less or power-based. In this work, we propose a novel dictionary learning method to design codebooks for phase-less beam alignment given multipath and unknown impairment statistics. This codebook learning algorithm uses alternating optimization with block coordinate descent to update the codebooks, and Monte Carlo trials over multipath and impairments to incorporate a priori knowledge of the hardware and environment. Additionally, we discuss engineering considerations for the codebook design algorithm, including a comparison of three proposed loss functions and three proposed beam alignment algorithms used for codebook learning. As one of the three beam alignment methods, we propose transfer learning for ML-based beam alignment to reduce the training time of both the ML model and codebook learning. We demonstrate that codebook learning and our ML-based beam alignment algorithms can significantly reduce the beam alignment overhead in terms of the number of measurements required.
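As a rough illustration of alternating optimization with block coordinate descent over Monte Carlo channel draws, the toy sketch below perturbs one phase-only beam at a time against a fixed set of simulated two-path channels. The variance-based surrogate loss and the channel model are stand-ins of our own; the paper evaluates three different loss functions and richer impairment models.

```python
# Toy block-coordinate codebook search over phase-less (power) measurements.
# Loss, channel model, and update rule are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
N, M, T = 16, 6, 200              # antennas, codebook beams, Monte Carlo channels

def draw_channel():               # random 2-path channel (assumed environment model)
    angles = rng.uniform(-np.pi / 2, np.pi / 2, 2)
    gains = np.array([1.0, 0.3]) * np.exp(1j * rng.uniform(0, 2 * np.pi, 2))
    A = np.exp(-1j * np.pi * np.outer(np.arange(N), np.sin(angles)))
    return A @ gains

H = np.stack([draw_channel() for _ in range(T)])                  # fixed MC draw, (T, N)
W = np.exp(1j * rng.uniform(0, 2 * np.pi, (M, N))) / np.sqrt(N)   # phase-only init

def loss(W):
    Y = np.abs(H @ W.conj().T) ** 2       # phase-less (power) measurements, (T, M)
    return -Y.var(axis=1).mean()          # heuristic surrogate: spread-out profiles

for _ in range(20):                       # alternating optimization over beams
    for m in range(M):                    # block coordinate descent: one beam at a time
        best, best_loss = W[m].copy(), loss(W)
        for _ in range(5):                # local random phase search
            W[m] = best * np.exp(1j * 0.1 * rng.standard_normal(N))
            if loss(W) < best_loss:
                best, best_loss = W[m].copy(), loss(W)
        W[m] = best

print("final surrogate loss:", loss(W))
```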
Citations: 0
UCINet0: A Machine Learning-Based Receiver for 5G NR PUCCH Format 0
Pub Date : 2026-01-05 DOI: 10.1109/TMLCN.2025.3650730
Jeeva Keshav Sattianarayanin;Anil Kumar Yerrapragada;Radha Krishna Ganti
Accurate decoding of Uplink Control Information (UCI) on the Physical Uplink Control Channel (PUCCH) is essential for enabling 5G wireless links. This paper explores an AI/ML-based receiver design for PUCCH Format 0. Format 0 signaling encodes the UCI content within the phase of a known base waveform and even supports multiplexing of up to 12 users within the same time-frequency resources. The proposed neural network classifier, which we term UCINet0, is capable of predicting when no user is transmitting on the PUCCH, as well as decoding the UCI content for any number of multiplexed users (up to 12). The test results with simulated, hardware-captured (lab) and field datasets show that the UCINet0 model outperforms conventional correlation-based decoders across all Signal-to-Noise Ratio (SNR) ranges and multiple fading scenarios.
Citations: 0
Signal Whisperers: Enhancing Wireless Reception Using DRL-Guided Reflector Arrays
Pub Date : 2026-01-01 DOI: 10.1109/TMLCN.2025.3650440
Hieu Le;Oguz Bedir;Mostafa Ibrahim;Jian Tao;Sabit Ekin
This paper presents a multi-agent reinforcement learning (MARL) approach for controlling adjustable metallic reflector arrays to enhance wireless signal reception in non-line-of-sight (NLOS) scenarios. Unlike conventional reconfigurable intelligent surfaces (RIS) that require complex channel estimation, our system employs a centralized training with decentralized execution (CTDE) paradigm where individual agents corresponding to reflector segments autonomously optimize reflector element orientation in three-dimensional space using spatial intelligence based on user location information. Through extensive ray-tracing simulations with dynamic user mobility, the proposed multi-agent beam-focusing framework demonstrates substantial performance improvements over single-agent reinforcement learning baselines, while maintaining rapid adaptation to user movement within one simulation step. Comprehensive evaluation across varying user densities and reflector configurations validates system scalability and robustness. The results demonstrate the potential of learning-based approaches for adaptive wireless propagation control.
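The CTDE structure described above can be sketched as per-segment actor networks that execute on local observations plus user location, with a centralized critic that sees the joint observation-action vector during training. Dimensions and network sizes below are illustrative assumptions.

```python
# Structural sketch of centralized training with decentralized execution
# (CTDE) for reflector segments. All sizes are illustrative.
import torch
import torch.nn as nn

N_AGENTS, OBS, ACT = 4, 6, 2     # segments; per-agent obs; tilt/rotation action

actors = nn.ModuleList(
    nn.Sequential(nn.Linear(OBS, 64), nn.ReLU(), nn.Linear(64, ACT), nn.Tanh())
    for _ in range(N_AGENTS)
)
critic = nn.Sequential(          # centralized: joint observations + actions
    nn.Linear(N_AGENTS * (OBS + ACT), 128), nn.ReLU(), nn.Linear(128, 1)
)

obs = torch.randn(N_AGENTS, OBS)                          # per-agent local obs
acts = torch.stack([a(o) for a, o in zip(actors, obs)])   # decentralized execution
q = critic(torch.cat([obs.flatten(), acts.flatten()]))    # centralized value
print(acts.shape, q.item())
```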
Citations: 0
Clustering-Assisted Deep Reinforcement Learning for Joint Trajectory Design and Resource Allocation in Two-Tier-Cooperated UAVs Communications
Pub Date : 2025-12-23 DOI: 10.1109/TMLCN.2025.3647806
Shujun Zhao;Simeng Feng;Chao Dong;Xiaojun Zhu;Qihui Wu
Considering their high mobility and relatively low cost, uncrewed aerial vehicles (UAVs) equipped with mobile base stations are regarded as a potential technological approach. However, the dual pressures of limited onboard resources of UAVs and the demand for high-quality services in dynamic low-altitude applications jointly form a bottleneck for system performance. Although multi-UAV communication networks can provide higher system performance through coordinated deployment, the challenges of cooperation and competition among UAVs, as well as more complex optimization problems, significantly increase costs and pose formidable challenges. To overcome the challenges of low coordination efficiency and intense resource competition among multiple UAVs, and to ensure that the communication service demands of ground users (GUs) are satisfied in a timely and efficient manner, this paper conceives a centrally controlled, two-tier-cooperated UAV communication network. The network comprises a central UAV (C-UAV) tier as the control center and a marginal UAV (M-UAV) tier to serve GUs. In response to increasingly dynamic and complex scenarios, along with the challenge of insufficient generalization ability in Deep Reinforcement Learning (DRL) algorithms, we propose a clustering-assisted dual-agent soft actor-critic (CDA-SAC) algorithm for trajectory design and resource allocation, aiming to maximize the fair energy efficiency of the system. Specifically, by integrating a clustering-matching method with a dual-agent strategy, the proposed CDA-SAC algorithm achieves significant improvements in generalization ability and exploration capability. Simulation results demonstrate that the proposed CDA-SAC algorithm can be deployed without retraining in scenarios with different numbers of GUs. Furthermore, the CDA-SAC algorithm outperforms both the multi-UAV scenarios based on the MADDPG algorithm and the FDMA scheme in terms of fairness and total energy efficiency.
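To illustrate the clustering-matching step in isolation, the sketch below clusters ground users with a tiny k-means and treats each centroid as the hover target matched to one M-UAV; the dual-agent SAC learning for trajectories and resource allocation is omitted, and all quantities are illustrative.

```python
# Sketch of clustering-matching: cluster GU positions, match each M-UAV to
# one centroid. Positions, counts, and iterations are illustrative.
import numpy as np

rng = np.random.default_rng(1)
gus = rng.uniform(0, 1000, (60, 2))        # ground-user positions (m), assumed
K = 4                                      # number of M-UAVs / clusters
cent = gus[rng.choice(len(gus), K, replace=False)]

for _ in range(20):                        # Lloyd iterations
    label = np.linalg.norm(gus[:, None] - cent[None], axis=2).argmin(axis=1)
    cent = np.array([gus[label == k].mean(axis=0) if np.any(label == k) else cent[k]
                     for k in range(K)])   # keep old centroid if cluster empties

print("M-UAV hover targets:\n", cent)
```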
Citations: 0
Data-Driven Cellular Mobility Management Via Bayesian Optimization and Reinforcement Learning
Pub Date : 2025-12-23 DOI: 10.1109/TMLCN.2025.3647807
Mohamed Benzaghta;Sahar Ammar;David López-Pérez;Basem Shihada;Giovanni Geraci
Mobility management in cellular networks faces increasing complexity due to network densification and heterogeneous user mobility characteristics. Traditional handover (HO) mechanisms, which rely on predefined parameters such as A3-offset and time-to-trigger (TTT), often fail to optimize mobility performance across varying speeds and deployment conditions. Fixed A3-offset and TTT configurations either delay HOs, increasing radio link failures (RLFs), or accelerate them, leading to excessive ping-pong effects. To address these challenges, we propose two distinct data-driven mobility management approaches leveraging high-dimensional Bayesian optimization (HD-BO) and deep reinforcement learning (DRL). While HD-BO optimizes predefined HO parameters such as A3-offset and TTT, DRL provides a parameter-free alternative by allowing an agent to select serving cells based on real-time network conditions. We systematically compare these two approaches in real-world site-specific deployment scenarios (employing Sionna ray tracing for channel propagation modeling), highlighting their complementary strengths. Results show that both HD-BO and DRL outperform 3GPP set-1 (TTT of 480 ms and A3-offset of 3 dB) and set-5 (TTT of 40 ms and A3-offset of −1 dB) benchmarks. We augment HD-BO with transfer learning so it can generalize across a range of user speeds. Applying the same transfer-learning strategy to the DRL method reduces its training time by a factor of 2.5 while preserving optimal HO performance, showing that it adapts efficiently to the mobility of aerial users such as UAVs. Simulations further reveal that HD-BO remains more sample-efficient than DRL, making it more suitable for scenarios with limited training data.
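A minimal GP-UCB sketch of Bayesian optimization over the two classic HO parameters (A3-offset, TTT) is shown below. The KPI objective is a synthetic stand-in for a mobility metric such as a combined RLF/ping-pong score; the paper's HD-BO targets a much higher-dimensional, site-specific problem.

```python
# Minimal GP-UCB Bayesian optimization over (A3-offset dB, TTT ms).
# The objective and kernel length scales are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def kpi(x):   # toy stand-in objective: best near A3=2 dB, TTT=160 ms
    a3, ttt = x
    return -((a3 - 2.0) / 3) ** 2 - ((ttt - 160) / 200) ** 2 \
           + 0.05 * rng.standard_normal()

def rbf(A, B, ls=np.array([2.0, 150.0])):
    d = (A[:, None] - B[None]) / ls
    return np.exp(-0.5 * (d ** 2).sum(-1))

cand = np.array([(a3, ttt) for a3 in np.linspace(-3, 6, 19)
                           for ttt in [40, 80, 160, 320, 480]])
X = cand[rng.choice(len(cand), 3, replace=False)]   # initial random probes
y = np.array([kpi(x) for x in X])

for _ in range(15):
    K = rbf(X, X) + 1e-4 * np.eye(len(X))
    Ks = rbf(cand, X)
    mu = Ks @ np.linalg.solve(K, y)                              # posterior mean
    var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0))                 # UCB acquisition
    x_next = cand[ucb.argmax()]
    X = np.vstack([X, x_next])
    y = np.append(y, kpi(x_next))

print("best (A3-offset dB, TTT ms):", X[y.argmax()])
```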
Citations: 0
Transforming Indoor Localization: Advanced Transformer Architecture for NLOS Dominated Wireless Environments With Distributed Sensors
Pub Date : 2025-12-23 DOI: 10.1109/TMLCN.2025.3647376
Saad Masrur;Jung-Fu Cheng;Atieh R. Khamesi;İsmail Güvenç
Indoor localization in challenging non-line-of-sight (NLOS) environments often leads to poor accuracy with traditional approaches. Deep learning (DL) has been applied to tackle these challenges; however, many DL approaches overlook computational complexity, especially for floating-point operations (FLOPs), making them unsuitable for resource-limited devices. Transformer-based models have achieved remarkable success in natural language processing (NLP) and computer vision (CV) tasks, motivating their use in wireless applications. However, their use in indoor localization remains nascent, and directly applying Transformers for indoor localization can be both computationally intensive and exhibit limitations in accuracy. To address these challenges, in this work, we introduce a novel tokenization approach, referred to as Sensor Snapshot Tokenization (SST), which preserves variable-specific representations of power delay profile (PDP) and enhances attention mechanisms by effectively capturing multi-variate correlation. Complementing this, we propose a lightweight Swish-Gated Linear Unit-based Transformer (L-SwiGLU-T) model, designed to reduce computational complexity without compromising localization accuracy. Together, these contributions mitigate the computational burden and dependency on large datasets, making Transformer models more efficient and suitable for resource-constrained scenarios. Experimental results on simulated and real-world datasets demonstrate that SST and L-SwiGLU-T achieve substantial accuracy and efficiency gains, outperforming larger Transformer and CNN baselines by over 40% while using significantly fewer FLOPs and training samples.
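The sketch below illustrates the two named ingredients under assumed dimensions: a per-sensor tokenization that keeps each sensor's power delay profile (PDP) as its own token, and a SwiGLU feed-forward block of the kind used in lightweight Transformers. It is not the paper's exact SST or L-SwiGLU-T architecture.

```python
# Per-sensor PDP tokenization plus a SwiGLU feed-forward block.
# All dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    def __init__(self, d_model=64, d_hidden=128):
        super().__init__()
        self.w_gate = nn.Linear(d_model, d_hidden)
        self.w_up = nn.Linear(d_model, d_hidden)
        self.w_down = nn.Linear(d_hidden, d_model)

    def forward(self, x):                  # SwiGLU(x) = down(silu(gate) * up)
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))

n_sensors, pdp_len, d_model = 8, 32, 64
tokenize = nn.Linear(pdp_len, d_model)     # one token per sensor's PDP
pdp = torch.randn(4, n_sensors, pdp_len)   # (batch, sensors, delay taps)
tokens = tokenize(pdp)                     # (batch, sensors, d_model)
enc = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
out = SwiGLU()(enc(tokens))                # attention across sensor tokens
print(out.shape)                           # torch.Size([4, 8, 64])
```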
Citations: 0
Explainable Multi-Agent Reinforcement Learning for Extended Reality Codec Adaptation
Pub Date : 2025-12-18 DOI: 10.1109/TMLCN.2025.3646125
Pedro Enrique Iturria-Rivera;Raimundas Gaigalas;Medhat Elsayed;Majid Bavand;Yigit Ozcan;Melike Erol-Kantarci
Extended Reality (XR) services are set to transform applications over 5th and 6th generation wireless networks, delivering immersive experiences. Concurrently, Artificial Intelligence (AI) advancements have expanded their role in wireless networks, however, trust and transparency in AI remain to be strengthened. Thus, providing explanations for AI-enabled systems can enhance trust. We introduce Value Function Factorization (VFF)-based Explainable (X) Multi-Agent Reinforcement Learning (MARL) algorithms, explaining reward design in XR codec adaptation through reward decomposition. We contribute four enhancements to XMARL algorithms. Firstly, we detail architectural modifications to enable reward decomposition in VFF-based MARL algorithms: Value Decomposition Networks (VDN), Mixture of Q-Values (QMIX), and Q-Transformation (Q-TRAN). Secondly, inspired by multi-task learning, we reduce the overhead of vanilla XMARL algorithms. Thirdly, we propose a new explainability metric, Reward Difference Fluctuation Explanation (RDFX), suitable for problems with adjustable parameters. Lastly, we propose adaptive XMARL, leveraging network gradients and reward decomposition for improved action selection. Simulation results indicate that, in XR codec adaptation, the Packet Delivery Ratio reward is the primary contributor to optimal performance compared to the initial composite reward, which included delay and Data Rate Ratio components. Modifications to VFF-based XMARL algorithms, incorporating multi-headed structures and adaptive loss functions, enable the best-performing algorithm, Multi-Headed Adaptive (MHA)-QMIX, to achieve significant average gains over the Adjust Packet Size baseline up to 10.7%, 41.4%, 33.3%, and 67.9% in XR index, jitter, delay, and Packet Loss Ratio (PLR), respectively.
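Reward decomposition of the kind described above can be sketched as a multi-headed Q-network with one head per reward component, whose sum is the total Q-value; inspecting the per-component values of the chosen action yields the explanation signal. The component names and sizes below are assumptions, not the paper's exact heads.

```python
# Multi-headed Q-network for reward decomposition: total Q is the sum of
# per-component Q-heads. Components and sizes are illustrative.
import torch
import torch.nn as nn

class DecomposedQ(nn.Module):
    def __init__(self, obs_dim=10, n_actions=5, components=("delay", "pdr", "drr")):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.heads = nn.ModuleDict({c: nn.Linear(64, n_actions) for c in components})

    def forward(self, obs):
        z = self.trunk(obs)
        q_parts = {c: head(z) for c, head in self.heads.items()}
        q_total = torch.stack(list(q_parts.values())).sum(0)   # sum over components
        return q_total, q_parts

q = DecomposedQ()
q_total, q_parts = q(torch.randn(1, 10))
a = q_total.argmax(-1)
# Per-component contribution to the chosen action — the explanation signal.
print({c: v[0, a].item() for c, v in q_parts.items()})
```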
Citations: 0
AIS-Based Hybrid Vessel Trajectory Prediction for Enhanced Maritime Navigation
Pub Date : 2025-12-16 DOI: 10.1109/TMLCN.2025.3644333
Ons Aouedi;Flor Ortiz;Thang X. Vu;Alexandre Lefourn;Felix Giese;Guillermo Gutierrez;Symeon Chatzinotas
The growing integration of non-terrestrial networks (NTNs), particularly low Earth orbit (LEO) satellite constellations, has significantly extended the reach of maritime connectivity, supporting critical applications such as vessel monitoring, navigation safety, and maritime surveillance in remote and oceanic regions. Automatic Identification System (AIS) data, increasingly collected through a combination of satellite and terrestrial infrastructures, provide a rich source of spatiotemporal vessel information. However, accurate trajectory prediction in maritime domains remains challenging due to irregular sampling rates, dynamic environmental conditions, and heterogeneous vessel behaviors. This study proposes a velocity-based trajectory prediction framework that leverages AIS data collected from integrated satellite–terrestrial networks. Rather than directly predicting absolute positions (latitude and longitude), our model predicts vessel motion in the form of latitude and longitude velocities. This formulation simplifies the learning task, enhances temporal continuity, and improves scalability, making it well-suited for resource-constrained NTN environments. The predictive architecture is built upon a Long Short-Term Memory network enhanced with attention mechanisms and residual connections (LSTM-RA), enabling it to capture complex temporal dependencies and adapt to noise in real-world AIS data. Extensive experiments on two maritime datasets validate the robustness and accuracy of our framework, demonstrating clear improvements over state-of-the-art baselines.
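A minimal sketch of the velocity-based formulation follows: an LSTM with attention pooling and a residual connection predicts latitude/longitude velocities, which are then integrated to the next position. The feature set and layer sizes are illustrative, not the paper's exact LSTM-RA configuration.

```python
# Velocity-based trajectory prediction: LSTM + attention pooling + residual
# connection, predicting (v_lat, v_lon). Sizes and features are assumptions.
import torch
import torch.nn as nn

class VelocityLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)            # attention scores over time
        self.skip = nn.Linear(n_features, hidden)   # residual path from last input
        self.head = nn.Linear(hidden, 2)            # (v_lat, v_lon)

    def forward(self, x):                           # x: (batch, T, features)
        h, _ = self.lstm(x)
        w = torch.softmax(self.attn(h), dim=1)      # (batch, T, 1)
        ctx = (w * h).sum(dim=1)                    # attention-pooled context
        ctx = ctx + self.skip(x[:, -1])             # residual connection
        return self.head(ctx)

model = VelocityLSTM()
track = torch.randn(8, 20, 4)          # assumed AIS features: lat, lon, SOG, COG
v = model(track)                       # predicted velocities (units per step)
next_pos = track[:, -1, :2] + v        # integrate one step ahead
print(v.shape, next_pos.shape)
```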
Citations: 0
Multi-Agent Federated Learning Using Covariance-Based Nearest Neighbor Gaussian Processes
Pub Date : 2025-12-12 DOI: 10.1109/TMLCN.2025.3643409
George P. Kontoudis;Daniel J. Stilwell
In this paper, we propose scalable methods for Gaussian process (GP) prediction in decentralized multi-agent systems. Multiple aggregation techniques for GP prediction are decentralized with the use of iterative and consensus methods. Moreover, we introduce a covariance-based nearest neighbor selection strategy that leverages cross-covariance similarity, enabling subsets of agents to make accurate predictions. The proposed decentralized schemes preserve the consistency properties of their centralized counterparts, while adhering to federated learning principles by restricting raw data exchange between agents. We validate the efficacy of the proposed decentralized algorithms with numerical experiments on real-world sea surface temperature and ground elevation map datasets across multiple fleet sizes.
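The covariance-based neighbor selection can be sketched as follows: for a query point, the m training points with the largest cross-covariance to the query (rather than the smallest Euclidean distance) form the conditioning set for a local GP posterior mean. The kernel, hyperparameters, and data below are illustrative, and the multi-agent consensus machinery is omitted.

```python
# Covariance-based nearest-neighbor GP prediction: condition on the m points
# with the largest cross-covariance to the query. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (200, 2))                 # training inputs (e.g., positions)
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.05 * rng.standard_normal(200)

def k(A, B, ls=1.5):                             # squared-exponential kernel
    d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def nn_gp_predict(x_star, m=20, noise=0.05 ** 2):
    kx = k(x_star[None], X)[0]                   # cross-covariance to all points
    idx = np.argsort(kx)[-m:]                    # covariance-based neighbor set
    Kn = k(X[idx], X[idx]) + noise * np.eye(m)
    return kx[idx] @ np.linalg.solve(Kn, y[idx]) # local GP posterior mean

xq = np.array([5.0, 5.0])
print("prediction:", nn_gp_predict(xq), "truth:", np.sin(5.0) * np.cos(5.0))
```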
Citations: 0