Latest Articles in IEEE Transactions on Machine Learning in Communications and Networking

Semantic Meta-Split Learning: A TinyML Scheme for Few-Shot Wireless Image Classification
Pub Date: 2025-04-03 | DOI: 10.1109/TMLCN.2025.3557734
Eslam Eldeeb;Mohammad Shehab;Hirley Alves;Mohamed-Slim Alouini
Semantic and goal-oriented (SGO) communication is an emerging technology that transmits only the information significant to a given task. Semantic communication faces many challenges, such as computational complexity at end users, data availability, and privacy preservation. This work presents a TinyML-based semantic communication framework for few-shot wireless image classification that integrates split learning and meta-learning. We exploit split learning to limit the computations performed by end users while preserving privacy. In addition, meta-learning overcomes data availability concerns and speeds up training by utilizing similarly trained tasks. The proposed algorithm is tested on a dataset of handwritten-letter images. We also present an uncertainty analysis of the predictions using conformal prediction (CP) techniques. Simulation results show that the proposed Semantic-MSL outperforms conventional schemes, achieving a 20% gain in classification accuracy while using fewer data points and less training energy.
{"title":"Semantic Meta-Split Learning: A TinyML Scheme for Few-Shot Wireless Image Classification","authors":"Eslam Eldeeb;Mohammad Shehab;Hirley Alves;Mohamed-Slim Alouini","doi":"10.1109/TMLCN.2025.3557734","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3557734","url":null,"abstract":"Semantic and goal-oriented (SGO) communication is an emerging technology that only transmits significant information for a given task. Semantic communication encounters many challenges, such as computational complexity at end users, availability of data, and privacy-preserving. This work presents a TinyML-based semantic communication framework for few-shot wireless image classification that integrates split-learning and meta-learning. We exploit split-learning to limit the computations performed by the end-users while ensuring privacy-preserving. In addition, meta-learning overcomes data availability concerns and speeds up training by utilizing similarly trained tasks. The proposed algorithm is tested using a data set of images of hand-written letters. In addition, we present an uncertainty analysis of the predictions using conformal prediction (CP) techniques. Simulation results show that the proposed Semantic-MSL outperforms conventional schemes by achieving a <inline-formula> <tex-math>${20} %$ </tex-math></inline-formula> gain in classification accuracy using fewer data points yet less training energy consumption.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"491-501"},"PeriodicalIF":0.0,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10948463","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143817929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
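
To make the split-learning component concrete: only the cut-layer activations travel from the device to the server, and only their gradients travel back, which is what keeps device computation and data exposure low. Below is a minimal PyTorch sketch of one such training step; the layer sizes, the cut position, and the 26-class letter task are illustrative assumptions, and the meta-learning outer loop and conformal prediction stage from the paper are omitted.

```python
import torch
import torch.nn as nn

# Device-side ("client") and server-side halves of one classifier.
client = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
server = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 26))
opt_c = torch.optim.SGD(client.parameters(), lr=0.01)
opt_s = torch.optim.SGD(server.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def split_step(x, y):
    smashed = client(x)                        # cut-layer activations, sent uplink
    h = smashed.detach().requires_grad_(True)  # server treats them as a leaf tensor
    loss = loss_fn(server(h), y)
    opt_c.zero_grad(); opt_s.zero_grad()
    loss.backward()                            # fills server grads and h.grad
    smashed.backward(h.grad)                   # activation gradient sent downlink
    opt_c.step(); opt_s.step()
    return loss.item()

x = torch.randn(8, 1, 28, 28)                  # toy batch of letter images
y = torch.randint(0, 26, (8,))
print(split_step(x, y))
```
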
Spatio-Temporal Predictive Learning Using Crossover Attention for Communications and Networking Applications
Pub Date: 2025-03-31 | DOI: 10.1109/TMLCN.2025.3555975
Ke He;Thang Xuan Vu;Lisheng Fan;Symeon Chatzinotas;Björn Ottersten
This paper investigates the spatio-temporal predictive learning problem, which is crucial in diverse applications such as MIMO channel prediction, mobile traffic analysis, and network slicing. To address this problem, the attention mechanism has been adopted by many existing models to predict future outputs. However, most of these models use single-domain attention, which captures input dependency structures only in the temporal domain. This limitation reduces their prediction accuracy in spatio-temporal predictive learning, where understanding both spatial and temporal dependencies is essential. To tackle this issue and enhance prediction performance, we propose a novel crossover attention mechanism in this paper. The crossover attention can be understood as a learnable regression kernel that prioritizes the input sequence by both spatial and temporal similarity and extracts the information relevant for generating the output of future time slots. Simulation results and ablation studies on synthetic and realistic datasets show that the proposed crossover attention achieves a considerable improvement in prediction accuracy over conventional attention layers.
{"title":"Spatio-Temporal Predictive Learning Using Crossover Attention for Communications and Networking Applications","authors":"Ke He;Thang Xuan Vu;Lisheng Fan;Symeon Chatzinotas;Björn Ottersten","doi":"10.1109/TMLCN.2025.3555975","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3555975","url":null,"abstract":"This paper investigates the spatio-temporal predictive learning problem, which is crucial in diverse applications such as MIMO channel prediction, mobile traffic analysis, and network slicing. To address this problem, the attention mechanism has been adopted by many existing models to predict future outputs. However, most of these models use a single-domain attention which captures input dependency structures only in the temporal domain. This limitation reduces their prediction accuracy in spatio-temporal predictive learning, where understanding both spatial and temporal dependencies is essential. To tackle this issue and enhance the prediction performance, we propose a novel crossover attention mechanism in this paper. The crossover attention can be understood as a learnable regression kernel which prioritizes the input sequence with both spatial and temporal similarities and extracts relevant information for generating the output of future time slots. Simulation results and ablation studies based on synthetic and realistic datasets show that the proposed crossover attention achieves considerable prediction accuracy improvement compared to the conventional attention layers.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"479-490"},"PeriodicalIF":0.0,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10945971","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143817995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
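
The abstract frames crossover attention as a learnable regression kernel that weighs the input by spatial and temporal similarity at once. The NumPy sketch below is one plausible reading of that idea, not the paper's architecture: it forms separate temporal and spatial attention maps from averaged queries and keys, then applies both maps to the value tensor.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def crossover_attention(x, d_k=16, seed=0):
    # x: (T, S, F) = time steps x spatial sites x features
    T, S, F = x.shape
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.standard_normal((F, d_k)) / np.sqrt(F) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv                         # each (T, S, d_k)
    # temporal map: similarity between time steps (site-averaged)
    At = softmax(q.mean(1) @ k.mean(1).T / np.sqrt(d_k))     # (T, T)
    # spatial map: similarity between sites (time-averaged)
    As = softmax(q.mean(0) @ k.mean(0).T / np.sqrt(d_k))     # (S, S)
    # crossover: apply both maps to the value tensor at once
    return np.einsum('tu,usd,vs->tvd', At, v, As)            # (T, S, d_k)

print(crossover_attention(np.random.randn(12, 5, 8)).shape)  # (12, 5, 16)
```
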
Neuromorphic Wireless Split Computing With Multi-Level Spikes
Pub Date: 2025-03-31 | DOI: 10.1109/TMLCN.2025.3556634
Dengyu Wu;Jiechen Chen;Bipin Rajendran;H. Vincent Poor;Osvaldo Simeone
Inspired by biological processes, neuromorphic computing leverages spiking neural networks (SNNs) to perform inference tasks, offering significant efficiency gains for workloads involving sequential data. Recent advances in hardware and software have shown that embedding a small payload within each spike exchanged between spiking neurons can enhance inference accuracy without increasing energy consumption. To scale neuromorphic computing to larger workloads, split computing—where an SNN is partitioned across two devices—is a promising solution. In such architectures, the device hosting the initial layers must transmit information about the spikes generated by its output neurons to the second device. This establishes a trade-off between the benefits of multi-level spikes, which carry additional payload information, and the communication resources required for transmitting extra bits between devices. This paper presents the first comprehensive study of a neuromorphic wireless split computing architecture that employs multi-level SNNs. We propose digital and analog modulation schemes for an orthogonal frequency division multiplexing (OFDM) radio interface to enable efficient communication. Simulation and experimental results using software-defined radios reveal performance improvements achieved by multi-level SNN models and provide insights into the optimal payload size as a function of the connection quality between the transmitter and receiver.
{"title":"Neuromorphic Wireless Split Computing With Multi-Level Spikes","authors":"Dengyu Wu;Jiechen Chen;Bipin Rajendran;H. Vincent Poor;Osvaldo Simeone","doi":"10.1109/TMLCN.2025.3556634","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3556634","url":null,"abstract":"Inspired by biological processes, neuromorphic computing leverages spiking neural networks (SNNs) to perform inference tasks, offering significant efficiency gains for workloads involving sequential data. Recent advances in hardware and software have shown that embedding a small payload within each spike exchanged between spiking neurons can enhance inference accuracy without increasing energy consumption. To scale neuromorphic computing to larger workloads, split computing—where an SNN is partitioned across two devices—is a promising solution. In such architectures, the device hosting the initial layers must transmit information about the spikes generated by its output neurons to the second device. This establishes a trade-off between the benefits of multi-level spikes, which carry additional payload information, and the communication resources required for transmitting extra bits between devices. This paper presents the first comprehensive study of a neuromorphic wireless split computing architecture that employs multi-level SNNs. We propose digital and analog modulation schemes for an orthogonal frequency division multiplexing (OFDM) radio interface to enable efficient communication. Simulation and experimental results using software-defined radios reveal performance improvements achieved by multi-level SNN models and provide insights into the optimal payload size as a function of the connection quality between the transmitter and receiver.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"502-516"},"PeriodicalIF":0.0,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10946192","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143835442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
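
The key quantity here is the small payload carried by each spike. A minimal NumPy sketch of a multi-level spike encoder and decoder follows; quantizing the supra-threshold membrane potential into 2**bits - 1 levels and the saturation range are illustrative assumptions, not the paper's exact encoding.

```python
import numpy as np

def encode_spikes(potential, threshold=1.0, bits=2):
    """0 means no spike; otherwise the supra-threshold potential is
    quantized into 2**bits - 1 payload levels."""
    levels = 2 ** bits - 1
    fired = potential >= threshold
    # map [threshold, 2*threshold] onto payload values 1..levels
    scaled = (potential - threshold) / threshold * (levels - 1)
    payload = 1 + np.clip(np.round(scaled), 0, levels - 1)
    return np.where(fired, payload, 0).astype(int)

def decode_spikes(spikes, threshold=1.0, bits=2):
    levels = 2 ** bits - 1
    value = threshold * (1 + (spikes - 1) / max(levels - 1, 1))
    return np.where(spikes > 0, value, 0.0)

v = np.array([0.3, 1.0, 1.4, 2.5])
s = encode_spikes(v)               # -> [0 1 2 3], each sent in 2 payload bits
print(s, decode_spikes(s))
```
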
A Generalized GNN-Transformer-Based Radio Link Failure Prediction Framework in 5G RAN
Pub Date: 2025-03-30 | DOI: 10.1109/TMLCN.2025.3575368
Kazi Hasan;Khaleda Papry;Thomas Trappenberg;Israat Haque
A Radio Link Failure (RLF) prediction system in Radio Access Networks (RANs) is critical for ensuring seamless communication and meeting the stringent requirements of high data rates, low latency, and improved reliability in 5G networks. However, weather conditions such as precipitation, humidity, temperature, and wind impact these communication links. Usually, historical radio link Key Performance Indicators (KPIs) and observations from surrounding weather stations are used to build learning-based RLF prediction models. However, such models must be capable of learning the spatial weather context in a dynamic RAN and effectively encoding time-series KPIs together with the weather observation data. Existing work uses a heuristic, non-generalizable weather station aggregation method with Long Short-Term Memory (LSTM) for non-weighted sequence modeling. This paper fills the gap by proposing GenTrap, a novel RLF prediction framework that introduces a Graph Neural Network (GNN)-based learnable weather-effect aggregation module and employs a state-of-the-art time-series transformer as the temporal feature extractor for radio link failure prediction. The GNN module encodes the surrounding weather station data of each radio site, while the transformer module encodes historical radio and weather observation features. The proposed aggregation method of GenTrap can be integrated into any existing prediction model for better performance and generalizability. We evaluate GenTrap on two real-world datasets (rural and urban) with 2.6 million KPI data points and show that GenTrap offers a significantly higher F1-score of 0.93 for rural and 0.79 for urban, an increase of 29% and 21% respectively, compared to state-of-the-art LSTM-based solutions, while offering a 20% increase in generalization capability.
{"title":"A Generalized GNN-Transformer-Based Radio Link Failure Prediction Framework in 5G RAN","authors":"Kazi Hasan;Khaleda Papry;Thomas Trappenberg;Israat Haque","doi":"10.1109/TMLCN.2025.3575368","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3575368","url":null,"abstract":"Radio Link Failure (RLF) prediction system in Radio Access Networks (RANs) is critical for ensuring seamless communication and meeting the stringent requirements of high data rates, low latency, and improved reliability in 5G networks. However, weather conditions such as precipitation, humidity, temperature, and wind impact these communication links. Usually, historical radio link Key Performance Indicators (KPIs) and their surrounding weather station observations are utilized for building learning-based RLF prediction models. However, such models must be capable of learning the spatial weather context in a dynamic RAN and effectively encoding time series KPIs with the weather observation data. Existing work utilizes a heuristic-based and non-generalizable weather station aggregation method that uses Long Short-Term Memory (LSTM) for non-weighted sequence modeling. This paper fills the gap by proposing GenTrap, a novel RLF prediction framework that introduces a Graph Neural Network (GNN)-based learnable weather effect aggregation module and employs state-of-the-art time series transformer as the temporal feature extractor for radio link failure prediction. The GNN module encodes surrounding weather station data of each radio site while the transformer module encodes historical radio and weather observation features. The proposed aggregation method of GenTrap can be integrated into any existing prediction model to achieve better performance and generalizability. We evaluate GenTrap on two real-world datasets (rural and urban) with 2.6 million KPI data points and show that GenTrap offers a significantly higher F1-score of 0.93 for rural and 0.79 for urban, an increase of 29% and 21% respectively, compared to the state-of-the-art LSTM-based solutions while offering a 20% increased generalization capability.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"710-724"},"PeriodicalIF":0.0,"publicationDate":"2025-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11018489","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144308387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
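
The learnable aggregation module can be pictured as attention over the weather stations surrounding a radio site. The PyTorch sketch below illustrates that pattern under assumed feature sizes and an assumed distance-aware scoring network; GenTrap's actual GNN module is more elaborate.

```python
import torch
import torch.nn as nn

class WeatherAggregator(nn.Module):
    """Attention-style aggregation of surrounding weather-station features
    for one radio site; the scoring MLP and feature sizes are illustrative
    assumptions, not GenTrap's exact module."""
    def __init__(self, feat_dim=6, hidden=16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(feat_dim + 1, hidden),
                                   nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, stations, dist_km):
        # stations: (N, feat_dim) weather features; dist_km: (N,) distances
        z = torch.cat([stations, dist_km.unsqueeze(-1)], dim=-1)
        w = torch.softmax(self.score(z).squeeze(-1), dim=0)  # weight per station
        return (w.unsqueeze(-1) * stations).sum(dim=0)       # site-level context

agg = WeatherAggregator()
ctx = agg(torch.randn(4, 6), torch.tensor([2.0, 5.5, 9.1, 12.3]))
print(ctx.shape)  # torch.Size([6])
```
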
AERO: Adaptive Edge-Cloud Orchestration With a Sub-1K-Parameter Forecasting Model
Pub Date: 2025-03-20 | DOI: 10.1109/TMLCN.2025.3553100
Berend J. D. Gort;Godfrey M. Kibalya;Angelos Antonopoulos
Effective resource management in edge-cloud networks is crucial for meeting Quality of Service (QoS) requirements while minimizing operational costs. However, dynamic and fluctuating workloads pose significant challenges for accurate workload prediction and efficient resource allocation, particularly in resource-constrained edge environments. In this paper, we introduce AERO (Adaptive Edge-cloud Resource Orchestration), a novel lightweight forecasting model designed to address these challenges. AERO features an adaptive period detection mechanism that dynamically identifies dominant periodicities in multivariate workload data, allowing it to adjust to varying patterns and abrupt changes. With fewer than 1,000 parameters, AERO is highly suitable for deployment on edge devices with limited computational capacity. We formalize our approach through a comprehensive system model and extend an existing simulation framework with predictor modules to evaluate AERO’s performance in realistic cloud-edge environments. Our extensive evaluations on real-world cloud workload datasets demonstrate that AERO achieves comparable prediction accuracy to complex state-of-the-art models with millions of parameters, while significantly reducing model size and computational overhead. In addition, simulations show that AERO improves orchestration performance, reducing energy consumption and response times compared to existing proactive and reactive approaches. Our live deployment experiments further validate these findings, demonstrating that AERO consistently delivers superior performance. These results highlight AERO as an effective solution for improving resource management and reducing operational costs in dynamic cloud-edge environments.
{"title":"AERO: Adaptive Edge-Cloud Orchestration With a Sub-1K-Parameter Forecasting Model","authors":"Berend J. D. Gort;Godfrey M. Kibalya;Angelos Antonopoulos","doi":"10.1109/TMLCN.2025.3553100","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3553100","url":null,"abstract":"Effective resource management in edge-cloud networks is crucial for meeting Quality of Service (QoS) requirements while minimizing operational costs. However, dynamic and fluctuating workloads pose significant challenges for accurate workload prediction and efficient resource allocation, particularly in resource-constrained edge environments. In this paper, we introduce AERO (Adaptive Edge-cloud Resource Orchestration), a novel lightweight forecasting model designed to address these challenges. AERO features an adaptive period detection mechanism that dynamically identifies dominant periodicities in multivariate workload data, allowing it to adjust to varying patterns and abrupt changes. With fewer than 1,000 parameters, AERO is highly suitable for deployment on edge devices with limited computational capacity. We formalize our approach through a comprehensive system model and extend an existing simulation framework with predictor modules to evaluate AERO’s performance in realistic cloud-edge environments. Our extensive evaluations on real-world cloud workload datasets demonstrate that AERO achieves comparable prediction accuracy to complex state-of-the-art models with millions of parameters, while significantly reducing model size and computational overhead. In addition, simulations show that AERO improves orchestration performance, reducing energy consumption and response times compared to existing proactive and reactive approaches. Our live deployment experiments further validate these findings, demonstrating that AERO consistently delivers superior performance. These results highlight AERO as an effective solution for improving resource management and reducing operational costs in dynamic cloud-edge environments.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"463-478"},"PeriodicalIF":0.0,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10935743","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143740375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
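
Adaptive period detection can be illustrated with a spectral peak search: the dominant periodicity of a workload series is the inverse of the frequency carrying the most power. A NumPy sketch under that assumption follows; AERO's actual mechanism is not specified in the abstract.

```python
import numpy as np

def dominant_period(series):
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                        # remove the DC offset
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0)  # cycles per sample
    power[0] = 0.0                          # ignore the zero-frequency bin
    return 1.0 / freqs[np.argmax(power)]    # samples per cycle

t = np.arange(240)                          # e.g. hourly load over 10 days
load = 10 + 3 * np.sin(2 * np.pi * t / 24) + 0.3 * np.random.randn(240)
print(round(dominant_period(load)))         # ~24
```
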
Evaluation Metrics and Methods for Generative Models in the Wireless PHY Layer
Pub Date: 2025-03-19 | DOI: 10.1109/TMLCN.2025.3571026
Michael Baur;Nurettin Turan;Simon Wallner;Wolfgang Utschick
Generative models are typically evaluated by direct inspection of their generated samples, e.g., by visual inspection in the case of images. Other evaluation metrics, such as the Fréchet inception distance or maximum mean discrepancy, are difficult to interpret and lack physical motivation. These observations make evaluating generative models in the wireless PHY layer non-trivial. This work establishes a framework of evaluation metrics and methods for generative models applied to the wireless PHY layer. The proposed metrics and methods are motivated by wireless applications, which makes them interpretable and understandable for the wireless community. In particular, we propose a spectral efficiency analysis for validating the generated channel norms and a codebook fingerprinting method for validating the generated channel directions. Moreover, we propose an application cross-check that evaluates the generative model's samples by using them to train machine learning-based models for relevant downstream tasks. Our analysis is based on real-world measurement data and covers the Gaussian mixture model, variational autoencoder, diffusion model, and generative adversarial network. Our results indicate that relying solely on metrics like the maximum mean discrepancy produces inconsistent and uninterpretable evaluation outcomes. In contrast, the proposed metrics and methods exhibit consistent and explainable behavior.
{"title":"Evaluation Metrics and Methods for Generative Models in the Wireless PHY Layer","authors":"Michael Baur;Nurettin Turan;Simon Wallner;Wolfgang Utschick","doi":"10.1109/TMLCN.2025.3571026","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3571026","url":null,"abstract":"Generative models are typically evaluated by direct inspection of their generated samples, e.g., by visual inspection in the case of images. Further evaluation metrics like the Fréchet inception distance or maximum mean discrepancy are intricate to interpret and lack physical motivation. These observations make evaluating generative models in the wireless PHY layer non-trivial. This work establishes a framework consisting of evaluation metrics and methods for generative models applied to the wireless PHY layer. The proposed metrics and methods are motivated by wireless applications, facilitating interpretation and understandability for the wireless community. In particular, we propose a spectral efficiency analysis for validating the generated channel norms and a codebook fingerprinting method to validate the generated channel directions. Moreover, we propose an application cross-check to evaluate the generative model’s samples for training machine learning-based models in relevant downstream tasks. Our analysis is based on real-world measurement data and includes the Gaussian mixture model, variational autoencoder, diffusion model, and generative adversarial network. Our results indicate that solely relying on metrics like the maximum mean discrepancy produces inconsistent and uninterpretable evaluation outcomes. In contrast, the proposed metrics and methods exhibit consistent and explainable behavior.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"677-689"},"PeriodicalIF":0.0,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11007069","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144219745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
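
Of the proposed methods, codebook fingerprinting is easy to sketch: project each normalized channel vector onto a fixed beam codebook, record the best-matching beam index, and compare the index histograms of measured versus generated channels. The NumPy sketch below uses an assumed DFT codebook and total-variation distance; the paper's exact codebook and comparison metric may differ.

```python
import numpy as np

def codebook_fingerprint(H, n_beams=32):
    """Histogram of nearest-beam indices over a set of channel vectors."""
    n, m = H.shape                                   # n channels, m antennas
    F = np.exp(2j * np.pi * np.outer(np.arange(m), np.arange(n_beams))
               / n_beams) / np.sqrt(m)               # assumed DFT codebook
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    idx = np.argmax(np.abs(Hn.conj() @ F), axis=1)   # best-matching beam
    return np.bincount(idx, minlength=n_beams) / n

rng = np.random.default_rng(1)
real = rng.standard_normal((500, 8)) + 1j * rng.standard_normal((500, 8))
fake = rng.standard_normal((500, 8)) + 1j * rng.standard_normal((500, 8))
tv = 0.5 * np.abs(codebook_fingerprint(real) - codebook_fingerprint(fake)).sum()
print(f"total-variation distance: {tv:.3f}")         # small = similar directions
```
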
Closed-Loop Clustering-Based Global Bandwidth Prediction in Real-Time Video Streaming
Pub Date: 2025-03-18 | DOI: 10.1109/TMLCN.2025.3551689
Sepideh Afshar;Reza Razavi;Mohammad Moshirpour
Accurate throughput forecasting is essential for ensuring the seamless operation of Real-Time Communication (RTC) applications. Meeting this demand is particularly challenging over wireless access links, as they inherently exhibit fluctuating bandwidth. Ensuring an exceptional user Quality of Experience (QoE) in this scenario depends on accurately predicting the available bandwidth in the short term, since this prediction plays a pivotal role in guiding video rate adaptation. Yet current methodologies for short-term bandwidth prediction (SBP) struggle to perform adequately in dynamically changing real-world network environments and lack the generalizability to adapt across varied network conditions. Moreover, acquiring long and representative traces that capture real-world network complexity is challenging. To overcome these challenges, we propose closed-loop clustering-based Global Forecasting Models (GFMs) for SBP. Unlike local models, GFMs apply the same function to all traces, enabling cross-learning and leveraging relationships among traces to address the performance issues seen in current SBP algorithms. To address potential heterogeneity within the data and improve prediction quality, a cluster-wise GFM groups similar traces based on prediction accuracy. Finally, the proposed method is validated on real-world HSDPA 3G, NYC LTE, and Irish 5G datasets, demonstrating significant improvements in accuracy and generalizability.
{"title":"Closed-Loop Clustering-Based Global Bandwidth Prediction in Real-Time Video Streaming","authors":"Sepideh Afshar;Reza Razavi;Mohammad Moshirpour","doi":"10.1109/TMLCN.2025.3551689","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3551689","url":null,"abstract":"Accurate throughput forecasting is essential for ensuring the seamless operation of Real-Time Communication (RTC) applications. These demands for accurate throughput forecasting become particularly challenging when dealing with wireless access links, as they inherently exhibit fluctuating bandwidth. Ensuring an exceptional user Quality of Experience (QoE) in this scenario depends on accurately predicting available bandwidth in the short term since it plays a pivotal role in guiding video rate adaptation. Yet, current methodologies for short-term bandwidth prediction (SBP) struggle to perform adequately in dynamically changing real-world network environments and lack generalizability to adapt across varied network conditions. Also, acquiring long and representative traces that capture real-world network complexity is challenging. To overcome these challenges, we propose closed-loop clustering-based Global Forecasting Models (GFMs) for SBP. Unlike local models, GFMs apply the same function to all traces enabling cross-learning, and leveraging relationships among traces to address the performance issues seen in current SBP algorithms. To address potential heterogeneity within the data and improve prediction quality, a clustered-wise GFM is utilized to group similar traces based on prediction accuracy. Finally, the proposed method is validated using real-world datasets of HSDPA 3G, NYC LTE, and Irish 5G data demonstrating significant improvements in accuracy and generalizability.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"448-462"},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10929655","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143716486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
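
The closed loop works roughly like k-means over forecasting models: fit one global model per cluster, reassign each trace to the cluster whose model predicts it best, and repeat. The sketch below uses linear AR(p) forecasters purely for illustration; the paper's model family and clustering criterion are not specified in the abstract.

```python
import numpy as np

def lagged(trace, p=4):
    # AR(p) design matrix: row t = trace[t..t+p-1], target = trace[t+p]
    X = np.stack([trace[i:len(trace) - p + i] for i in range(p)], axis=1)
    return X, trace[p:]

def clustered_gfm(traces, k=2, p=4, iters=5, seed=0):
    rng = np.random.default_rng(seed)
    data = [lagged(np.asarray(t, float), p) for t in traces]
    assign = rng.integers(k, size=len(traces))
    for _ in range(iters):
        models = []
        for c in range(k):                     # one global model per cluster
            members = [d for d, a in zip(data, assign) if a == c]
            if not members:
                models.append(np.zeros(p)); continue
            Xc = np.vstack([X for X, _ in members])
            yc = np.hstack([y for _, y in members])
            models.append(np.linalg.lstsq(Xc, yc, rcond=None)[0])
        # closed loop: move each trace to the model that forecasts it best
        assign = np.array([int(np.argmin([np.mean((X @ w - y) ** 2) for w in models]))
                           for X, y in data])
    return models, assign

t = np.linspace(0, 20, 200)
fast, slow = np.sin(4 * t), np.sin(t)
_, groups = clustered_gfm([fast, 1.1 * fast, slow, 0.9 * slow])
print(groups)  # the two frequency families should end up in separate clusters
```
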
Randomized Quantization for Privacy in Resource Constrained Machine Learning at-the-Edge and Federated Learning
Pub Date: 2025-03-10 | DOI: 10.1109/TMLCN.2025.3550119
Ce Feng;Parv Venkitasubramaniam
The increasing adoption of machine learning at the edge (ML-at-the-edge) and federated learning (FL) presents a dual challenge: ensuring data privacy as well as addressing resource constraints such as limited computational power, memory, and communication bandwidth. Traditional approaches typically apply differentially private stochastic gradient descent (DP-SGD) to preserve privacy, followed by quantization techniques as a post-processing step to reduce model size and communication overhead. However, this sequential framework introduces inherent drawbacks, as quantization alone lacks privacy guarantees and often introduces errors that degrade model performance. In this work, we propose randomized quantization as an integrated solution to address these dual challenges by embedding randomness directly into the quantization process. This approach enhances privacy while simultaneously reducing communication and computational overhead. To achieve this, we introduce Randomized Quantizer Projection Stochastic Gradient Descent (RQP-SGD), a method designed for ML-at-the-edge that embeds DP-SGD within a randomized quantization-based projection during model training. For federated learning, we develop Gaussian Sampling Quantization (GSQ), which integrates discrete Gaussian sampling into the quantization process to ensure local differential privacy (LDP). Unlike conventional methods that rely on Gaussian noise addition, GSQ achieves privacy through discrete Gaussian sampling while improving communication efficiency and model utility across distributed systems. Through rigorous theoretical analysis and extensive experiments on benchmark datasets, we demonstrate that these methods significantly enhance the utility-privacy trade-off and computational efficiency in both ML-at-the-edge and FL systems. RQP-SGD is evaluated on MNIST and the Breast Cancer Diagnostic dataset, showing an average 10.62% utility improvement over the deterministic quantization-based projected DP-SGD while maintaining (1.0, 0)-DP. In federated learning tasks, GSQ-FL improves accuracy by an average 11.52% over DP-FedPAQ across MNIST and FashionMNIST under non-IID conditions. Additionally, GSQ-FL outperforms DP-FedPAQ by 16.54% on CIFAR-10 and 8.7% on FEMNIST.
{"title":"Randomized Quantization for Privacy in Resource Constrained Machine Learning at-the-Edge and Federated Learning","authors":"Ce Feng;Parv Venkitasubramaniam","doi":"10.1109/TMLCN.2025.3550119","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3550119","url":null,"abstract":"The increasing adoption of machine learning at the edge (ML-at-the-edge) and federated learning (FL) presents a dual challenge: ensuring data privacy as well as addressing resource constraints such as limited computational power, memory, and communication bandwidth. Traditional approaches typically apply differentially private stochastic gradient descent (DP-SGD) to preserve privacy, followed by quantization techniques as a post-processing step to reduce model size and communication overhead. However, this sequential framework introduces inherent drawbacks, as quantization alone lacks privacy guarantees and often introduces errors that degrade model performance. In this work, we propose randomized quantization as an integrated solution to address these dual challenges by embedding randomness directly into the quantization process. This approach enhances privacy while simultaneously reducing communication and computational overhead. To achieve this, we introduce Randomized Quantizer Projection Stochastic Gradient Descent (RQP-SGD), a method designed for ML-at-the-edge that embeds DP-SGD within a randomized quantization-based projection during model training. For federated learning, we develop Gaussian Sampling Quantization (GSQ), which integrates discrete Gaussian sampling into the quantization process to ensure local differential privacy (LDP). Unlike conventional methods that rely on Gaussian noise addition, GSQ achieves privacy through discrete Gaussian sampling while improving communication efficiency and model utility across distributed systems. Through rigorous theoretical analysis and extensive experiments on benchmark datasets, we demonstrate that these methods significantly enhance the utility-privacy trade-off and computational efficiency in both ML-at-the-edge and FL systems. RQP-SGD is evaluated on MNIST and the Breast Cancer Diagnostic dataset, showing an average 10.62% utility improvement over the deterministic quantization-based projected DP-SGD while maintaining (1.0, 0)-DP. In federated learning tasks, GSQ-FL improves accuracy by an average 11.52% over DP-FedPAQ across MNIST and FashionMNIST under non-IID conditions. Additionally, GSQ-FL outperforms DP-FedPAQ by 16.54% on CIFAR-10 and 8.7% on FEMNIST.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"395-419"},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10919124","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143645337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
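
The building block shared by these methods is quantization whose randomness is part of the mechanism rather than post-processing noise. A minimal NumPy sketch of unbiased stochastic rounding onto a uniform grid follows; the grid step is an assumed value, and the DP accounting and the discrete Gaussian sampling of GSQ are beyond this sketch.

```python
import numpy as np

def randomized_quantize(w, step=0.05, rng=None):
    """Unbiased stochastic rounding onto a uniform grid: each value rounds
    up with probability equal to its fractional offset, so E[q] = w."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(w, float) / step
    low = np.floor(scaled)
    p_up = scaled - low                          # fractional part in [0, 1)
    return (low + (rng.random(np.shape(w)) < p_up)) * step

rng = np.random.default_rng(0)
w = np.array([0.123, -0.044, 0.071])
samples = np.stack([randomized_quantize(w, rng=rng) for _ in range(20000)])
print(samples.mean(axis=0).round(3))             # ~ [ 0.123 -0.044  0.071]
```
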
Optimal Stopping Theory-Based Online Node Selection in IoT Networks for Multi-Parameter Federated Learning
Pub Date: 2025-03-06 | DOI: 10.1109/TMLCN.2025.3567370
Seda Dogan-Tusha;Faissal El Bouanani;Marwa Qaraqe
Federated Learning (FL) has attracted the interest of researchers since it curbs inefficient resource utilization by developing a global learning model from local model parameters (LMP). This study introduces a novel optimal stopping theory (OST)-based online node selection scheme for a low-complexity, multi-parameter FL procedure in IoT networks. Global model accuracy (GMA) in FL depends on the accuracy of the LMP received by the central entity (CE). It is therefore essential to choose trustworthy nodes that guarantee a certain level of global model accuracy without inducing additional system complexity. For this reason, the proposed technique utilizes the secretary problem (SP) approach as an OST method to perform node selection, considering both the received signal strength (RSS) and the local model accuracy (LMA) of the available nodes. By leveraging the SP, the proposed technique employs a stopping rule that maximizes the probability of selecting the best-quality node, thereby avoiding testing all candidate nodes. To this end, this work provides a mathematical framework for maximizing the selection probability of the best node among the candidates. Specifically, the developed framework is used to calculate the weighting coefficients of the RSS and LMA that define node quality. Comprehensive analysis and simulation results illustrate that the proposed OST-based technique outperforms state-of-the-art methods, including random node selection and offline node selection (exhaustive search), in terms of both GMA and computational complexity.
{"title":"Optimal Stopping Theory-Based Online Node Selection in IoT Networks for Multi-Parameter Federated Learning","authors":"Seda Dogan-Tusha;Faissal El Bouanani;Marwa Qaraqe","doi":"10.1109/TMLCN.2025.3567370","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3567370","url":null,"abstract":"Federated Learning (FL) has attracted the interest of researchers since it hinders inefficient resource utilization by developing a global learning model based on local model parameters (LMP). This study introduces a novel optimal stopping theory (OST) based online node selection scheme for low complex and multi-parameter FL procedure in IoT networks. Global model accuracy (GMA) in FL depends on the accuracy of the LMP received by the central entity (CE). It is therefore essential to choose trusty nodes to guarantee a certain level of global model accuracy without inducing additional system complexity. For this reason, the proposed technique in this study utilizes the secretary problem (SP) approach as an OST to perform node selection considering both received signal strength (RSS) and local model accuracy (LMA) of available nodes. By leveraging the SP, the proposed technique employs a stopping rule that maximizes the probability of selecting the node with the best quality, and thereby avoids testing all candidate nodes. To this end, this work provides a mathematical framework for maximizing the selection probability of the best node amongst candidate nodes. Specifically, the developed framework has been used to calculate the weighting coefficients of the RSS and LMA to define the node quality. Comprehensive analysis and simulation results illustrate that the OST based proposed technique outperforms state-of-the-art methods including the random node selection and the offline node selection (exhaustive search methods) in terms of GMA and computational complexity, respectively.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"659-676"},"PeriodicalIF":0.0,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10988901","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144100011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
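
The stopping rule is the classical secretary-problem recipe: observe roughly n/e candidates without committing, then accept the first node that beats everything seen so far. The sketch below scores each node as a weighted sum of RSS and LMA as the abstract describes; the weight alpha and the pool-wide min-max normalization are simplifying assumptions (the paper derives the weighting coefficients, and a truly online variant would normalize against known bounds).

```python
import numpy as np

def ost_select(candidates, alpha=0.6):
    """1/e-rule online selection over (RSS, LMA) candidate pairs."""
    rss = np.array([c[0] for c in candidates], float)
    lma = np.array([c[1] for c in candidates], float)
    norm = lambda v: (v - v.min()) / (v.max() - v.min() + 1e-12)
    q = alpha * norm(rss) + (1 - alpha) * norm(lma)   # node quality
    cutoff = max(1, int(len(q) / np.e))               # observation phase
    best_seen = q[:cutoff].max()
    for i in range(cutoff, len(q)):
        if q[i] > best_seen:
            return i                                  # online commitment
    return len(q) - 1                                 # forced to take the last one

rng = np.random.default_rng(3)
pool = list(zip(rng.uniform(-90, -40, 20),            # RSS in dBm
                rng.uniform(0.5, 0.99, 20)))          # local model accuracy
print(ost_select(pool))
```
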
Paths Optimization by Jointing Link Management and Channel Estimation Using Variational Autoencoder With Attention for IRS-MIMO Systems
Pub Date: 2025-03-03 | DOI: 10.1109/TMLCN.2025.3547689
Meng-Hsun Wu;Hong-Yunn Chen;Ta-Wei Yang;Chih-Chuan Hsu;Chih-Wei Huang;Cheng-Fu Chou
In massive MIMO systems, achieving optimal end-to-end transmission encompasses various aspects such as power control, modulation schemes, path selection, and accurate channel estimation. Nonetheless, optimizing resource allocation remains a significant challenge. In path selection, the direct link is the straightforward path between the transmitter and the receiver, whereas the indirect link involves reflections, diffraction, or scattering, often caused by interactions with objects or obstacles. Relying exclusively on one type of link can lead to suboptimal and limited performance. Link management (LM) is emerging as a viable solution, and accurate channel estimation provides the information needed to make informed decisions about transmission parameters. In this paper, we study LM and channel estimation that flexibly adjust the transmission ratio of direct and indirect links to improve generalization, using a denoising variational autoencoder with attention modules (DVAE-ATT) to enhance the sum rate. Our experiments show significant improvements in IRS-assisted millimeter-wave MIMO systems. Incorporating LM increased the sum rate and reduced the MSE by approximately 9%. Variational autoencoders (VAEs) outperformed traditional autoencoders in the spatial domain, as confirmed by heatmap analysis. Additionally, our investigation of DVAE-ATT reveals notable differences in the temporal domain with and without attention mechanisms. Finally, we analyze performance across varying numbers of users and ranges: at distances of 5 m, 15 m, 25 m, and 35 m, performance improvements averaged 6%, 11%, 16%, and 22%, respectively.
{"title":"Paths Optimization by Jointing Link Management and Channel Estimation Using Variational Autoencoder With Attention for IRS-MIMO Systems","authors":"Meng-Hsun Wu;Hong-Yunn Chen;Ta-Wei Yang;Chih-Chuan Hsu;Chih-Wei Huang;Cheng-Fu Chou","doi":"10.1109/TMLCN.2025.3547689","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3547689","url":null,"abstract":"In massive MIMO systems, achieving optimal end-to-end transmission encompasses various aspects such as power control, modulation schemes, path selection, and accurate channel estimation. Nonetheless, optimizing resource allocation remains a significant challenge. In path selection, the direct link is a straightforward link between the transmitter and the receiver. On the other hand, the indirect link involves reflections, diffraction, or scattering, often due to interactions with objects or obstacles. Relying exclusively on one type of link can lead to suboptimal and limited performance. Link management (LM) is emerging as a viable solution, and accurate channel estimation provides essential information to make informed decisions about transmission parameters. In this paper, we study LM and channel estimation that flexibly adjust the transmission ratio of direct and indirect links to improve generalization, using a denoising variational autoencoder with attention modules (DVAE-ATT) to enhance sum rate. Our experiments show significant improvements in IRS-assisted millimeter-wave MIMO systems. Incorporating LM increased the sum rate and reduced MSE by approximately 9%. Variational autoencoders (VAE) outperformed traditional autoencoders in the spatial domain, as confirmed by heatmap analysis. Additionally, our investigation of DVAE-ATT reveals notable differences in the temporal domain with and without attention mechanisms. Finally, we analyze performance across varying numbers of users and ranges. Across various distances—5m, 15m, 25m, and 35m—performance improvements averaged 6%, 11%, 16%, and 22%, respectively.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"381-394"},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10909334","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143583165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
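
At the core of DVAE-ATT is a denoising variational autoencoder: encode the noisy input to a latent Gaussian, sample it with the reparameterization trick, and decode a clean estimate. The PyTorch sketch below shows that core under assumed layer sizes; the attention modules and the IRS-MIMO link-management logic are omitted.

```python
import torch
import torch.nn as nn

class DenoisingVAE(nn.Module):
    """Minimal denoising VAE for a flattened channel/CSI vector."""
    def __init__(self, dim=64, z=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 2 * z))
        self.dec = nn.Sequential(nn.Linear(z, 32), nn.ReLU(), nn.Linear(32, dim))

    def forward(self, x_noisy):
        mu, logvar = self.enc(x_noisy).chunk(2, dim=-1)
        zs = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return self.dec(zs), mu, logvar

def vae_loss(x_clean, recon, mu, logvar):
    rec = ((recon - x_clean) ** 2).sum(dim=-1).mean()             # denoising term
    kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    return rec + kld

model = DenoisingVAE()
x = torch.randn(16, 64)                                           # clean targets
recon, mu, logvar = model(x + 0.1 * torch.randn_like(x))          # noisy inputs
print(vae_loss(x, recon, mu, logvar).item())
```
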