Pub Date: 2025-06-11 | DOI: 10.1109/TMLCN.2025.3578577
Ghazi Gharsallah;Georges Kaddoum
The evolution toward sixth-generation (6G) wireless communications introduces unprecedented demands for ultra-reliable low-latency communication (URLLC) in vehicle-to-everything (V2X) networks, where fast-moving vehicles and the use of high-frequency bands make it challenging to acquire the channel state information (CSI) needed to maintain high-quality connectivity. Traditional methods for estimating channel coefficients rely on pilot symbols transmitted during each coherence interval; however, the combination of high mobility and high frequencies significantly reduces coherence times, necessitating substantial bandwidth for pilot transmission. Consequently, these conventional approaches are becoming inadequate, potentially causing inefficient channel estimation and degraded throughput in such dynamic environments. This paper presents a novel multimodal collaborative perception framework for dynamic channel prediction in 6G V2X networks, integrating LiDAR data to enhance the accuracy and robustness of channel predictions. Our approach fuses information from connected agents and infrastructure, enabling a more comprehensive understanding of the dynamic vehicular environment. A key innovation in our framework is the prediction horizon optimization (PHO) component, which dynamically adjusts the prediction interval based on real-time evaluations of channel conditions, ensuring that predictions remain relevant and accurate. Extensive simulations using the MVX (Multimodal V2X) high-fidelity co-simulation framework demonstrate the effectiveness of our solution. Compared to two baselines, a classical LS-LMMSE approach and a wireless-only model that relies solely on channel measurements, our framework achieves up to a 30.82% reduction in mean squared error (MSE) and a 32.76% increase in goodput. These gains underscore the efficiency of the PHO component in reducing prediction errors, maintaining low bit error rates, and meeting the stringent requirements of 6G V2X communications. Consequently, our framework establishes a new benchmark for AI-driven channel prediction in next-generation wireless networks, particularly in challenging urban and rural scenarios.
{"title":"Multimodal Collaborative Perception for Dynamic Channel Prediction in 6G V2X Networks","authors":"Ghazi Gharsallah;Georges Kaddoum","doi":"10.1109/TMLCN.2025.3578577","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3578577","url":null,"abstract":"The evolution toward sixth-generation (6G) wireless communications introduces unprecedented demands for ultra-reliable low-latency communication (URLLC) in vehicle-to-everything (V2X) networks, where fast-moving vehicles and the use of high-frequency bands make it challenging to acquire the channel state information to maintain high-quality connectivity. Traditional methods for estimating channel coefficients rely on pilot symbols transmitted during each coherence interval; however, the combination of high mobility and high frequencies significantly reduces the coherence times, necessitating substantial bandwidth for pilot transmission. Consequently, these conventional approaches are becoming inadequate, potentially causing inefficient channel estimation and degraded throughput in such dynamic environments. This paper presents a novel multimodal collaborative perception framework for dynamic channel prediction in 6G V2X networks, integrating LiDAR data to enhance the accuracy and robustness of channel predictions. Our approach synergizes information from connected agents and infrastructure, enabling a more comprehensive understanding of the dynamic vehicular environment. A key innovation in our framework is the prediction horizon optimization (PHO) component, which dynamically adjusts the prediction interval based on real-time evaluations of channel conditions, ensuring that predictions remain relevant and accurate. Extensive simulations using the MVX (Multimodal V2X) high-fidelity co-simulation framework demonstrate the effectiveness of our solution. Compared to baseline methods—namely, a classical LS-LMMSE approach and a wireless-based model that solely relies on channel measurements—our framework achieves up to a 30.82% reduction in mean squared error (MSE) and a 32.76% increase in goodput. These gains underscore the efficiency of the PHO component in reducing prediction errors, maintaining low bit error rates, and meeting the stringent requirements of 6G V2X communications. Consequently, our framework establishes a new benchmark for AI-driven channel prediction in next-generation wireless networks, particularly in challenging urban and rural scenarios.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"725-743"},"PeriodicalIF":0.0,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11030661","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144323233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-04 | DOI: 10.1109/TMLCN.2025.3576727
Mazene Ameur;Bouziane Brik;Adlen Ksentini
The advent of 6G networks heralds a transformative shift in communication technology, with Artificial Intelligence (AI) and Machine Learning (ML) forming the backbone of its architecture and operations. However, the dynamic nature of 6G environments renders these models vulnerable to performance degradation due to model drift. Existing drift detection approaches, despite advancements, often fail to address the diverse and complex types of drift encountered in telecommunications, particularly in time-series data. To bridge this gap, we propose, for the first time, a drift detection framework featuring a Dual Self-Attention AutoEncoder (DSA-AE) designed to handle all major manifestations of drift in 6G networks, including data, label, and concept drift. This architectural design leverages the autoencoder's reconstruction capabilities to monitor both input features and target variables, effectively detecting data and label drift. Additionally, its dual self-attention mechanisms, comprising feature and temporal attention blocks, capture spatiotemporal fluctuations, addressing concept drift. Extensive evaluations across three diverse telecommunications datasets (two time-series and one non-time-series) demonstrate that our framework achieves substantial advancements over state-of-the-art methods, delivering over a 13.6% improvement in drift detection accuracy and a remarkable 94.7% reduction in detection latency. By balancing higher accuracy with lower latency, this approach offers a robust and efficient solution for model drift detection in the dynamic and complex landscape of 6G networks.
{"title":"Dual Self-Attention is What You Need for Model Drift Detection in 6G Networks","authors":"Mazene Ameur;Bouziane Brik;Adlen Ksentini","doi":"10.1109/TMLCN.2025.3576727","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3576727","url":null,"abstract":"The advent of 6G networks heralds a transformative shift in communication technology, with Artificial Intelligence (AI) and Machine Learning (ML) forming the backbone of its architecture and operations. However, the dynamic nature of 6G environments renders these models vulnerable to performance degradation due to model drift. Existing drift detection approaches, despite advancements, often fail to address the diverse and complex types of drift encountered in telecommunications, particularly in time-series data. To bridge this gap, we propose, for the first time, a novel drift detection framework featuring a Dual Self-Attention AutoEncoder (DSA-AE) designed to handle all major manifestations of drift in 6G networks, including data, label, and concept drift. This architectural design leverages the autoencoder’s reconstruction capabilities to monitor both input features and target variables, effectively detecting data and label drift. Additionally, its dual self-attention mechanisms comprising feature and temporal attention blocks capture spatiotemporal fluctuations, addressing concept drift. Extensive evaluations across three diverse telecommunications datasets (two time-series and one non-time-series) demonstrate that our framework achieves substantial advancements over state-of-the-art methods, delivering over a 13.6% improvement in drift detection accuracy and a remarkable 94.7% reduction in detection latency. By balancing higher accuracy with lower latency, this approach offers a robust and efficient solution for model drift detection in the dynamic and complex landscape of 6G networks.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"690-709"},"PeriodicalIF":0.0,"publicationDate":"2025-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11024186","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144308388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-28 | DOI: 10.1109/TMLCN.2025.3564912
Mohammed Ashfaaq M. Farzaan;Mohamed Chahine Ghanem;Ayman El-Hajjar;Deepthi N. Ratnayake
The growing complexity and frequency of cyber threats in cloud environments call for innovative and automated solutions to maintain effective and efficient incident response. This study tackles this urgent issue by introducing a cutting-edge AI-driven cyber incident response system specifically designed for cloud platforms. Unlike conventional methods, our system employs advanced Artificial Intelligence (AI) and Machine Learning (ML) techniques to provide accurate, scalable, and seamless integration with platforms like Google Cloud and Microsoft Azure. Key features include an automated pipeline that integrates Network Traffic Classification, Web Intrusion Detection, and Post-Incident Malware Analysis into a cohesive framework implemented via a Flask application. To validate the effectiveness of the system, we tested it using three prominent datasets: NSL-KDD, UNSW-NB15, and CIC-IDS-2017. The Random Forest model achieved accuracies of 90%, 75%, and 99% on these datasets, respectively, for the classification of network traffic, while it attained 96% precision for malware analysis. Furthermore, a neural network-based malware analysis model achieved an accuracy of 99%. By incorporating deep learning models with cloud-based GPUs and TPUs, we demonstrate how to meet high computational demands without compromising efficiency. In addition, containerisation ensures that the system is both scalable and portable across a wide range of cloud environments. By reducing incident response times, lowering operational risks, and offering cost-effective deployment, our system equips organizations with a robust tool to proactively safeguard their cloud infrastructure. This innovative integration of AI and containerised architecture not only sets a new benchmark in threat detection but also significantly advances the state-of-the-art in cybersecurity, promising transformative benefits for critical industries. This research makes a significant contribution to the field of AI-powered cybersecurity by showcasing the powerful combination of AI models and cloud infrastructure to fill critical gaps in cyber incident response. Our findings emphasise the superior performance of Random Forest and deep learning models in accurately identifying and classifying cyber threats, setting a new standard for real-world deployment in cloud environments.
{"title":"AI-Powered System for an Efficient and Effective Cyber Incidents Detection and Response in Cloud Environments","authors":"Mohammed Ashfaaq M. Farzaan;Mohamed Chahine Ghanem;Ayman El-Hajjar;Deepthi N. Ratnayake","doi":"10.1109/TMLCN.2025.3564912","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3564912","url":null,"abstract":"The growing complexity and frequency of cyber threats in cloud environments call for innovative and automated solutions to maintain effective and efficient incident response. This study tackles this urgent issue by introducing a cutting-edge AI-driven cyber incident response system specifically designed for cloud platforms. Unlike conventional methods, our system employs advanced Artificial Intelligence (AI) and Machine Learning (ML) techniques to provide accurate, scalable, and seamless integration with platforms like Google Cloud and Microsoft Azure. Key features include an automated pipeline that integrates Network Traffic Classification, Web Intrusion Detection, and Post-Incident Malware Analysis into a cohesive framework implemented via a Flask application. To validate the effectiveness of the system, we tested it using three prominent datasets: NSL-KDD, UNSW-NB15, and CIC-IDS-2017. The Random Forest model achieved accuracies of 90%, 75%, and 99%, respectively, for the classification of network traffic, while it attained 96% precision for malware analysis. Furthermore, a neural network-based malware analysis model set a new benchmark with an impressive accuracy rate of 99%. By incorporating deep learning models with cloud-based GPUs and TPUs, we demonstrate how to meet high computational demands without compromising efficiency. Furthermore, containerisation ensures that the system is both scalable and portable across a wide range of cloud environments. By reducing incident response times, lowering operational risks, and offering cost-effective deployment, our system equips organizations with a robust tool to proactively safeguard their cloud infrastructure. This innovative integration of AI and containerised architecture not only sets a new benchmark in threat detection but also significantly advances the state-of-the-art in cybersecurity, promising transformative benefits for critical industries. This research makes a significant contribution to the field of AI-powered cybersecurity by showcasing the powerful combination of AI models and cloud infrastructure to fill critical gaps in cyber incident response. Our findings emphasise the superior performance of Random Forest and deep learning models in accurately identifying and classifying cyber threats, setting a new standard for real-world deployment in cloud environments.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"623-643"},"PeriodicalIF":0.0,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10979487","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143943965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-28 | DOI: 10.1109/TMLCN.2025.3564907
Mehdi Letafati;Seyyed Amirhossein Ameli Kalkhoran;Ecenaz Erdemir;Babak Hossein Khalaj;Hamid Behroozi;Deniz Gündüz
Deep neural network (DNN)-based joint source and channel coding is proposed for privacy-aware end-to-end image transmission against multiple eavesdroppers. Both scenarios of colluding and non-colluding eavesdroppers are considered. Unlike prior works that assume perfectly known and independent identically distributed (i.i.d.) source and channel statistics, the proposed scheme operates under unknown and non-i.i.d. conditions, making it more applicable to real-world scenarios. The goal is to transmit images with minimum distortion while simultaneously preventing eavesdroppers from inferring certain private attributes of the images. Simultaneously generalizing the ideas of the privacy funnel and wiretap coding, a multi-objective optimization framework is formulated that characterizes the trade-off between image reconstruction quality and information leakage to eavesdroppers, taking into account the structural similarity index (SSIM) to improve the perceptual quality of image reconstruction. Extensive experiments on the CIFAR-10 and CelebA datasets, along with ablation studies, demonstrate significant performance improvements in terms of SSIM, adversarial accuracy, and mutual information leakage compared to benchmarks. Experiments show that the proposed scheme restrains adversarially trained eavesdroppers from intercepting privatized data, both when the eavesdroppers target a common secret and when each eavesdropper is interested in a different secret. Furthermore, useful insights on the privacy-utility trade-off are also provided.
{"title":"Deep Joint Source Channel Coding for Privacy-Aware End-to-End Image Transmission","authors":"Mehdi Letafati;Seyyed Amirhossein Ameli Kalkhoran;Ecenaz Erdemir;Babak Hossein Khalaj;Hamid Behroozi;Deniz Gündüz","doi":"10.1109/TMLCN.2025.3564907","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3564907","url":null,"abstract":"Deep neural network (DNN)-based joint source and channel coding is proposed for privacy-aware end-to-end image transmission against multiple eavesdroppers. Both scenarios of colluding and non-colluding eavesdroppers are considered. Unlike prior works that assume perfectly known and independent identically distributed (i.i.d.) source and channel statistics, the proposed scheme operates under unknown and non-i.i.d. conditions, making it more applicable to real-world scenarios. The goal is to transmit images with minimum distortion, while simultaneously preventing eavesdroppers from inferring certain private attributes of images. Simultaneously generalizing the ideas of privacy funnel and wiretap coding, a multi-objective optimization framework is expressed that characterizes the trade-off between image reconstruction quality and information leakage to eavesdroppers, taking into account the structural similarity index (SSIM) for improving the perceptual quality of image reconstruction. Extensive experiments on the CIFAR-10 and CelebA, along with ablation studies, demonstrate significant performance improvements in terms of SSIM, adversarial accuracy, and the mutual information leakage compared to benchmarks. Experiments show that the proposed scheme restrains the adversarially-trained eavesdroppers from intercepting privatized data for both cases of eavesdropping a common secret, as well as the case in which eavesdroppers are interested in different secrets. Furthermore, useful insights on the privacy-utility trade-off are also provided.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"568-584"},"PeriodicalIF":0.0,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10979442","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-28 | DOI: 10.1109/TMLCN.2025.3564587
Ying-Dar Lin;Yi-Hsin Lu;Ren-Hung Hwang;Yuan-Cheng Lai;Didik Sudyana;Wei-Bin Lee
Existing Intrusion Detection Systems (IDSs) rely on pre-trained models that struggle to keep pace with the evolving nature of network threats, as they cannot detect new types of network attacks until they are updated. Cyber Threat Intelligence (CTI) is analyzed by professional teams and shared among organizations for collective defense. However, due to its diverse forms, existing research often analyzes only reports and extracts Indicators of Compromise (IoCs) to create an IoC database for configuring blocklists, a method that attackers can easily circumvent. Our study introduces a unified solution named Dynamic IDS with CTI Integrated (DICI), which focuses on enhancing IDS capabilities by integrating continuously updated CTI. This approach involves two key AI models: the first serves as the IDS Model, detecting network traffic, while the second, the CTI Transfer Model, analyzes and transforms CTI into actionable training data. The CTI Transfer Model continuously converts CTI information into training data for the IDS, enabling dynamic model updates that adapt to emerging threats. Our experimental results show that DICI significantly enhances detection capabilities. Integrating the IDS Model with CTI in DICI improved the F1 score by 9.29% compared to the system without CTI, allowing for more effective detection of complex threats such as port obfuscation and port hopping attacks. Furthermore, within the CTI Transfer Model, using an ML-based method led to a 30.92% F1-score improvement over heuristic methods. These results confirm that continuously integrating CTI within DICI substantially boosts its ability to detect and respond to new types of cyber attacks.
{"title":"Evolving ML-Based Intrusion Detection: Cyber Threat Intelligence for Dynamic Model Updates","authors":"Ying-Dar Lin;Yi-Hsin Lu;Ren-Hung Hwang;Yuan-Cheng Lai;Didik Sudyana;Wei-Bin Lee","doi":"10.1109/TMLCN.2025.3564587","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3564587","url":null,"abstract":"Existing Intrusion Detection System (IDS) relies on pre-trained models that struggle to keep pace with the evolving nature of network threats, as they cannot detect new types of network attacks until updated. Cyber Threat Intelligence (CTI) is analyzed by professional teams and shared among organizations for collective defense. However, due to its diverse forms, existing research often only analyzes reports and extracts Indicators of Compromise (IoC) to create an IoC Database for configuring blocklists, a method that attackers can easily circumvent. Our study introduces a unified solution named Dynamic IDS with CTI Integrated (DICI), which focuses on enhancing IDS capabilities by integrating continuously updated CTI. This approach involves two key AI models: the first serves as the IDS Model, detecting network traffic, while the second, the CTI Transfer Model, analyzes and transforms CTI into actionable training data. The CTI Transfer Model continuously converts CTI information into training data for IDS, enabling dynamic model updates that improve and adapt to emerging threats dynamically. Our experimental results show that DICI significantly enhances detection capabilities. Integrating the IDS Model with CTI in DICI improved the F1 score by 9.29% compared to the system without CTI, allowing for more effective detection of complex threats such as port obfuscation and port hopping attacks. Furthermore, within the CTI Transfer Model, involving the ML method led to a 30.92% F1 score improvement over heuristic methods. These results confirm that continuously integrating CTI within DICI substantially boosts its ability to detect and respond to new types of cyber attacks.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"605-622"},"PeriodicalIF":0.0,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10978877","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144073128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-25 | DOI: 10.1109/TMLCN.2025.3562808
Mattia Fabiani;Asmaa Abdallah;Abdulkadir Celik;Omer Haliloglu;Ahmed M. Eltawil
Cell-free massive multiple-input multiple-output (CF-mMIMO) surmounts conventional cellular network limitations in terms of coverage, capacity, and interference management. This paper introduces a novel unsupervised learning framework for the downlink (DL) power allocation problem in CF-mMIMO networks, utilizing only large-scale fading (LSF) coefficients as input rather than the hard-to-obtain exact user location or channel state information (CSI). Both centralized and distributed CF-mMIMO power control learning frameworks are explored, with deep neural networks (DNNs) trained to estimate power coefficients while addressing the constraints of pilot contamination and power budgets. For both learning frameworks, the proposed approach is used to maximize three well-known power control objectives under maximum-ratio and regularized zero-forcing precoding schemes: 1) sum of spectral efficiency, 2) minimum signal-to-interference-plus-noise ratio (SINR) for max-min fairness, and 3) product of SINRs for proportional fairness, for each of which customized loss functions are formulated. The proposed unsupervised learning approach circumvents the arduous computation of training labels typically required by supervised learning methods, bypassing conventional complex optimization methods and heuristic methodologies. Furthermore, an LSF-based radio unit (RU) selection algorithm is employed to activate only the contributing RUs, allowing efficient utilization of network resources. Simulation results demonstrate that our proposed unsupervised learning framework outperforms existing supervised learning and heuristic solutions, showcasing an improvement of up to 20% in spectral efficiency and more than 40% in energy efficiency compared to state-of-the-art supervised learning counterparts.
{"title":"Unsupervised Learning for Distributed Downlink Power Allocation in Cell-Free mMIMO Networks","authors":"Mattia Fabiani;Asmaa Abdallah;Abdulkadir Celik;Omer Haliloglu;Ahmed M. Eltawil","doi":"10.1109/TMLCN.2025.3562808","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3562808","url":null,"abstract":"Cell-free massive multiple-input multiple-output (CF-mMIMO) surmounts conventional cellular network limitations in terms of coverage, capacity, and interference management. This paper aims to introduce a novel unsupervised learning framework for the downlink (DL) power allocation problem in CF-mMIMO networks, utilizing only large-scale fading (LSF) coefficients as input, rather than the hard-to-obtain exact user location or channel state information (CSI). Both centralized and distributed CF-mMIMO power control learning frameworks are explored, with deep neural networks (DNNs) trained to estimate power coefficients while addressing the constraints of pilot contamination and power budgets. For both learning frameworks, the proposed approach is utilized to maximize three well-known power control objectives under maximum-ratio and regularized zero-forcing precoding schemes: 1) sum of spectral efficiency, 2) minimum signal-to-interference-plus-noise ratio (SINR) for max-min fairness, and 3) product of SINRs for proportional fairness, for each of which customized loss functions are formulated. The proposed unsupervised learning approach circumvents the arduous task of training data computations, typically required in supervised learning methods, bypassing the use of conventional complex optimization methods and heuristic methodologies. Furthermore, an LSF-based radio unit (RU) selection algorithm is employed to activate only the contributing RUs, allowing efficient utilization of network resources. Simulation results demonstrate that our proposed unsupervised learning framework outperforms existing supervised learning and heuristic solutions, showcasing an improvement of up to 20% in spectral efficiency and more than 40% in terms of energy efficiency compared to state-of-the-art supervised learning counterparts.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"644-658"},"PeriodicalIF":0.0,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10976604","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144099971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-24 | DOI: 10.1109/TMLCN.2025.3564239
Hendrik Schippers;Melina Geis;Stefan Böcker;Christian Wietfeld
Future mobile use cases such as teleoperation rely on highly available mobile networks. Due to the nature of the mobile access channel and the inherent competition, availability may be restricted in certain, initially unknown, areas or timespans. To address this challenge, we automated mobile network data acquisition using a smartphone application and dedicated hardware, providing detailed connectivity insights. DoNext, a massive dataset of 4G and 5G mobile network data and active measurements, was collected over two years in Dortmund, Germany. To the best of our knowledge, it is the most extensive openly available mobile dataset. Machine learning methods were applied to the data to demonstrate its utility in key performance indicator prediction. Radio environment maps, generated through spatial aggregation, facilitate in-advance key performance indicator predictions and application planning across different locations. We also showcase signal strength modeling with transfer learning for arbitrary locations in individual mobile network cells, covering private and restricted areas. By openly providing the dataset, we aim to enable other researchers to develop and evaluate their machine-learning methods without conducting extensive measurement campaigns.
{"title":"DoNext: An Open-Access Measurement Dataset for Machine Learning-Driven 5G Mobile Network Analysis","authors":"Hendrik Schippers;Melina Geis;Stefan Böcker;Christian Wietfeld","doi":"10.1109/TMLCN.2025.3564239","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3564239","url":null,"abstract":"Future mobile use cases such as teleoperation rely on highly available mobile networks. Due to the nature of the mobile access channel and the inherent competition, the availability may be restricted in certain initially unknown areas or timespans. We automated mobile network data acquisition using a smartphone application and dedicated hardware to address this challenge, providing detailed connectivity insights. DoNext, a massive dataset of 4G and 5G mobile network data and active measurements, was collected over two years in Dortmund, Germany. To the best ofour knowledge, it is the most extensive openly available mobile dataset. Machine learning methods were applied to the data to demonstrate its utility inkey performance indicator prediction. Radio environmental maps facilitating key performance indicator predictions and application planning across different locations are generated through spatial aggregation for in-advance predictions. We also showcase signal strength modeling with transfer learning for arbitrary locations in individual mobile network cells, covering private and restricted areas. By openly providing the dataset, we aim to enable other researchers to develop and evaluate their machine-learning methods without conducting extensive measurement campaigns.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"585-604"},"PeriodicalIF":0.0,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10976440","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143937922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-21 | DOI: 10.1109/TMLCN.2025.3562557
Anik Roy;Serene Banerjee;Jishnu Sadasivan;Arnab Sarkar;Soumyajit Dey
Next-generation wireless networks, 6G and beyond, envision integrating communication and sensing to overcome interference, improve spectrum efficiency, and reduce hardware and power consumption. Massive Multiple-Input Multiple-Output (mMIMO)-based Joint Communication and Sensing (JCAS) systems realize this integration for 6G applications such as autonomous driving, which requires accurate environmental sensing and time-critical communication with neighbouring vehicles. Reinforcement Learning (RL) has been used for mMIMO antenna beamforming in the existing literature. However, the huge search space of actions associated with antenna beamforming makes the learning process for the RL agent inefficient due to high beam training overhead. The learning process does not consider the causal relationship between the action space and the reward, and gives all actions equal importance. In this work, we explore a causally-aware RL agent that can intervene and discover causal relationships in mMIMO-based JCAS environments during the training phase. We use a state-dependent action dimension selection strategy to realize causal discovery for RL-based JCAS. Evaluation of the causally-aware RL framework in different JCAS scenarios shows the benefit of our proposed solution over baseline methods in terms of achieved reward. We show that, in the presence of interfering users and sensing signal clutter, our proposed solution achieves a 30% higher data rate than the communication-only state-of-the-art beam pattern learning method while maintaining sensing performance.
{"title":"Causally-Aware Reinforcement Learning for Joint Communication and Sensing","authors":"Anik Roy;Serene Banerjee;Jishnu Sadasivan;Arnab Sarkar;Soumyajit Dey","doi":"10.1109/TMLCN.2025.3562557","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3562557","url":null,"abstract":"The next-generation wireless network, 6G and beyond, envisions to integrate communication and sensing to overcome interference, improve spectrum efficiency, and reduce hardware and power consumption. Massive Multiple-Input Multiple Output (mMIMO)-based Joint Communication and Sensing (JCAS) systems realize this integration for 6G applications such as autonomous driving, as it requires accurate environmental sensing and time-critical communication with neighbouring vehicles. Reinforcement Learning (RL) is used for mMIMO antenna beamforming in the existing literature. However, the huge search space for actions associated with antenna beamforming causes the learning process for the RL agent to be inefficient due to high beam training overhead. The learning process does not consider the causal relationship between action space and the reward, and gives all actions equal importance. In this work, we explore a causally-aware RL agent which can intervene and discover causal relationships for mMIMO-based JCAS environments, during the training phase. We use a state dependent action dimension selection strategy to realize causal discovery for RL-based JCAS. Evaluation of the causally-aware RL framework in different JCAS scenarios shows the benefit of our proposed solution over baseline methods in terms of the higher reward. We have shown that in the presence of interfering users and sensing signal clutters, our proposed solution achieves 30% higher data rate in comparison to the communication-only state-of-the-art beam pattern learning method while maintaining sensing performance.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"552-567"},"PeriodicalIF":0.0,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10971373","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143896251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-09 | DOI: 10.1109/TMLCN.2025.3559472
Hamidreza Hashempoor;Wan Choi
We introduce a novel structure empowered by deep learning models, accompanied by a thorough training methodology, for enhancing channel estimation and data detection in multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. Central to our approach is the incorporation of a Denoising Block, which comprises three meticulously designed deep neural networks (DNNs) tasked with accurately extracting noiseless embeddings from the received signal. Alongside it, we develop the Correctness Classifier, a classification algorithm adept at distinguishing correctly detected data by leveraging the denoised received signal. By selectively utilizing these identified data symbols as additional pilot signals, we augment the available pilot signals for channel estimation. Our Denoising Block also enables direct data detection, rendering the system well-suited for low-latency applications. To enable model training, we propose a hybrid likelihood objective over the detected symbols. We analytically derive the gradients with respect to the hybrid likelihood, enabling us to successfully complete the training phase. Compared to other conventional methods, experiments and simulations show that the proposed data-aided channel estimator significantly lowers the mean squared error (MSE) of the estimation and thus improves data detection performance. The GitHub repository is available at https://github.com/Hamidreza-Hashempoor/5g-dataaided-channel-estimate.
{"title":"Deep Learning-Based Data-Assisted Channel Estimation and Detection","authors":"Hamidreza Hashempoor;Wan Choi","doi":"10.1109/TMLCN.2025.3559472","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3559472","url":null,"abstract":"We introduce a novel structure empowered by deep learning models, accompanied by a thorough training methodology, for enhancing channel estimation and data detection in multiple input multiple output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. Central to our approach is the incorporation of a Denoising Block, which comprises three meticulously designed deep neural networks (DNNs) tasked with accurately extracting noiseless embeddings from the received signal. Alongside, we develop the Correctness Classifier, a classification algorithm adept at distinguishing correctly detected data by leveraging the denoised received signal. By selectively utilizing these identified data symbols as additional pilot signals, we augment the available pilot signals for channel estimation. Our Denoising Block also enables direct data detection, rendering the system well-suited for low-latency applications. To enable model training, we propose a hybrid likelihood objective of the detected symbols. We analytically derive the gradients with respect to the hybrid likelihood, enabling us to successfully complete the training phase. When compared to other conventional methods, experiments and simulations show that the proposed data-aided channel estimator significantly lowers the mean-squared-error (MSE) of the estimation and thus improves data detection performance. Github repository link is <uri>https://github.com/Hamidreza-Hashempoor/5g-dataaided-channel-estimate</uri>.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"534-551"},"PeriodicalIF":0.0,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10960353","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143875215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-08 | DOI: 10.1109/TMLCN.2025.3558204
Jingxuan Chen;Dianrun Huang;Yijie Wang;Ziping Yu;Zhongliang Zhao;Xianbin Cao;Yang Liu;Tony Q. S. Quek;Dapeng Oliver Wu
Vehicular Ad-hoc Networks (VANETs) have gained significant attention as a key enabler for intelligent transportation systems, facilitating vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. Despite their potential, VANETs face critical challenges in maintaining reliable end-to-end connectivity due to their highly dynamic topology and sparse node distribution, particularly in areas with limited infrastructure coverage. Addressing these limitations is crucial for advancing the reliability and scalability of VANETs. To bridge these gaps, this work introduces a heterogeneous UAV-aided VANET framework that leverages uncrewed aerial vehicles (UAVs), also known as autonomous aerial vehicles, to enhance data transmission. The key contributions of this paper include: 1) the design of a novel adaptive dual-model routing (ADMR) protocol that operates in two modes: direct vehicle clustering for intra-cluster communication and UAV/RSU-assisted routing for inter-cluster communication; 2) the development of a modified density-based clustering algorithm (MDBSCAN) for dynamic vehicle node clustering; and 3) an improved UAV trajectory planning method based on a multi-agent soft actor-critic (MASAC) deep reinforcement learning algorithm, which optimizes network reachability. Simulation results reveal that the UAV trajectory optimization method achieves higher network reachability ratios compared to existing approaches. Also, the proposed ADMR protocol improves the packet delivery ratio (PDR) while maintaining low end-to-end latency. These findings demonstrate the potential to enhance VANET performance, while also providing valuable insights for the development of intelligent transportation systems and related fields.
{"title":"Enhancing Routing Performance Through Trajectory Planning With DRL in UAV-Aided VANETs","authors":"Jingxuan Chen;Dianrun Huang;Yijie Wang;Ziping Yu;Zhongliang Zhao;Xianbin Cao;Yang Liu;Tony Q. S. Quek;Dapeng Oliver Wu","doi":"10.1109/TMLCN.2025.3558204","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3558204","url":null,"abstract":"Vehicular Ad-hoc Networks (VANETs) have gained significant attention as a key enabler for intelligent transportation systems, facilitating vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. Despite their potential, VANETs face critical challenges in maintaining reliable end-to-end connectivity due to their highly dynamic topology and sparse node distribution, particularly in areas with limited infrastructure coverage. Addressing these limitations is crucial for advancing the reliability and scalability of VANETs. To bridge these gaps, this work introduces a heterogeneous UAV-aided VANET framework that leverages uncrewed aerial vehicles (UAVs), also known as autonomous aerial vehicles, to enhance data transmission. The key contributions of this paper include: 1) the design of a novel adaptive dual-model routing (ADMR) protocol that operates in two modes: direct vehicle clustering for intra-cluster communication and UAV/RSU-assisted routing for inter-cluster communication; 2) the development of a modified density-based clustering algorithm (MDBSCAN) for dynamic vehicle node clustering; and 3) an improved UAV trajectory planning method based on a multi-agent soft actor-critic (MASAC) deep reinforcement learning algorithm, which optimizes network reachability. Simulation results reveal that the UAV trajectory optimization method achieves higher network reachability ratios compared to existing approaches. Also, the proposed ADMR protocol improves the packet delivery ratio (PDR) while maintaining low end-to-end latency. These findings demonstrate the potential to enhance VANET performance, while also providing valuable insights for the development of intelligent transportation systems and related fields.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"517-533"},"PeriodicalIF":0.0,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10951108","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143845541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}