
Latest publications in IEEE Transactions on Machine Learning in Communications and Networking

Multimodal Collaborative Perception for Dynamic Channel Prediction in 6G V2X Networks
Pub Date : 2025-06-11 DOI: 10.1109/TMLCN.2025.3578577
Ghazi Gharsallah;Georges Kaddoum
The evolution toward sixth-generation (6G) wireless communications introduces unprecedented demands for ultra-reliable low-latency communication (URLLC) in vehicle-to-everything (V2X) networks, where fast-moving vehicles and the use of high-frequency bands make it challenging to acquire the channel state information to maintain high-quality connectivity. Traditional methods for estimating channel coefficients rely on pilot symbols transmitted during each coherence interval; however, the combination of high mobility and high frequencies significantly reduces the coherence times, necessitating substantial bandwidth for pilot transmission. Consequently, these conventional approaches are becoming inadequate, potentially causing inefficient channel estimation and degraded throughput in such dynamic environments. This paper presents a novel multimodal collaborative perception framework for dynamic channel prediction in 6G V2X networks, integrating LiDAR data to enhance the accuracy and robustness of channel predictions. Our approach synergizes information from connected agents and infrastructure, enabling a more comprehensive understanding of the dynamic vehicular environment. A key innovation in our framework is the prediction horizon optimization (PHO) component, which dynamically adjusts the prediction interval based on real-time evaluations of channel conditions, ensuring that predictions remain relevant and accurate. Extensive simulations using the MVX (Multimodal V2X) high-fidelity co-simulation framework demonstrate the effectiveness of our solution. Compared to baseline methods—namely, a classical LS-LMMSE approach and a wireless-based model that solely relies on channel measurements—our framework achieves up to a 30.82% reduction in mean squared error (MSE) and a 32.76% increase in goodput. 
These gains underscore the efficiency of the PHO component in reducing prediction errors, maintaining low bit error rates, and meeting the stringent requirements of 6G V2X communications. Consequently, our framework establishes a new benchmark for AI-driven channel prediction in next-generation wireless networks, particularly in challenging urban and rural scenarios.
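The abstract does not give the PHO rule in closed form; purely as an illustration, a minimal sketch of a horizon controller in the same spirit — shorten the prediction interval when recent prediction error rises, extend it when the channel is stable. The class name, thresholds, and step factors are all hypothetical, not the paper's design:

```python
class PredictionHorizonOptimizer:
    """Illustrative horizon controller: shrink the prediction interval when
    the recent error estimate degrades, extend it when conditions are stable.
    Thresholds and step factors are hypothetical, not from the paper."""

    def __init__(self, horizon_ms=10.0, min_ms=1.0, max_ms=50.0,
                 degrade_thresh=0.1, stable_thresh=0.02):
        self.horizon_ms = horizon_ms
        self.min_ms = min_ms
        self.max_ms = max_ms
        self.degrade_thresh = degrade_thresh
        self.stable_thresh = stable_thresh

    def update(self, recent_mse):
        # Poor recent predictions -> predict over a shorter interval.
        if recent_mse > self.degrade_thresh:
            self.horizon_ms = max(self.min_ms, self.horizon_ms * 0.5)
        # Stable channel -> a longer horizon reduces pilot/compute overhead.
        elif recent_mse < self.stable_thresh:
            self.horizon_ms = min(self.max_ms, self.horizon_ms * 1.5)
        return self.horizon_ms

pho = PredictionHorizonOptimizer()
h1 = pho.update(recent_mse=0.5)   # degraded channel: horizon halves to 5.0
h2 = pho.update(recent_mse=0.01)  # stable channel: horizon grows to 7.5
```

The actual PHO component evaluates real channel conditions rather than a scalar MSE, but the feedback loop it implements has this general shape.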
Citations: 0
Dual Self-Attention is What You Need for Model Drift Detection in 6G Networks
Pub Date : 2025-06-04 DOI: 10.1109/TMLCN.2025.3576727
Mazene Ameur;Bouziane Brik;Adlen Ksentini
The advent of 6G networks heralds a transformative shift in communication technology, with Artificial Intelligence (AI) and Machine Learning (ML) forming the backbone of its architecture and operations. However, the dynamic nature of 6G environments renders these models vulnerable to performance degradation due to model drift. Existing drift detection approaches, despite advancements, often fail to address the diverse and complex types of drift encountered in telecommunications, particularly in time-series data. To bridge this gap, we propose, for the first time, a novel drift detection framework featuring a Dual Self-Attention AutoEncoder (DSA-AE) designed to handle all major manifestations of drift in 6G networks, including data, label, and concept drift. This architectural design leverages the autoencoder’s reconstruction capabilities to monitor both input features and target variables, effectively detecting data and label drift. Additionally, its dual self-attention mechanisms comprising feature and temporal attention blocks capture spatiotemporal fluctuations, addressing concept drift. Extensive evaluations across three diverse telecommunications datasets (two time-series and one non-time-series) demonstrate that our framework achieves substantial advancements over state-of-the-art methods, delivering over a 13.6% improvement in drift detection accuracy and a remarkable 94.7% reduction in detection latency. By balancing higher accuracy with lower latency, this approach offers a robust and efficient solution for model drift detection in the dynamic and complex landscape of 6G networks.
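The abstract describes the dual mechanism only at a high level; purely as an illustration, a numpy sketch of "dual" self-attention applied along the time axis and the feature axis of a KPI window, with reconstruction error as a drift score. The identity Q/K/V projections and all shapes are simplifying assumptions, not the paper's trained DSA-AE:

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention with identity Q/K/V projections:
    a deliberately minimal stand-in for the learned attention blocks."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # softmax stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def dsa_reconstruct(window):
    """window: (time_steps, features). Temporal attention mixes across
    time steps; feature attention mixes across features."""
    temporal = self_attention(window)        # attends over rows (time)
    feature = self_attention(window.T).T     # attends over columns (features)
    return 0.5 * (temporal + feature)

def drift_score(window):
    """Mean squared reconstruction error; in a DSA-AE-style detector a
    persistently high score would be flagged as drift."""
    return float(np.mean((window - dsa_reconstruct(window)) ** 2))

rng = np.random.default_rng(0)
window = rng.normal(size=(16, 4))   # 16 time steps of 4 network KPIs
score = drift_score(window)
```

In the actual framework the autoencoder is trained, so low reconstruction error on in-distribution traffic is learned rather than assumed.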
Citations: 0
AI-Powered System for an Efficient and Effective Cyber Incidents Detection and Response in Cloud Environments
Pub Date : 2025-04-28 DOI: 10.1109/TMLCN.2025.3564912
Mohammed Ashfaaq M. Farzaan;Mohamed Chahine Ghanem;Ayman El-Hajjar;Deepthi N. Ratnayake
The growing complexity and frequency of cyber threats in cloud environments call for innovative and automated solutions to maintain effective and efficient incident response. This study tackles this urgent issue by introducing a cutting-edge AI-driven cyber incident response system specifically designed for cloud platforms. Unlike conventional methods, our system employs advanced Artificial Intelligence (AI) and Machine Learning (ML) techniques to provide accurate, scalable, and seamless integration with platforms like Google Cloud and Microsoft Azure. Key features include an automated pipeline that integrates Network Traffic Classification, Web Intrusion Detection, and Post-Incident Malware Analysis into a cohesive framework implemented via a Flask application. To validate the effectiveness of the system, we tested it using three prominent datasets: NSL-KDD, UNSW-NB15, and CIC-IDS-2017. The Random Forest model achieved accuracies of 90%, 75%, and 99%, respectively, for the classification of network traffic, while it attained 96% precision for malware analysis. Furthermore, a neural network-based malware analysis model set a new benchmark with an impressive accuracy rate of 99%. By incorporating deep learning models with cloud-based GPUs and TPUs, we demonstrate how to meet high computational demands without compromising efficiency. Furthermore, containerisation ensures that the system is both scalable and portable across a wide range of cloud environments. By reducing incident response times, lowering operational risks, and offering cost-effective deployment, our system equips organizations with a robust tool to proactively safeguard their cloud infrastructure. This innovative integration of AI and containerised architecture not only sets a new benchmark in threat detection but also significantly advances the state-of-the-art in cybersecurity, promising transformative benefits for critical industries. 
This research makes a significant contribution to the field of AI-powered cybersecurity by showcasing the powerful combination of AI models and cloud infrastructure to fill critical gaps in cyber incident response. Our findings emphasise the superior performance of Random Forest and deep learning models in accurately identifying and classifying cyber threats, setting a new standard for real-world deployment in cloud environments.
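A drastically simplified sketch of how the three stages could be chained into one response decision. All thresholds and signatures below are invented placeholders; the paper's system uses trained Random Forest and neural models behind a Flask application, not these hand-written rules:

```python
def classify_traffic(flow):
    """Toy stand-in for the traffic classifier: flag flows with an
    anomalously high packet rate. Threshold is hypothetical."""
    return "suspicious" if flow["packets_per_s"] > 1000 else "benign"

def detect_web_intrusion(request_path):
    """Naive signature check standing in for the web-intrusion stage."""
    signatures = ("../", "<script", "' OR 1=1")
    return any(sig in request_path for sig in signatures)

def analyze_malware(sample_entropy):
    """Packed/encrypted payloads tend to have high byte entropy."""
    return "likely-malware" if sample_entropy > 7.0 else "clean"

def respond(flow, request_path, sample_entropy):
    """Chain the three stages into a single incident-response verdict."""
    verdicts = {
        "traffic": classify_traffic(flow),
        "web": detect_web_intrusion(request_path),
        "malware": analyze_malware(sample_entropy),
    }
    verdicts["escalate"] = (verdicts["traffic"] == "suspicious"
                            or verdicts["web"]
                            or verdicts["malware"] == "likely-malware")
    return verdicts

result = respond({"packets_per_s": 5000}, "/login?user=' OR 1=1", 7.8)
```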
Citations: 0
Deep Joint Source Channel Coding for Privacy-Aware End-to-End Image Transmission
Pub Date : 2025-04-28 DOI: 10.1109/TMLCN.2025.3564907
Mehdi Letafati;Seyyed Amirhossein Ameli Kalkhoran;Ecenaz Erdemir;Babak Hossein Khalaj;Hamid Behroozi;Deniz Gündüz
Deep neural network (DNN)-based joint source and channel coding is proposed for privacy-aware end-to-end image transmission against multiple eavesdroppers. Both scenarios of colluding and non-colluding eavesdroppers are considered. Unlike prior works that assume perfectly known and independent identically distributed (i.i.d.) source and channel statistics, the proposed scheme operates under unknown and non-i.i.d. conditions, making it more applicable to real-world scenarios. The goal is to transmit images with minimum distortion, while simultaneously preventing eavesdroppers from inferring certain private attributes of images. Simultaneously generalizing the ideas of privacy funnel and wiretap coding, a multi-objective optimization framework is formulated that characterizes the trade-off between image reconstruction quality and information leakage to eavesdroppers, taking into account the structural similarity index (SSIM) for improving the perceptual quality of image reconstruction. Extensive experiments on the CIFAR-10 and CelebA datasets, along with ablation studies, demonstrate significant performance improvements in terms of SSIM, adversarial accuracy, and mutual information leakage compared to benchmarks. Experiments show that the proposed scheme restrains the adversarially-trained eavesdroppers from intercepting privatized data for both cases of eavesdropping a common secret, as well as the case in which eavesdroppers are interested in different secrets. Furthermore, useful insights on the privacy-utility trade-off are also provided.
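As a purely illustrative reading of the multi-objective trade-off, a sketch of a weighted loss combining MSE, an SSIM-based distortion term, and a leakage penalty. The weights, the single-window "global" SSIM simplification (the standard SSIM uses sliding windows), and the scalar leakage input are assumptions, not the paper's formulation:

```python
import numpy as np

def mse(x, x_hat):
    return float(np.mean((x - x_hat) ** 2))

def global_ssim(x, x_hat, c1=1e-4, c2=9e-4):
    """SSIM computed over the whole image as a single window -- a
    simplification of the usual sliding-window SSIM."""
    mu_x, mu_y = x.mean(), x_hat.mean()
    var_x, var_y = x.var(), x_hat.var()
    cov = np.mean((x - mu_x) * (x_hat - mu_y))
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)))

def privacy_aware_loss(x, x_hat, leakage, alpha=0.5, lam=1.0):
    """Weighted multi-objective loss: distortion (MSE and 1 - SSIM) plus a
    penalty on estimated information leakage to eavesdroppers.
    alpha and lam are hypothetical trade-off weights."""
    distortion = alpha * mse(x, x_hat) + (1 - alpha) * (1.0 - global_ssim(x, x_hat))
    return distortion + lam * leakage

rng = np.random.default_rng(1)
img = rng.random((8, 8))
```

A perfect reconstruction with zero leakage drives this loss to zero; increasing either distortion or leakage raises it, which is the trade-off the framework optimizes.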
Citations: 0
Evolving ML-Based Intrusion Detection: Cyber Threat Intelligence for Dynamic Model Updates
Pub Date : 2025-04-28 DOI: 10.1109/TMLCN.2025.3564587
Ying-Dar Lin;Yi-Hsin Lu;Ren-Hung Hwang;Yuan-Cheng Lai;Didik Sudyana;Wei-Bin Lee
Existing Intrusion Detection System (IDS) relies on pre-trained models that struggle to keep pace with the evolving nature of network threats, as they cannot detect new types of network attacks until updated. Cyber Threat Intelligence (CTI) is analyzed by professional teams and shared among organizations for collective defense. However, due to its diverse forms, existing research often only analyzes reports and extracts Indicators of Compromise (IoC) to create an IoC Database for configuring blocklists, a method that attackers can easily circumvent. Our study introduces a unified solution named Dynamic IDS with CTI Integrated (DICI), which focuses on enhancing IDS capabilities by integrating continuously updated CTI. This approach involves two key AI models: the first serves as the IDS Model, detecting network traffic, while the second, the CTI Transfer Model, analyzes and transforms CTI into actionable training data. The CTI Transfer Model continuously converts CTI information into training data for IDS, enabling dynamic model updates that improve and adapt to emerging threats dynamically. Our experimental results show that DICI significantly enhances detection capabilities. Integrating the IDS Model with CTI in DICI improved the F1 score by 9.29% compared to the system without CTI, allowing for more effective detection of complex threats such as port obfuscation and port hopping attacks. Furthermore, within the CTI Transfer Model, involving the ML method led to a 30.92% F1 score improvement over heuristic methods. These results confirm that continuously integrating CTI within DICI substantially boosts its ability to detect and respond to new types of cyber attacks.
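The CTI Transfer Model's role — turning shared intelligence into labeled training data that triggers a dynamic retrain — can be caricatured as follows. The report schema, features, and retrain threshold are hypothetical, not DICI's actual design:

```python
import ipaddress

def cti_to_training_rows(report):
    """Toy CTI-transfer step: convert one shared CTI report into labeled
    feature rows an IDS model could be retrained on."""
    rows = []
    for ioc in report["iocs"]:
        features = {
            "dst_is_private": ipaddress.ip_address(ioc["ip"]).is_private,
            "dst_port": ioc["port"],
            "bytes_per_flow": ioc.get("bytes_per_flow", 0),
        }
        rows.append((features, report["label"]))
    return rows

def update_training_set(training_set, report, retrain_threshold=100):
    """Append converted CTI rows; signal a retrain once enough new labeled
    data has accumulated (the 'dynamic model update' step)."""
    training_set.extend(cti_to_training_rows(report))
    return len(training_set) >= retrain_threshold

report = {
    "label": "port-hopping",
    "iocs": [{"ip": "93.184.216.34", "port": 4444},
             {"ip": "93.184.216.35", "port": 4445}],
}
dataset = []
needs_retrain = update_training_set(dataset, report, retrain_threshold=2)
```

The point of training on derived features rather than raw IoC blocklists is the one the abstract makes: a blocklist is circumvented by changing the indicator, while a retrained model can generalize to the behavior.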
Citations: 0
Unsupervised Learning for Distributed Downlink Power Allocation in Cell-Free mMIMO Networks
Pub Date : 2025-04-25 DOI: 10.1109/TMLCN.2025.3562808
Mattia Fabiani;Asmaa Abdallah;Abdulkadir Celik;Omer Haliloglu;Ahmed M. Eltawil
Cell-free massive multiple-input multiple-output (CF-mMIMO) surmounts conventional cellular network limitations in terms of coverage, capacity, and interference management. This paper aims to introduce a novel unsupervised learning framework for the downlink (DL) power allocation problem in CF-mMIMO networks, utilizing only large-scale fading (LSF) coefficients as input, rather than the hard-to-obtain exact user location or channel state information (CSI). Both centralized and distributed CF-mMIMO power control learning frameworks are explored, with deep neural networks (DNNs) trained to estimate power coefficients while addressing the constraints of pilot contamination and power budgets. For both learning frameworks, the proposed approach is utilized to maximize three well-known power control objectives under maximum-ratio and regularized zero-forcing precoding schemes: 1) sum of spectral efficiency, 2) minimum signal-to-interference-plus-noise ratio (SINR) for max-min fairness, and 3) product of SINRs for proportional fairness, for each of which customized loss functions are formulated. The proposed unsupervised learning approach circumvents the arduous task of training data computations, typically required in supervised learning methods, bypassing the use of conventional complex optimization methods and heuristic methodologies. Furthermore, an LSF-based radio unit (RU) selection algorithm is employed to activate only the contributing RUs, allowing efficient utilization of network resources. Simulation results demonstrate that our proposed unsupervised learning framework outperforms existing supervised learning and heuristic solutions, showcasing an improvement of up to 20% in spectral efficiency and more than 40% in terms of energy efficiency compared to state-of-the-art supervised learning counterparts.
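For the first objective, a loss in the spirit of "negative sum of spectral efficiencies" can be sketched for a toy single-RU downlink; minimizing it with gradient descent over the DNN's power outputs is the unsupervised idea. The gains, noise normalization, and single-RU interference model are illustrative assumptions, not the paper's cell-free system model:

```python
import numpy as np

def sinr(powers, gains, noise=1.0):
    """Downlink SINR per user in a toy single-RU setup: gains[k] is the
    large-scale fading gain to user k, and the interference at user k is
    every other user's received power."""
    received = powers * gains
    interference = received.sum() - received
    return received / (interference + noise)

def neg_sum_se_loss(powers, gains, noise=1.0):
    """Unsupervised objective: minimize the negative sum of spectral
    efficiencies log2(1 + SINR) -- no labeled 'optimal' powers needed."""
    return -float(np.sum(np.log2(1.0 + sinr(powers, gains, noise))))

gains = np.array([1.0, 0.5, 0.25])       # LSF coefficients (illustrative)
p_uniform = np.array([1.0, 1.0, 1.0])    # uniform power allocation
loss = neg_sum_se_loss(p_uniform, gains)
```

The max-min and proportional-fairness objectives from the abstract would swap the sum for a `min` over users or a sum of `log(SINR)` terms, respectively.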
{"title":"Unsupervised Learning for Distributed Downlink Power Allocation in Cell-Free mMIMO Networks","authors":"Mattia Fabiani;Asmaa Abdallah;Abdulkadir Celik;Omer Haliloglu;Ahmed M. Eltawil","doi":"10.1109/TMLCN.2025.3562808","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3562808","url":null,"abstract":"Cell-free massive multiple-input multiple-output (CF-mMIMO) surmounts conventional cellular network limitations in terms of coverage, capacity, and interference management. This paper aims to introduce a novel unsupervised learning framework for the downlink (DL) power allocation problem in CF-mMIMO networks, utilizing only large-scale fading (LSF) coefficients as input, rather than the hard-to-obtain exact user location or channel state information (CSI). Both centralized and distributed CF-mMIMO power control learning frameworks are explored, with deep neural networks (DNNs) trained to estimate power coefficients while addressing the constraints of pilot contamination and power budgets. For both learning frameworks, the proposed approach is utilized to maximize three well-known power control objectives under maximum-ratio and regularized zero-forcing precoding schemes: 1) sum of spectral efficiency, 2) minimum signal-to-interference-plus-noise ratio (SINR) for max-min fairness, and 3) product of SINRs for proportional fairness, for each of which customized loss functions are formulated. The proposed unsupervised learning approach circumvents the arduous task of training data computations, typically required in supervised learning methods, bypassing the use of conventional complex optimization methods and heuristic methodologies. Furthermore, an LSF-based radio unit (RU) selection algorithm is employed to activate only the contributing RUs, allowing efficient utilization of network resources. 
Simulation results demonstrate that our proposed unsupervised learning framework outperforms existing supervised learning and heuristic solutions, showcasing an improvement of up to 20% in spectral efficiency and more than 40% in terms of energy efficiency compared to state-of-the-art supervised learning counterparts.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"644-658"},"PeriodicalIF":0.0,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10976604","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144099971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
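The three power-control objectives listed in the abstract reduce to simple functions of the per-user SINRs, each negated to form an unsupervised loss. A minimal numpy sketch with illustrative SINR values (in the paper these come from a DNN fed with LSF coefficients; the numbers here are toy values, not from the paper):

```python
import numpy as np

# Toy per-user SINRs (linear scale) produced by some candidate power allocation.
# Illustrative values only; the paper obtains them from a DNN driven by
# large-scale fading coefficients.
sinr = np.array([4.0, 1.5, 9.0, 2.5])

# 1) Sum spectral efficiency: sum_k log2(1 + SINR_k)
sum_se = np.sum(np.log2(1.0 + sinr))

# 2) Max-min fairness: maximize the worst user's SINR
min_sinr = np.min(sinr)

# 3) Proportional fairness: product of SINRs, kept in the log domain for
#    numerical stability
log_prod_sinr = np.sum(np.log(sinr))

# Unsupervised training minimizes the negated objective:
loss_sum_se = -sum_se
loss_max_min = -min_sinr
loss_prop_fair = -log_prod_sinr
print(round(sum_se, 3), min_sinr, round(log_prod_sinr, 3))
```

Negating each objective yields a loss that gradient descent can minimize directly on the network output, which is what lets the framework skip labeled training data.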
DoNext: An Open-Access Measurement Dataset for Machine Learning-Driven 5G Mobile Network Analysis
Pub Date : 2025-04-24 DOI: 10.1109/TMLCN.2025.3564239
Hendrik Schippers;Melina Geis;Stefan Böcker;Christian Wietfeld
Future mobile use cases such as teleoperation rely on highly available mobile networks. Due to the nature of the mobile access channel and the inherent competition for it, availability may be restricted in certain, initially unknown, areas or timespans. To address this challenge, we automated mobile network data acquisition using a smartphone application and dedicated hardware, providing detailed connectivity insights. DoNext, a massive dataset of 4G and 5G mobile network data and active measurements, was collected over two years in Dortmund, Germany. To the best of our knowledge, it is the most extensive openly available mobile dataset. Machine learning methods were applied to the data to demonstrate its utility in key performance indicator prediction. Radio environment maps facilitating key performance indicator predictions and application planning across different locations are generated through spatial aggregation for in-advance predictions. We also showcase signal strength modeling with transfer learning for arbitrary locations in individual mobile network cells, covering private and restricted areas. By openly providing the dataset, we aim to enable other researchers to develop and evaluate their machine-learning methods without conducting extensive measurement campaigns.
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 585-604.
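Spatial aggregation into a radio environment map amounts to averaging measurements per grid cell. A toy numpy sketch with synthetic drive-test samples (the positions, grid size, and path-loss trend are assumptions for illustration, not DoNext values):

```python
import numpy as np

# Synthetic drive-test samples: (x, y) positions in metres and RSRP in dBm.
# Sample count, grid size, and the path-loss trend are illustrative only.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(500, 2))
rsrp = -70.0 - 0.3 * xy[:, 0] + rng.normal(0, 2, 500)  # toy trend: weaker with x

cell = 20.0  # grid cell edge length in metres -> a 5x5 map
ix = np.minimum((xy[:, 0] // cell).astype(int), 4)
iy = np.minimum((xy[:, 1] // cell).astype(int), 4)

rem = np.full((5, 5), np.nan)  # radio environment map
for i in range(5):
    for j in range(5):
        mask = (ix == i) & (iy == j)
        if mask.any():
            rem[i, j] = rsrp[mask].mean()  # aggregate measurements per cell

# Aggregated signal should weaken along x, matching the injected trend.
print(np.nanmean(rem[0]) > np.nanmean(rem[4]))
```

A per-cell mean like this is the simplest aggregate; a real map would also track sample counts and variance per cell to flag unreliable regions.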
Causally-Aware Reinforcement Learning for Joint Communication and Sensing
Pub Date : 2025-04-21 DOI: 10.1109/TMLCN.2025.3562557
Anik Roy;Serene Banerjee;Jishnu Sadasivan;Arnab Sarkar;Soumyajit Dey
The next-generation wireless network, 6G and beyond, envisions integrating communication and sensing to overcome interference, improve spectrum efficiency, and reduce hardware and power consumption. Massive Multiple-Input Multiple-Output (mMIMO)-based Joint Communication and Sensing (JCAS) systems realize this integration for 6G applications such as autonomous driving, which require accurate environmental sensing and time-critical communication with neighbouring vehicles. Reinforcement Learning (RL) is used for mMIMO antenna beamforming in the existing literature. However, the huge action search space associated with antenna beamforming makes the RL agent's learning process inefficient due to high beam training overhead. The learning process does not consider the causal relationship between the action space and the reward, and gives all actions equal importance. In this work, we explore a causally-aware RL agent that can intervene and discover causal relationships in mMIMO-based JCAS environments during the training phase. We use a state-dependent action dimension selection strategy to realize causal discovery for RL-based JCAS. Evaluation of the causally-aware RL framework in different JCAS scenarios shows the benefit of our proposed solution over baseline methods in terms of higher reward.
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 552-567.
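The state-dependent action dimension selection idea can be illustrated with a toy sampler that explores only the dimensions judged relevant in the current state; the relevance rule below is a hypothetical stand-in for the paper's causal-discovery step:

```python
import numpy as np

# Toy state-dependent action dimension selection: only dimensions judged
# relevant in the current state are explored; others keep a default value.
# The relevance rule here is a made-up example, not the paper's mechanism.
rng = np.random.default_rng(1)

N_BEAMS = 8
DEFAULT = 0.0

def relevant_dims(state):
    # Toy rule: in a "blocked" state only the first half of the beam
    # dimensions matters; otherwise all dimensions are explored.
    return np.arange(N_BEAMS // 2) if state == "blocked" else np.arange(N_BEAMS)

def sample_action(state):
    action = np.full(N_BEAMS, DEFAULT)
    dims = relevant_dims(state)
    action[dims] = rng.uniform(-1, 1, size=dims.size)  # explore only these dims
    return action

a = sample_action("blocked")
print(np.count_nonzero(a[N_BEAMS // 2:]))  # unexplored dims stay at the default
```

Shrinking the explored action space this way is what reduces beam training overhead: exploration noise is spent only on dimensions that can actually move the reward.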
Deep Learning-Based Data-Assisted Channel Estimation and Detection
Pub Date : 2025-04-09 DOI: 10.1109/TMLCN.2025.3559472
Hamidreza Hashempoor;Wan Choi
We introduce a novel structure empowered by deep learning models, accompanied by a thorough training methodology, for enhancing channel estimation and data detection in multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. Central to our approach is the incorporation of a Denoising Block, which comprises three meticulously designed deep neural networks (DNNs) tasked with accurately extracting noiseless embeddings from the received signal. Alongside, we develop the Correctness Classifier, a classification algorithm adept at distinguishing correctly detected data by leveraging the denoised received signal. By selectively utilizing these identified data symbols as additional pilot signals, we augment the available pilot signals for channel estimation. Our Denoising Block also enables direct data detection, rendering the system well-suited for low-latency applications. To enable model training, we propose a hybrid likelihood objective of the detected symbols. We analytically derive the gradients with respect to the hybrid likelihood, enabling us to successfully complete the training phase. Experiments and simulations show that, compared to conventional methods, the proposed data-aided channel estimator significantly lowers the mean-squared error (MSE) of the estimation and thus improves data detection performance. The GitHub repository is available at https://github.com/Hamidreza-Hashempoor/5g-dataaided-channel-estimate.
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 534-551.
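The data-aided idea — reusing confidently detected symbols as virtual pilots — can be sketched in a single-tap toy channel (an assumed simplification; the paper's setting is MIMO-OFDM with DNN-based denoising and a learned Correctness Classifier):

```python
import numpy as np

# Single-tap sketch of data-aided channel estimation: symbols detected with
# an initial pilot-only LS estimate are reused as virtual pilots to refine it.
# All parameters below are toy assumptions, not the paper's configuration.
rng = np.random.default_rng(7)
h = 0.8 + 0.6j                                       # true unit-magnitude channel
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
n_pilot, n_data, sigma = 4, 200, 0.2

wins = 0
for _ in range(40):
    data = qpsk[rng.integers(0, 4, n_data)]
    x = np.concatenate([np.ones(n_pilot), data])
    noise = sigma / np.sqrt(2) * (rng.normal(size=x.size)
                                  + 1j * rng.normal(size=x.size))
    y = h * x + noise

    # Pilot-only least-squares estimate (pilots are all ones).
    h_ls = y[:n_pilot].mean()
    # Detect data with h_ls, then re-estimate using detected symbols as pilots.
    det = qpsk[np.argmin(np.abs((y[n_pilot:] / h_ls)[:, None] - qpsk), axis=1)]
    x_aug = np.concatenate([np.ones(n_pilot), det])
    h_da = (y @ x_aug.conj()) / (x_aug @ x_aug.conj()).real

    wins += abs(h_da - h) < abs(h_ls - h)

print(wins)  # number of trials (out of 40) where the data-aided estimate is closer
```

Averaging the LS estimate over 204 effective pilots instead of 4 shrinks the estimation error variance by roughly the ratio of the two counts, provided the detections folded back in are mostly correct — which is exactly what the paper's Correctness Classifier is there to ensure.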
Enhancing Routing Performance Through Trajectory Planning With DRL in UAV-Aided VANETs
Pub Date : 2025-04-08 DOI: 10.1109/TMLCN.2025.3558204
Jingxuan Chen;Dianrun Huang;Yijie Wang;Ziping Yu;Zhongliang Zhao;Xianbin Cao;Yang Liu;Tony Q. S. Quek;Dapeng Oliver Wu
Vehicular Ad-hoc Networks (VANETs) have gained significant attention as a key enabler for intelligent transportation systems, facilitating vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. Despite their potential, VANETs face critical challenges in maintaining reliable end-to-end connectivity due to their highly dynamic topology and sparse node distribution, particularly in areas with limited infrastructure coverage. Addressing these limitations is crucial for advancing the reliability and scalability of VANETs. To bridge these gaps, this work introduces a heterogeneous UAV-aided VANET framework that leverages uncrewed aerial vehicles (UAVs), also known as autonomous aerial vehicles, to enhance data transmission. The key contributions of this paper include: 1) the design of a novel adaptive dual-model routing (ADMR) protocol that operates in two modes: direct vehicle clustering for intra-cluster communication and UAV/RSU-assisted routing for inter-cluster communication; 2) the development of a modified density-based clustering algorithm (MDBSCAN) for dynamic vehicle node clustering; and 3) an improved UAV trajectory planning method based on a multi-agent soft actor-critic (MASAC) deep reinforcement learning algorithm, which optimizes network reachability. Simulation results reveal that the UAV trajectory optimization method achieves higher network reachability ratios compared to existing approaches. Also, the proposed ADMR protocol improves the packet delivery ratio (PDR) while maintaining low end-to-end latency. These findings demonstrate the potential to enhance VANET performance, while also providing valuable insights for the development of intelligent transportation systems and related fields.
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 517-533.
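The vehicle clustering step can be illustrated with plain DBSCAN on one-dimensional road positions; the scenario and parameters below are assumptions, and the paper's MDBSCAN modifies this baseline:

```python
import numpy as np

# Toy density-based clustering of vehicle positions along a road, in the
# spirit of the paper's MDBSCAN; this is textbook DBSCAN with assumed
# parameters, not the modified algorithm itself.
def dbscan(points, eps, min_pts):
    n = len(points)
    labels = [-1] * n                      # -1 = noise / unassigned
    dist = np.abs(points[:, None] - points[None, :])
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                       # already labeled, or not a core point
        labels[i] = cluster
        stack = list(neighbors[i])
        while stack:                       # flood-fill the density-connected set
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    stack.extend(neighbors[j])
        cluster += 1
    return labels

# Two platoons of vehicles separated by a coverage gap, plus one straggler.
pos = np.array([0.0, 5.0, 8.0, 12.0, 100.0, 104.0, 109.0, 250.0])
labels = dbscan(pos, eps=15.0, min_pts=2)
print(labels)  # [0, 0, 0, 0, 1, 1, 1, -1]
```

The straggler is labeled noise (-1) — exactly the kind of isolated node for which the UAV/RSU-assisted inter-cluster mode of the proposed ADMR protocol would take over.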