Pub Date: 2025-10-17. DOI: 10.1109/LNET.2025.3613807
Geng Sun;Octavia A. Dobre;Adlen Ksentini;Dusit Niyato;Wen Wu;Jiacheng Wang
The IEEE Networking Letters special issue on Generative AI (GAI) and Large Language Models (LLMs)-enabled Edge Intelligence explores the latest advancements and applications of GAI and LLMs at the network edge, aiming to address the growing demand for intelligence and adaptability driven by the rapid evolution of modern communication systems. Unlike traditional artificial intelligence (AI) technologies, GAI and LLMs possess powerful self-learning, data-modeling, and context-aware capabilities. Specifically, GAI can extract deep features and semantic information from large amounts of data, which allows it to make intelligent predictions and optimize processes for a flexible response to changes in complex environments. Moreover, by simulating network behaviors under different scenarios and generating new data points, GAI can optimize network architecture and expand training datasets to improve model robustness and generalization. Additionally, LLMs can rapidly adapt to diverse tasks through few-shot learning. Furthermore, they can incorporate techniques such as retrieval-augmented generation (RAG), thereby strengthening their capacity to handle complex text-based tasks and achieve precise information generation and intelligent interaction.
{"title":"Guest Editorial Special Issue on Generative AI and Large Language Models-Enabled Edge Intelligence","authors":"Geng Sun;Octavia A. Dobre;Adlen Ksentini;Dusit Niyato;Wen Wu;Jiacheng Wang","doi":"10.1109/LNET.2025.3613807","DOIUrl":"https://doi.org/10.1109/LNET.2025.3613807","url":null,"abstract":"The IEEE Networking Letters special issue on Generative AI (GAI) and Large Language Models (LLMs)-enabled Edge Intelligence explores the latest advancements and applications of GAI and LLMs at the network edge, which aim to address the growing demand for intelligence and adaptability driven by the rapid evolution of modern communication systems. Unlike traditional artificial intelligence (AI) technologies, GAI and LLMs possess powerful self-learning, data-modeling, and context-aware capabilities. Specifically, GAI can extract deep features and semantic information from large amounts of data, which allows it to make intelligent predictions and optimize processes for a flexible response to changes in complex environments. Moreover, by simulating network behaviors under different scenarios and generating new data points, GAI can optimize network architecture and expand training datasets to improve model robustness and generalization. Additionally, LLMs can rapidly adapt to diverse tasks through a few-shot learning. Furthermore, they can incorporate techniques such as retrieval-augmented generation (RAG), thereby strengthening their capacity to handle complex text-based tasks and achieve precise information generation and intelligent interaction.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"7 3","pages":"157-160"},"PeriodicalIF":0.0,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11206727","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145352045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-06. DOI: 10.1109/LNET.2025.3617577
George Makropoulos;Harilaos Koumaras;Nancy Alonistioti
As 5th Generation (5G) and Beyond 5G (B5G) networks evolve, dynamic resource allocation and management are crucial for supporting the diversity of devices and mixed data traffic types. Network slicing enables the logical segmentation of an infrastructure to meet the specific Quality of Service (QoS) requirements posed by applications, but factors such as fluctuating traffic, user mobility, and cross-slice interference pose challenges to proactive resource allocation. Traditional methods struggle with these factors, leading to inefficiencies. Therefore, this letter explores the concept of an AI-driven network performance prediction and resource allocation framework using Temporal Graph Networks (TGNs). By integrating a TGN with the NS-3 simulator, the work in this letter demonstrates an efficient approach to predicting network throughput. The proposed solution advances spatiotemporal Artificial Intelligence (AI) techniques, enabling more accurate prediction of network performance and adaptive resource optimization in support of dynamic network slicing.
{"title":"AI-Driven Dynamic Network Slicing Optimization Leveraging Temporal Graph Networks","authors":"George Makropoulos;Harilaos Koumaras;Nancy Alonistioti","doi":"10.1109/LNET.2025.3617577","DOIUrl":"https://doi.org/10.1109/LNET.2025.3617577","url":null,"abstract":"As 5th Generation (5G) and Beyond 5G (B5G) networks evolve, dynamic resource allocation and management is crucial for supporting the diversity of devices and the mixed data traffic types. Network slicing enables the logical segmentation of an infrastructure to meet specific Quality of Service (QoS) requirements posed by applications, but factors such as fluctuating traffic, user mobility, and cross-slice interference, pose challenges towards proactive resource allocation. Traditional methods struggle with these factors, leading to inefficiencies. Therefore, this letter explores the concept of an AI-driven network performance prediction and resource allocation framework using Temporal Graph Networks (TGNs). By integrating TGN with the NS-3 simulator, the work in this letter demonstrates an efficient approach to predict network throughput. The proposed solution advances spatiotemporal Artificial Intelligence (AI) techniques enabling more accurate prediction of network performance and adaptive resource optimization, supporting dynamic network slicing.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"7 4","pages":"362-366"},"PeriodicalIF":0.0,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-16. DOI: 10.1109/LNET.2025.3610610
Saransh Shankar;Divjot Singh;Anurag Badoni;Mahendra K. Shukla;Om Jee Pandey;Nadjib Aitsaadi
The growing sophistication of cyberattacks demands Intrusion Detection Systems (IDS) that are both accurate and adaptive to diverse network threats. Traditional IDS often suffer degraded performance due to high-dimensional features and severe class imbalance in network traffic datasets. To address these issues, we propose a hybrid IDS framework integrating four optimized models (XGBoost, Long Short-Term Memory, MiniVGGNet, and AlexNet) enhanced through Random Forest Regressor-based feature selection and the Difficult Set Sampling Technique (DSSTE) for class balancing. Two integration strategies are employed: a hard-voting Ensemble and a Mixture of Experts (MoE) with a gating network for adaptive weighting. Comprehensive hyperparameter tuning via Keras Tuner and RandomizedSearchCV maximizes model performance. Evaluated on the CICIDS-2017 dataset, the system achieves detection rates above 99% with micro-average AUC values near 1.0, demonstrating strong generalization and effectiveness in detecting both majority and minority intrusions. The proposed framework holds strong relevance for security-critical domains, particularly wireless health monitoring systems, where ensuring the confidentiality and integrity of sensitive data is vital, thereby underscoring its suitability for real-world deployment.
{"title":"Toward Robust IDS in Network Security: Handling Class Imbalance With Deep Hybrid Architectures","authors":"Saransh Shankar;Divjot Singh;Anurag Badoni;Mahendra K. Shukla;Om Jee Pandey;Nadjib Aitsaadi","doi":"10.1109/LNET.2025.3610610","DOIUrl":"https://doi.org/10.1109/LNET.2025.3610610","url":null,"abstract":"The growing sophistication of cyberattacks demands Intrusion Detection Systems (IDS) that are both accurate and adaptive to diverse network threats. Traditional IDS often suffer degraded performance due to high-dimensional features and severe class imbalance in network traffic datasets. To address these issues, we propose a hybrid IDS framework integrating four optimized models (XGBoost, Long Short-Term Memory, MiniVGGNet, and AlexNet) enhanced through Random Forest Regressor-based feature selection and the Difficult Set Sampling Technique (DSSTE) for class balancing. Two integration strategies are employed: a hard-voting Ensemble and a Mixture of Experts (MoE) with a gating network for adaptive weighting. Comprehensive hyperparameter tuning via Keras Tuner and RandomizedSearchCV maximizes model performance. Evaluated on the CICIDS-2017 dataset, the system achieves detection rates above 99% with micro-average AUC values near 1.0, demonstrating strong generalization and effectiveness in detecting both majority and minority intrusions. The proposed framework holds strong relevance for security-critical domains, particularly wireless health monitoring systems, where ensuring the confidentiality and integrity of sensitive data is vital, thereby underscoring its suitability for real-world deployment.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"7 4","pages":"357-361"},"PeriodicalIF":0.0,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-15. DOI: 10.1109/LNET.2025.3589273
Xinyu Xu;Yingxu Lai;Xiao Zhang
With the increasing openness of network environments in industrial control systems, cybersecurity threats have become increasingly severe. While rule-based intrusion detection remains widely used, such methods are limited by their reliance on expert knowledge and the complexity of rule generation, hindering effective responses. In contrast, deep learning has demonstrated strong capabilities in capturing complex attack patterns from large-scale data, but its lack of interpretability poses significant challenges for deployment in safety-critical industrial settings. To address these challenges, this letter proposes a novel method that integrates deep learning with neuro-symbolic representation to enable automated and high-quality rule generation for intrusion detection. Specifically, the approach leverages a deep neural network to learn a set of candidate rules highly correlated with attack behaviors. A heuristic search strategy is then employed to enhance the interpretability of the rules while maintaining detection effectiveness. Experiments on two public datasets demonstrate that the generated rules achieve high detection accuracy with low false positive rates while maintaining simplicity and clarity, highlighting the method's strong potential for deployment in real-world industrial environments.
{"title":"Explaining Intrusion Detection in Industrial Control Systems Through Rule Set Learning","authors":"Xinyu Xu;Yingxu Lai;Xiao Zhang","doi":"10.1109/LNET.2025.3589273","DOIUrl":"https://doi.org/10.1109/LNET.2025.3589273","url":null,"abstract":"With the increasing openness of network environments in industrial control systems, cybersecurity threats have become increasingly severe. While rule-based intrusion detection remains widely used, such methods are limited by their reliance on expert knowledge and the complexity of rule generation, hindering effective responses. In contrast, deep learning has demonstrated strong capabilities in capturing complex attack patterns from large-scale data, but its lack of interpretability poses significant challenges for deployment in safety-critical industrial settings. To address these challenges, this letter proposes a novel method that integrates deep learning with neuro-symbolic representation to enable automated and high-quality rule generation for intrusion detection. Specifically, the approach leverages a deep neural network to learn a set of candidate rules highly correlated with attack behaviors. A heuristic search strategy is then employed to enhance the interpretability of the rules while maintaining detection effectiveness. Experiments on two public datasets demonstrate that the generated rules achieve high detection accuracy with low false positive rates, while maintaining simplicity and clarity, highlighting its strong potential for deployment in real-world industrial environments.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"7 3","pages":"234-238"},"PeriodicalIF":0.0,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145351978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-08. DOI: 10.1109/LNET.2025.3586883
S. Vengadeswaran;P. Dhavakumar;Viji Viswanathan
The execution of MapReduce (MR) applications in a Hadoop cluster poses significant challenges because (1) grouping semantics in data-intensive applications and (2) heterogeneity of the computing nodes are not considered, resulting in suboptimal block distribution that concentrates execution on fewer nodes, thereby increasing processing time and reducing data locality. This letter proposes improved data placement by exploiting grouping semantics and heterogeneity (GSHetero) to boost MR performance. First, the execution traces are analyzed to identify the data access pattern, and the grouping semantics are extracted by applying the MCL algorithm. The GSHetero algorithm is then proposed, which reorganizes the default data layouts based on the grouping semantics to ensure higher parallelism. The efficiency of GSHetero is demonstrated on a 10-node Hadoop cluster deployed in the cloud by executing Linear Regression over a weather dataset. The results show that GSHetero improves data locality by 27.4% and CPU utilization by 47%. The efficiency of GSHetero is also demonstrated by executing the Hadoop WordCount benchmark on varying cluster sizes (15 and 20 nodes) for varying workloads.
{"title":"GSHetero - Grouping and Heterogeneity-Aware Data Placement to Improve MapReduce Performance in Hadoop","authors":"S. Vengadeswaran;P. Dhavakumar;Viji Viswanathan","doi":"10.1109/LNET.2025.3586883","DOIUrl":"https://doi.org/10.1109/LNET.2025.3586883","url":null,"abstract":"The execution of MapReduce (MR) applications in Hadoop cluster poses significant challenges due to the non consideration of 1. Grouping semantics in Data-intensive applications, 2. Heterogeneity in the computing nodes resulting in suboptimal block distribution, concentrating execution on fewer nodes, thereby increasing processing time and reducing data locality. This letter proposes improved data placement by exploiting grouping semantics and heterogeneity (GSHetero) to boost MR performance. Initially, the execution traces will be analyzed to identify the data access pattern. The grouping semantics are extracted by applying the MCL algorithm. Then GSHetero algorithm is proposed which re-organises the default data layouts based on grouping semantics to ensure higher parallelism. The efficiency of the GSHetero is demonstrated by the 10-node Hadoop cluster deployed on the cloud by executing the Linear Regression over the weather dataset. The results show that GSHetero improves data locality by 27.4% and CPU utilization by 47%. The efficiency of the GSHetero is also demonstrated by executing Hadoop benchmark (WordCount) on varying cluster sizes (15, 20 nodes) for varying workloads.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"7 3","pages":"229-233"},"PeriodicalIF":0.0,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145352071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}