vEdge: Flow-Based Network Slicing for Smart Cities in Edge Cloud Environments
Fekri Saleh; Abraham O. Fapojuwo; Diwakar Krishnamurthy
Pub Date: 2026-01-22 | DOI: 10.1109/TNSM.2026.3656925
Smart city applications require diverse fifth-generation network services with stringent performance and isolation requirements, necessitating scalable and efficient network slicing mechanisms. This paper proposes a novel framework for flow-based network slicing in edge cloud environments, termed virtual edge (vEdge). The framework leverages virtual medium access control addresses to identify flows at the data link layer (Layer 2), achieving robust flow-based slice isolation and efficient resource management. The proposed solution integrates a vEdge software module within the software-defined networking controller to create, manage, and isolate network slices for both Third Generation Partnership Project (3GPP) and non-3GPP devices. By isolating traffic at Layer 2, the framework simplifies address matching and eliminates the computational overhead associated with deep packet inspection at upper layers (e.g., Layer 3/4 or Layer 7). vEdge further provides customizable flow-based network slices, each managed by a dedicated controller, yielding self-contained virtual networks tailored to diverse applications within the smart city sector. Experimental evaluations demonstrate the efficacy of vEdge in enhancing network performance, achieving a 30% reduction in latency compared to flow-based network slicing that uses non-Layer 2 parameters to identify flows.
IEEE Transactions on Network and Service Management, vol. 23, pp. 2104-2115.
QoE-Aware Transport Slicing Configuration: Improving Application Performance in Beyond-5G Networks
Marija Gajić; Marcin Bosk; Stanislav Lange; Thomas Zinner
Pub Date: 2026-01-21 | DOI: 10.1109/TNSM.2026.3656605
5G and beyond networks provide connectivity for a variety of heterogeneous, often mission-critical services, placing stringent performance requirements on these systems. Providing satisfactory Quality of Experience (QoE) for diverse, coexisting applications prompts network operators to enforce application-aware, efficient resource allocation schemes that can improve user satisfaction, efficiency, and system utilization. For these purposes, QoS Flows and network slicing have been identified as key enablers. These concepts move away from economies of scale towards fine-grained slice and flow handling with customized resource control for each application, application type, or slice. This work focuses on transport slicing, where the shift towards fine-grained resource control has important implications for how network resources are scaled and optimally allocated. These aspects have been largely ignored in the existing literature. Furthermore, while capacity has been recognized as a key resource, the choice of queue size, the granularity of the resource allocation scheme, and their relation to the number of clients are often neglected during resource dimensioning. To address these shortcomings, we perform an in-depth evaluation of the effects that these factors have on the overall QoE and system utilization using the OMNeT++ simulator. We show the optimization potential for QoE and resource utilization, and further formulate guidelines for efficient and QoE-aware resource allocation.
IEEE Transactions on Network and Service Management, vol. 23, pp. 2116-2134.
Don’t Let SDN Obsolete: Interpreting Software-Defined Networks With Network Calculus
Xiaofeng Liu; Naigong Zheng; Fuliang Li
Pub Date: 2026-01-19 | DOI: 10.1109/TNSM.2026.3655704
Although Software-Defined Networking (SDN) has gained popularity in real-world deployments for its flexible management paradigm, its centralized control principle leads to various known performance issues. In this paper, we propose SDN-Mirror, a novel generalized analytical delay model based on network calculus, to interpret how performance is affected and to show how it can be improved. We first elaborate on the impact of parameters on packet forwarding delay in SDN, including device capacity, flow features, and cache size. Then, building upon the analysis, we establish SDN-Mirror, which acts like a mirror, capable not only of precisely representing the relation between packet forwarding delay and each parameter but also of verifying the effectiveness of optimization policies. Finally, we evaluate SDN-Mirror by quantifying how each parameter affects the forwarding delay under different table-matching states. We also verify a performance improvement policy with the optimized SDN-Mirror, and experimental results show that the packet forwarding delays of kernel-space-matched, user-space-matched, and unmatched flows can be reduced by 39.8%, 20.7%, and 13.2%, respectively.
IEEE Transactions on Network and Service Management, vol. 23, pp. 2092-2103.
Assurance and Conflict Detection in Intent-Based Networking: A Comprehensive Survey and Insights on Standards and Open-Source Tools
Molka Gharbaoui; Filippo Sciarrone; Mattia Fontana; Piero Castoldi; Barbara Martini
Pub Date: 2026-01-12 | DOI: 10.1109/TNSM.2026.3651896
Intent-Based Networking (IBN) enables operators to specify high-level outcomes while the system translates these intents into concrete policies and configurations. As IBN deployments grow in scale, heterogeneity and dynamicity, ensuring continuous alignment between network behavior and user objectives becomes both essential and increasingly difficult. This paper provides a technical survey of assurance and conflict detection techniques in IBN, with the goal of improving reliability, robustness, and policy compliance. We first position our survey with respect to existing work. We then review current assurance mechanisms, including the use of AI, machine learning, and real-time monitoring for validating intent fulfillment. We also examine conflict detection methods across the intent lifecycle, from capture to implementation. In addition, we outline relevant standardization efforts and open-source tools that support IBN adoption. Finally, we discuss key challenges, such as AI/ML integration, generalization, and scalability, and present a roadmap for future research aimed at strengthening robustness of IBN frameworks.
IEEE Transactions on Network and Service Management, vol. 23, pp. 1891-1912. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11334180
A Novel Contrastive Loss for Zero-Day Network Intrusion Detection
Jack Wilkie; Hanan Hindy; Craig Michie; Christos Tachtatzis; James Irvine; Robert Atkinson
Pub Date: 2026-01-12 | DOI: 10.1109/TNSM.2026.3652529
Machine learning has achieved state-of-the-art results in network intrusion detection; however, its performance significantly degrades when confronted by a new attack class, i.e., a zero-day attack. In simple terms, classical machine learning-based approaches are adept at identifying attack classes on which they have been previously trained, but struggle with those not included in their training data. One approach to addressing this shortcoming is to utilise anomaly detectors, which train exclusively on benign data with the goal of generalising to all attack classes, both known and zero-day. However, this comes at the expense of a prohibitively high false positive rate. This work proposes a novel contrastive loss function which is able to maintain the advantages of other contrastive learning-based approaches (robustness to imbalanced data) but can also generalise to zero-day attacks. Unlike anomaly detectors, this model learns the distributions of benign traffic using both benign and known malign samples, i.e., other well-known attack classes (not including the zero-day class), and consequently achieves significant performance improvements. The proposed approach is experimentally verified on the Lycos2017 dataset, where it achieves AUROC improvements of 0.000065 and 0.060883 over previous models in known and zero-day attack detection, respectively. Finally, the proposed method is extended to open-set recognition, achieving an OpenAUC improvement of 0.170883 over existing approaches.
IEEE Transactions on Network and Service Management, vol. 23, pp. 2064-2076.
Avoiding SDN Application Conflicts With Digital Twins: Design, Models and Proof of Concept
Marco Polverini; Andrés García-López; Juan Luis Herrera; Santiago García-Gil; Francesco G. Lavacca; Antonio Cianfrani; Jaime Galan-Jimenez
Pub Date: 2026-01-12 | DOI: 10.1109/TNSM.2026.3652800
Software-Defined Networking (SDN) enables flexible and programmable control over network behavior through the deployment of multiple control applications. However, when these applications operate simultaneously, each pursuing different and potentially conflicting objectives, unexpected interactions may arise, leading to policy violations, performance degradation, or inefficient resource usage. This paper presents a Digital Twin (DT)-based framework for the early detection of such application-level conflicts. The proposed framework is lightweight, modular, and designed to be seamlessly integrated into real SDN controllers. It includes multiple DT models capturing different network aspects, including end-to-end delay, link congestion, reliability, and carbon emissions. A case study in a smart factory scenario demonstrates the framework’s ability to identify conflicts arising from coexisting applications with heterogeneous goals. The solution is validated through both simulation and proof-of-concept implementation tested in an emulated environment using Mininet. The performance evaluation shows that three out of four DT models achieve a precision above 90%, while the minimum recall across all models exceeds 84%. Moreover, the proof of concept confirms that what-if analyses can be executed in a few milliseconds, enabling timely and proactive conflict detection. These results demonstrate that the framework can accurately detect conflicts and deliver feedback fast enough to support timely network adaptation.
IEEE Transactions on Network and Service Management, vol. 23, pp. 2038-2050. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11345480
Toward Context-Aware Anomaly Detection for AIOps in Microservices Using Dynamic Knowledge Graphs
Pieter Moens; Bram Steenwinckel; Femke Ongenae; Bruno Volckaert; Sofie Van Hoecke
Pub Date: 2026-01-12 | DOI: 10.1109/TNSM.2026.3652304
Microservice applications are omnipresent due to their advantages, such as scalability, flexibility, and, consequently, resource cost efficiency. The loosely coupled microservices can be easily added, replicated, updated, and/or removed to address the changing workload. However, the distributed and dynamic nature of microservice architectures introduces complexity with regard to monitoring and observability, which are paramount to ensuring reliability, especially in critical domains. Anomaly detection has become an important tool to automate microservice monitoring and detect system failures. Nevertheless, state-of-the-art solutions assume the topology of the monitored application to remain static over time and fail to account for the dynamic changes that the application, and the infrastructure it is deployed on, undergo. This paper tackles these shortcomings by introducing a context-aware anomaly detection methodology using dynamic knowledge graphs to capture contextual features which describe the evolving state of the monitored system. Our methodology leverages resource and network monitoring to capture dependencies between microservices and the infrastructure they are running on. In addition to the methodology for anomaly detection, this paper presents an open-source benchmark framework for context-aware anomaly detection that includes monitoring, fault injection, and data collection. The evaluation on this benchmark shows that our methodology consistently outperforms the non-contextual baselines. These results underscore the importance of contextual awareness for robust anomaly detection in complex, topology-driven systems. Beyond these improvements, our benchmark establishes a reproducible and extensible foundation for future research, facilitating experimentation with a broader range of models and continued advancement in context-aware anomaly detection.
IEEE Transactions on Network and Service Management, vol. 23, pp. 1970-1988. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11341916
TopoKG: Infer Internet AS-Level Topology From Global Perspective
Jian Ye; Lisi Mo; Gaolei Fei; Yunpeng Zhou; Ming Xian; Xuemeng Zhai; Guangmin Hu; Ming Liang
Pub Date: 2026-01-12 | DOI: 10.1109/TNSM.2026.3652956
The Internet Autonomous System (AS)-level topology comprises the AS topology structure and AS business relationships; it captures the essence of Internet inter-domain routing and is the basis for Internet operation and management research. Although the latest topology inference methods have made significant progress, those relying solely on local information struggle to eliminate inference errors caused by observation bias and data noise due to their lack of a global perspective. In contrast, we not only leverage local AS link features but also re-examine the hierarchical structure of the Internet AS-level topology, proposing a novel inference method called TopoKG. TopoKG introduces a knowledge graph to represent the relationships between different elements on a global scale and the business routing strategies of ASes at various tiers, which effectively reduces inference errors resulting from observation bias and data noise by incorporating a global perspective. First, we construct an Internet AS-level topology knowledge graph to represent relevant data, enabling us to better leverage the global perspective and uncover the complex relationships among multiple elements. Next, we employ knowledge graph meta-paths to measure the similarity of AS business routing strategies and introduce this global-perspective constraint to iteratively infer AS business relationships and the hierarchical structure. Additionally, we embed the entire knowledge graph upon completing the iteration and conduct knowledge inference to derive AS business relationships. This approach captures global features and more intricate relational patterns within the knowledge graph, further enhancing the accuracy of AS-level topology inference. Compared to state-of-the-art methods, our approach achieves more accurate AS-level topology inference, reducing the average inference error across various AS link types by a factor of 1.2 to 4.4.
IEEE Transactions on Network and Service Management, vol. 23, pp. 2006-2023.
TrafficAudio: Audio Representation for Lightweight Encrypted Traffic Classification in IoT
Yilu Chen; Ye Wang; Ruonan Li; Yujia Xiao; Lichen Liu; Jinlong Li; Yan Jia; Zhaoquan Gu
Pub Date: 2026-01-06 | DOI: 10.1109/TNSM.2026.3651599
Encrypted traffic classification has become a crucial task for network management and security with the widespread adoption of encrypted protocols across the Internet and the Internet of Things. However, existing methods often rely on discrete representations and complex models, which leads to incomplete feature extraction, limited fine-grained classification accuracy, and high computational costs. To this end, we propose TrafficAudio, a novel encrypted traffic classification method based on audio representation. TrafficAudio comprises three modules: audio representation generation (ARG), audio feature extraction (AFE), and spatiotemporal traffic classification (STC). Specifically, the ARG module first represents raw network traffic as audio to preserve the temporal continuity of traffic. Then, the audio is processed by the AFE module to compute low-dimensional Mel-frequency cepstral coefficients (MFCC), encoding both temporal and spectral characteristics. Finally, spatiotemporal features are extracted from MFCC through a parallel architecture of one-dimensional convolutional neural network and bidirectional gated recurrent unit layers, enabling fine-grained traffic classification. Experiments on five public datasets across six classification tasks demonstrate that TrafficAudio consistently outperforms ten state-of-the-art baselines, achieving accuracies of 99.74%, 98.40%, 99.76%, 99.25%, 99.77%, and 99.74%. Furthermore, TrafficAudio significantly reduces computational complexity, achieving reductions of 86.88% in floating-point operations and 43.15% in model parameters over the best-performing baseline.
IEEE Transactions on Network and Service Management, vol. 23, pp. 2077-2091.
Enhancing the Delegated Proof of Stake Consensus Mechanism for Secure and Efficient Data Storage in the Industrial Internet of Things
Wencheng Chen; Jun Wang; Jeng-Shyang Pan; R. Simon Sherratt; Jin Wang
Pub Date: 2026-01-05 | DOI: 10.1109/TNSM.2025.3650612
The rapid advancement of Industry 5.0 has accelerated the adoption of the Industrial Internet of Things (IIoT). However, challenges such as data privacy breaches, malicious attacks, and the absence of trustworthy mechanisms continue to hinder its secure and efficient operation. To overcome these issues, this paper proposes an enhanced blockchain-based data storage framework and systematically improves the Delegated Proof of Stake (DPoS) consensus mechanism. A four-party evolutionary game model is developed, involving agent nodes, voting nodes, malicious nodes, and supervisory nodes, to comprehensively analyze the dynamic effects of key factors, including bribery intensity, malicious costs, supervision, and reputation mechanisms, on system stability. Furthermore, novel incentive and punishment strategies are introduced to foster node collaboration and suppress malicious behaviors. The simulation results show that the improved DPoS mechanism achieves significant enhancements across multiple performance dimensions. Under high-load conditions, the system increases transaction throughput by approximately 5%, reduces consensus latency, and maintains stable operation even as the network scale expands. In adversarial scenarios, the double-spending attack success rate decreases to about 2.6%, indicating strengthened security resilience. In addition, the convergence of strategy evolution is notably accelerated, enabling the system to reach cooperative and stable states more efficiently. These results demonstrate that the proposed mechanism effectively improves the efficiency, security, and dynamic stability of IIoT data storage systems, providing strong support for reliable operation in complex industrial environments.
IEEE Transactions on Network and Service Management, vol. 23, pp. 1842-1862.