EAPT: An encrypted traffic classification model via adversarial pre-trained transformers
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.110973
Mingming Zhan, Jin Yang, Dongqing Jia, Geyuan Fu
Encrypted traffic classification plays a critical role in network traffic management and optimization, as it helps identify and differentiate between various types of traffic, thereby enhancing the quality and efficiency of network services. However, with the continuous evolution of traffic encryption and network applications, a large and diverse volume of encrypted traffic has emerged, making it difficult for traditional feature-extraction-based methods to identify encrypted traffic effectively. This paper introduces EAPT, an encrypted traffic classification model based on adversarial pre-trained transformers. The model uses SentencePiece to tokenize encrypted traffic data, addressing the problem of coarse tokenization granularity and ensuring that the tokenization results more accurately reflect the characteristics of the encrypted traffic. During pre-training, EAPT employs a disentangled attention mechanism and incorporates Replaced BURST Detection, a pre-training task similar in spirit to generative adversarial networks. This approach not only enhances the model's ability to understand contextual information but also accelerates pre-training. It also keeps the number of model parameters small, improving the model's generalization capability. Experimental results show that EAPT can efficiently learn traffic features from small-scale unlabeled datasets and achieves excellent performance across multiple datasets with a relatively small number of model parameters.
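Since the abstract highlights SentencePiece for finer-grained tokenization of traffic data, here is a minimal sketch of that step; the corpus file, vocabulary size, and the hex rendering of packet payloads are our illustrative assumptions, not the authors' setup.

```python
# Sketch: subword tokenization of encrypted-traffic payloads with SentencePiece.
import sentencepiece as spm

# Assume each line of traffic.txt is one flow's payload rendered as a hex string
# (hypothetical corpus; EAPT's actual preprocessing may differ).
spm.SentencePieceTrainer.train(
    input="traffic.txt",       # hypothetical corpus of hex-encoded payloads
    model_prefix="traffic_bpe",
    vocab_size=8000,           # illustrative vocabulary size
    model_type="bpe",
)

sp = spm.SentencePieceProcessor(model_file="traffic_bpe.model")
payload = "1603010200010001fc0303"  # a TLS record prefix, hex-encoded
tokens = sp.encode(payload, out_type=str)
print(tokens)  # learned subword units, finer than fixed-size byte chunks
```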
{"title":"EAPT: An encrypted traffic classification model via adversarial pre-trained transformers","authors":"Mingming Zhan , Jin Yang , Dongqing Jia , Geyuan Fu","doi":"10.1016/j.comnet.2024.110973","DOIUrl":"10.1016/j.comnet.2024.110973","url":null,"abstract":"<div><div>Encrypted traffic classification plays a critical role in network traffic management and optimization, as it helps identify and differentiate between various types of traffic, thereby enhancing the quality and efficiency of network services. However, with the continuous evolution of traffic encryption and network applications, a large and diverse volume of encrypted traffic has emerged, presenting challenges for traditional feature extraction-based methods in identifying encrypted traffic effectively. This paper introduces an encrypted traffic classification model via adversarial pre-trained transformers-EAPT. The model utilizes the SentencePiece to tokenize encrypted traffic data, effectively addressing the issue of coarse tokenization granularity, thereby ensuring that the tokenization results more accurately reflect the characteristics of the encrypted traffic. During the pre-training phase, the EAPT employs a disentangled attention mechanism and incorporates a pre-training task similar to generative adversarial networks called Replaced BURST Detection. This approach not only enhances the model’s ability to understand contextual information but also accelerates the pre-training process. Additionally, this method minimizes model parameters, thus improving the model’s generalization capability. Experimental results show that EAPT can efficiently learn traffic features from small-scale unlabeled datasets and demonstrate excellent performance across multiple datasets with a relatively small number of model parameters.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 110973"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ScaIR: Scalable Intelligent Routing based on Distributed Graph Reinforcement Learning
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.110915
Jing Zhang, Jianfeng Guan, Kexian Liu, Yizhong Hu, Ao Shen, Yuyin Ma
Traditional routing typically relies on simple performance metrics that can be derived directly through mathematical methods, which often limits optimization outcomes. As future networks expand, along with the diversity of applications and the growth of traffic volume, the network environment grows increasingly complex. In contrast, Intelligent Routing (IR), which leverages machine learning, can model more complex performance metrics, making it better suited to the intricate scenarios of future networks. The increasing complexity of networks also means that the workload of collecting routing information and executing decision calculations grows exponentially. Compared to centralized IR, Distributed IR (DIR) spreads the computational load and interaction demands across multiple nodes, offering better scalability. However, DIR makes decisions based on local information, which limits global optimization. In this paper, we propose ScaIR, a novel scalable intelligent routing method based on distributed graph reinforcement learning. ScaIR is a fully distributed multi-agent routing method in which each router is an independent agent based on local graph Reinforcement Learning (RL). Graph Neural Networks (GNNs) are employed to extract global network characteristics that serve as input for RL, thereby improving global optimization. Notably, the GNN is also fully distributed: each router holds an independent sub-GNN determined by the adjacency relationships with its one-hop neighbors. Instead of exchanging entire network information and model parameters, each sub-GNN iteratively interacts only with its neighbors and computes a highly compressed Feature Vector (FV) representing the current network state, which greatly reduces computing and communication costs. We carried out extensive simulation experiments on multiple real network topologies of different scales. Simulation results show that ScaIR reduces forwarding time by more than 25% and converges faster. It adapts better to congested, dynamic, or unknown environments. Compared to other methods, it significantly reduces communication cost and computation time, and offers better scalability. In addition, by varying the FV length of the sub-GNNs, we verify that the GNN plays a key role in ScaIR.
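A toy sketch of the kind of one-hop message passing the abstract describes: each router keeps a sub-GNN and exchanges only a short feature vector with its neighbors. The dimensions, weights, and mean aggregator below are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def local_fv_update(own_state, neighbor_fvs, W_self, W_nbr, fv_len=8):
    """One sub-GNN iteration at a router: compress local state plus the FVs
    received from one-hop neighbors into a new fixed-length FV (only this
    short vector is ever transmitted, not raw state or model parameters)."""
    agg = np.mean(neighbor_fvs, axis=0) if neighbor_fvs else np.zeros(fv_len)
    return np.tanh(W_self @ own_state + W_nbr @ agg)

rng = np.random.default_rng(0)
state_dim, fv_len = 16, 8                       # illustrative sizes
W_self = rng.normal(size=(fv_len, state_dim)) * 0.1
W_nbr = rng.normal(size=(fv_len, fv_len)) * 0.1

own_state = rng.normal(size=state_dim)          # e.g. local queue/link statistics
neighbor_fvs = [rng.normal(size=fv_len) for _ in range(3)]  # from 1-hop peers
fv = local_fv_update(own_state, neighbor_fvs, W_self, W_nbr)
print(fv.shape)  # (8,): the compressed vector broadcast to neighbors next round
```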
{"title":"ScaIR: Scalable Intelligent Routing based on Distributed Graph Reinforcement Learning","authors":"Jing Zhang , Jianfeng Guan , Kexian Liu , Yizhong Hu , Ao Shen , Yuyin Ma","doi":"10.1016/j.comnet.2024.110915","DOIUrl":"10.1016/j.comnet.2024.110915","url":null,"abstract":"<div><div>Traditional routing typically relies on simpler performance metrics that can be derived directly through mathematical methods for decision-making, which often results in limited optimization outcomes. As future networks expand, along with the diversity of applications and traffic volume, the network environment grows increasingly complex. In contrast, Intelligent Routing (IR) that leverages machine learning methods can model more complex performance metrics, rendering it better suited to the intricate scenarios of future networks. The increasing complexity of networks also indicates that the workload associated with collecting routing information and executing decision calculations is growing exponentially. Compared to centralized IR, Distributed IR (DIR) distributes the computational load and interaction demands across multiple nodes, thereby offering enhanced scalability. However, DIR makes decisions based on local information, which limits global optimization. In this paper, we propose a novel Scalable Intelligent Routing based on Distributed Graph Reinforcement Learning, called ScaIR. ScaIR is a full y distributed multi-agent routing method. Each router is an independent agent based on local graph Reinforcement Learning (RL). Graph Neural Networks (GNN) are employed to extract global network characteristics which serve as input for RL, thereby enhancing global optimization. Especially, GNN here is also fully distributed. Each router has an independent sub-GNN determined by the adjacency relationships with its one-hop neighbors. Instead of entire network information and model parameters, each sub-GNN only iteratively interacts with its neighbors and computes a highly compressed Feature Vector (FV) representing the current network state, which greatly saves the computing and communication cost. We carried out extensive simulation experiments under multiple real network topologies of different scales. Simulation results show that ScaIR reduces forwarding time by more than 25% and achieves faster convergence. It can better adapt to congested, dynamic or unknown environments. Compared to other methods, it significantly reduces communication cost and computational time, and has better scalability. In addition, by changing the FV length of sub-GNNs, it is verified that GNN does play a key role in ScaIR.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 110915"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Minimizing unavailability in Elastic Optical Networks: Pre-provisioning and provisioning protection strategy using DLP and DPP
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.110928
Paulo J.S. Júnior, Lucas R. Costa, André C. Drummond, Marcelo A. Marotta
The rapid growth of Internet traffic, driven by advancements in 5G and cloud services, has intensified the need for reliable optical networks. Infrastructure failures in these networks can lead to significant disruptions and financial losses, highlighting the critical importance of robust protection mechanisms. While existing research has explored various protection strategies based on on-demand provisioning of resources, a gap remains in understanding the effectiveness of pre-provisioning mechanisms specifically tailored for Elastic Optical Networks (EONs). Pre-provisioning reserves network resources, such as bandwidth or lightpaths, ahead of anticipated traffic demands. This proactive approach ensures that resources are readily available when needed, reducing setup times and enhancing network resilience. In this paper, we propose a novel algorithm, BPreProvEON, designed to optimize resource allocation and enhance the survivability of EONs through a hybrid combination of pre-provisioned and on-demand protection schemes. Extensive comparisons with traditional non-pre-provisioning and Shared Path Protection (SPP)-based pre-provisioning demonstrate BPreProvEON's superior performance in bandwidth blocking rate and connection availability, highlighting its potential for enhancing the resilience and performance of optical networks.
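As a simplified sketch of the hybrid idea, not the paper's algorithm: a request first tries pre-provisioned protection capacity and falls back to on-demand provisioning only if none fits. The pool structure, slot counts, and node names are our assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ProtectionPool:
    """Pre-reserved protection capacity, keyed by (src, dst)."""
    reserved: dict = field(default_factory=dict)  # (src, dst) -> free slot count

    def allocate(self, src, dst, slots_needed, on_demand_free):
        pool = self.reserved.get((src, dst), 0)
        if pool >= slots_needed:              # fast path: already reserved
            self.reserved[(src, dst)] = pool - slots_needed
            return "pre-provisioned"
        if on_demand_free >= slots_needed:    # slow path: provision on demand
            return "on-demand"
        return "blocked"                      # counts toward bandwidth blocking rate

pool = ProtectionPool({("A", "B"): 4})
print(pool.allocate("A", "B", 3, on_demand_free=10))  # -> pre-provisioned
print(pool.allocate("A", "B", 3, on_demand_free=10))  # -> on-demand (pool exhausted)
```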
{"title":"Minimizing unavailability in Elastic Optical Networks: Pre-provisioning and provisioning protection strategy using DLP and DPP","authors":"Paulo J.S. Júnior, Lucas R. Costa, André C. Drummond, Marcelo A. Marotta","doi":"10.1016/j.comnet.2024.110928","DOIUrl":"10.1016/j.comnet.2024.110928","url":null,"abstract":"<div><div>The rapid growth of internet traffic, driven by advancements in 5G and cloud services, has intensified the need for reliable optical networks. Infrastructure failures in these networks can lead to significant disruptions and financial losses, highlighting the critical importance of robust protection mechanisms. While existing research has explored various protection strategies based on on-demand provisioning of resources, a gap remains in understanding the effectiveness of pre-provisioning mechanisms specifically tailored for Elastic Optical Networks (EONs). Pre-provisioning is a strategy that involves reserving network resources, such as bandwidth or lightpaths, in advance of anticipated traffic demands. This proactive approach ensures that resources are readily available when needed, reducing setup times and enhancing network resilience. In this paper we propose a novel algorithm, BPreProvEON, designed to optimize resource allocation and enhance the survivability of EONs through a hybrid combination of pre-provisioning and on-demand provisioning of protection schemes. Extensive comparisons with traditional non-pre-provisioning and Shared Path Protection (SPP)-based pre-provisioning demonstrate BPreProvEON’s superior performance in bandwidth blocking rate and connection availability, highlighting its potential for enhancing the resilience and performance of optical networks.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 110928"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine learning based dynamic resource sharing and frequency reuse in 5G hetnets with dronecells
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2025.111046
Mert Yağcıoğlu
As 5G mobile communication systems evolve to meet growing demands for network capacity and coverage, innovative solutions are required to address challenges such as interference and spectrum efficiency. Unmanned aerial vehicles (UAVs), which have made significant progress in recent years, can provide temporary cellular network coverage, offering great advantages in industries such as telecommunications, public safety, and disaster recovery. Instead of using traditional base stations, UAVs acting as flying base stations, or Dronecells, can reduce interference and costs. Drones are strategically positioned at the centers of user clusters determined by the widely adopted k-means clustering algorithm, an unsupervised machine learning technique. Additionally, we use the TOPSIS method to determine users' priorities in resource allocation. The main challenge in this work lies in determining the optimal locations and the appropriate number of Dronecells. The article introduces a Benefit-Based Resource Allocation Algorithm (BRSA) designed for dynamic resource sharing in dense, heterogeneous urban networks with Dronecells. The algorithm aims to enhance spectrum efficiency, optimize user fairness, and minimize intercell interference. The number of Dronecells varies with user density, allowing adaptability to different scenarios. A further objective is to identify optimal cell-center and cell-edge areas using Reference Signal Received Power (RSRP) thresholds, maximizing throughput for both cell-center and cell-edge users. Extensive simulations show that the proposed BRSA method significantly improves performance, increasing average cell-edge user throughput by up to 25% while also enhancing fairness across the entire cell.
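The clustering step the abstract describes is straightforward to sketch: place each Dronecell at the centroid of a user cluster found by k-means. The user coordinates and the value of k below are illustrative; the paper additionally adapts the number of drones to user density and applies TOPSIS for user priorities, which this sketch omits.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
users = rng.uniform(0, 1000, size=(200, 2))  # synthetic user (x, y) positions, meters

k = 5  # number of Dronecells (illustrative; the paper varies this with density)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(users)
drone_positions = km.cluster_centers_        # hover points for the k drones
print(drone_positions)
```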
{"title":"Machine learning based dynamic resource sharing and frequency reuse in 5G hetnets with dronecells","authors":"Mert Yağcıoğlu","doi":"10.1016/j.comnet.2025.111046","DOIUrl":"10.1016/j.comnet.2025.111046","url":null,"abstract":"<div><div>As 5G mobile communication systems evolve to the growing demands for network capacity and coverage, innovative solutions are required to address challenges such as interference and spectrum efficiency. The provision of temporary cellular network coverage by unmanned aerial vehicles (UAVs), which have made significant progress in recent years, provides great advantages in industries like telecommunications, public safety, and disaster recovery. Instead of using traditional base stations, UAVs, which we call flying base stations or Dronecells, can reduce interference and costs. Drones are strategically positioned at the center of user clusters, determined using the widely adopted k-means clustering algorithm, an unsupervised machine learning technique. Additionally, we use the TOPSIS method to ascertain users' priorities in resource allocation. The main challenge in this work lies in determining the optimal location and the appropriate number for the Dronecells. The article introduces a Benefit-Based Resource Allocation Algorithm (BRSA), designed for dynamic resource sharing in dense heterogeneous urban networks with Dronecells. This algorithm aims to enhance spectrum efficiency, optimize user fairness and minimize intercell interference. The number of Dronecells varies based on user density, allowing adaptability to different scenarios. Another objective is to identify the optimal cell center and cell edge areas by utilizing Reference Signal Received Power (RSRP) threshold values to maximize throughput for both cell center and cell edge users. Extensive simulations show that the proposed BRSA method significantly improves performance, increasing average cell edge user throughput by up to 25% while also enhancing fairness across the entire cell.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111046"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143176532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Malicious vehicle detection scheme based on UAV and vehicle cooperative authentication in vehicular networks
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2025.111037
Wenming Wang, Zhiquan Liu, Lingyan Xue, Haiping Huang, Nageswara Rao Lavuri
The advancement of vehicular networks has significantly enhanced the capabilities of intelligent transportation systems. However, the transmission of malicious information (e.g., false emergency alerts or traffic congestion updates) by compromised vehicles poses a serious threat to emergency response operations, particularly in mountainous regions where signal quality is poor. Recent developments in integrating Unmanned Aerial Vehicles (UAVs) with vehicular networks offer new opportunities for enhancing network security in these challenging environments. In response, this paper proposes a UAV and vehicle cooperative authentication scheme for detecting malicious vehicles within vehicular networks. The proposed approach reduces overhead through cooperative authentication between vehicles, while UAVs monitor vehicle behavior in real time. Additionally, Trusted Centers of Authority (TCAs) oversee UAV activities, ensuring the integrity of the system. The TCA dynamically allocates suitable UAVs to tasks based on real-time availability, optimizing resource utilization. Furthermore, the proposed scheme introduces a hierarchical TCA structure, partitioned into root_TCA and sub_TCA, which mitigates the risk of single points of failure and improves resource efficiency. Comparative analysis demonstrates that the proposed scheme outperforms existing methods in computational and communication overhead.
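An illustrative sketch (our structure, not the paper's protocol) of the hierarchical TCA idea: a root_TCA delegates a monitoring task to the sub_TCA responsible for a region, which picks an available UAV; a failed sub_TCA affects only its own region.

```python
from dataclasses import dataclass

@dataclass
class UAV:
    uav_id: str
    available: bool

class SubTCA:
    def __init__(self, region, uavs):
        self.region, self.uavs = region, uavs

    def assign(self):
        for uav in self.uavs:
            if uav.available:
                uav.available = False   # reserve the UAV for this task
                return uav.uav_id
        return None                     # no capacity left in this region

class RootTCA:
    def __init__(self, sub_tcas):
        self.sub_tcas = {s.region: s for s in sub_tcas}

    def dispatch(self, region):
        # Only the sub_TCA for the target region is involved, so the root
        # never becomes a single point of failure for regional monitoring.
        sub = self.sub_tcas.get(region)
        return sub.assign() if sub else None

root = RootTCA([SubTCA("valley-7", [UAV("uav-1", True), UAV("uav-2", False)])])
print(root.dispatch("valley-7"))  # -> uav-1
```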
{"title":"Malicious vehicle detection scheme based on UAV and vehicle cooperative authentication in vehicular networks","authors":"Wenming Wang , Zhiquan Liu , Lingyan Xue , Haiping Huang , Nageswara Rao Lavuri","doi":"10.1016/j.comnet.2025.111037","DOIUrl":"10.1016/j.comnet.2025.111037","url":null,"abstract":"<div><div>The advancement of the vehicular networks has significantly enhanced the capabilities of intelligent transportation systems. However, the transmission of malicious information (<em>e.g.</em>, false emergency alerts or traffic congestion updates) by comprised vehicles poses a serious threat to emergency response operations, particularly in mountainous regions where signal quality is poor. Recent developments in integrating Unmanned Aerial Vehicles (UAVs) with vehicular networks have introduced new opportunities for enhancing network security, enhancing network security in these challenging environments. In response to these challenges, this paper proposes a UAV and vehicle cooperative authentication scheme for detecting malicious vehicles within vehicular networks. The proposed approach reduces overhead through cooperative authentication between vehicles, while UAVs monitor vehicle behavior in real-time. Additionally, Trusted Centers of Authority (TCAs) are employed to oversee UAV activities, ensuring the integrity of the system. The TCA dynamically allocates suitable UAVs for tasks based on real-time availability, optimizing resource utilization. Furthermore, the proposed scheme introduces a hierarchical TCAs structure, partitioning it into root_TCA and sub_TCA, which mitigates the risk of single points of failure and improves resource efficiency. Comparative analysis demonstrates that the proposed scheme offers superior performance in terms of computational and communication overhead compared to existing methods.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111037"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143176539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DRL-based latency-energy offloading optimization strategy in wireless VR networks with edge computing
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2025.111034
Jieru Wang, Hui Xia, Lijuan Xu, Rui Zhang, Kunkun Jia
The growth of data paths, and the resulting increase in latency, in Wireless Virtual Reality (WVR) can significantly degrade user experience. Mobile Edge Computing has emerged as an effective solution to these issues. However, offloading methods based on Deep Reinforcement Learning (DRL) face hurdles such as limited environmental exploration and prolonged user waiting times. To address these challenges in WVR edge computing, where computational offloading involves multiple devices and edge servers, we aim to minimize system latency and reduce energy consumption. We therefore introduce the Task Prediction and Multi-objective Optimization Algorithm (TPMOA). First, we reduce the time users wait for rendering results by predicting their viewpoints. Next, we apply an entropy-innovated DRL algorithm in the latent space for computation offloading. Through representation learning, we establish a reward function that includes latent objectives and optimizes the experience replay buffer. This approach allows us to train and select the optimal offloading strategy, reducing rendering latency and system energy consumption. Our experiments show that our approach effectively tackles the challenges of limited environmental exploration and extended user waiting times. Specifically, our method significantly outperforms the RNN-based AC method, reducing latency by 11.39%.
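As a toy illustration of the multi-objective flavor described above, the sketch below combines latency and energy penalties with an entropy bonus that encourages exploration; the weights and the entropy term are our assumptions and stand in for the paper's latent-space reward design.

```python
import numpy as np

def offload_reward(latency_s, energy_j, action_probs,
                   w_lat=0.6, w_energy=0.4, w_entropy=0.01):
    """Negative weighted cost of latency and energy, plus an entropy bonus
    over the offloading policy to keep exploration alive (illustrative)."""
    entropy = -np.sum(action_probs * np.log(action_probs + 1e-8))
    return -(w_lat * latency_s + w_energy * energy_j) + w_entropy * entropy

# e.g. probabilities over {render locally, edge server 1, edge server 2}
probs = np.array([0.7, 0.2, 0.1])
print(offload_reward(latency_s=0.12, energy_j=0.8, action_probs=probs))
```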
{"title":"DRL-based latency-energy offloading optimization strategy in wireless VR networks with edge computing","authors":"Jieru Wang , Hui Xia , Lijuan Xu , Rui Zhang , Kunkun Jia","doi":"10.1016/j.comnet.2025.111034","DOIUrl":"10.1016/j.comnet.2025.111034","url":null,"abstract":"<div><div>The increase in data paths and the resulting latency growth in Wireless Virtual Reality (WVR) can significantly affect user experience. Mobile Edge Computing emerges as an effective solution to address these issues. However, offloading methods based on Deep Reinforcement Learning (DRL) face hurdles like limited environmental exploration and prolonged user waiting time. To address the mentioned challenges in WVR edge computing, where computational offloading involves multiple devices and edge servers, we aim to minimize system latency and reduce energy consumption. Therefore, we introduce the Task Prediction and Multi-objective Optimization Algorithm (TPMOA). First, we reduce the time users wait for rendering results by predicting their viewpoints. Next, we apply an entropy-innovated DRL algorithm to the latent space for computation offloading. Through representation learning, we establish a reward function that includes latent objectives and optimizes the experience replay buffer. This approach allows us to train and select the optimal offloading strategy, thereby reducing rendering latency and system energy consumption. Our experiments show that our approach effectively tackles the challenges of limited environmental exploration ability and extended user waiting time. Specifically, our method outperforms the RNN-based AC method significantly, reducing latency by 11.39%.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111034"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143177368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Federated deep reinforcement learning-based cost-efficient proactive video caching in energy-constrained mobile edge networks
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2025.111062
Zhen Qian, Guanghui Li, Tao Qi, Chenglong Dai
As 5G technology and mobile smart devices evolve rapidly, federated learning-based edge video caching has become a key technology for mitigating the explosive growth of traffic. However, because edge mobile devices are energy-limited, it is unrealistic to demand the maximum computational power of all smart devices in every round of federated learning communication. Moreover, users' implicit feedback behavior makes predicting popular content challenging. To tackle these challenges, we propose a Federated deep Reinforcement learning-based Proactive Video Caching scheme (FRPVC), which improves the cache hit rate while preserving user privacy and security, and minimizes the total system cost in energy-constrained mobile edge computing networks. FRPVC uses users' local implicit feedback data to train denoising auto-encoder models via federated learning. We further formulate the user computational resource allocation problem as a Markov Decision Process (MDP) to minimize the expected long-term system cost, and propose a DDQN-based resource allocation method to solve for the optimal allocation policy, which efficiently allocates the computational resources of each federated training client to minimize the total cost of the federated learning process. Experiments on three real-world datasets show that the proposed scheme outperforms the baseline algorithms in cache hit rate and approaches the optimal algorithm. The experiments also show that FRPVC effectively reduces system cost under local resource constraints.
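The abstract names a DDQN-based method for solving the MDP above. The following sketch shows only the Double-DQN target computation that distinguishes DDQN from vanilla DQN: the online network selects the next action while the target network evaluates it, which reduces Q-value overestimation. The network shapes and batch are illustrative assumptions.

```python
import torch
import torch.nn as nn

state_dim, n_actions = 8, 4               # illustrative sizes
online_net = nn.Linear(state_dim, n_actions)
target_net = nn.Linear(state_dim, n_actions)

def ddqn_target(reward, next_state, done, gamma=0.99):
    """Double DQN target: action chosen by the online net, value taken
    from the target net for that action."""
    with torch.no_grad():
        best = online_net(next_state).argmax(dim=1, keepdim=True)
        next_q = target_net(next_state).gather(1, best).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q

batch = 32
y = ddqn_target(torch.zeros(batch), torch.randn(batch, state_dim), torch.zeros(batch))
print(y.shape)  # torch.Size([32])
```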
{"title":"Federated deep reinforcement learning-based cost-efficient proactive video caching in energy-constrained mobile edge networks","authors":"Zhen Qian, Guanghui Li, Tao Qi, Chenglong Dai","doi":"10.1016/j.comnet.2025.111062","DOIUrl":"10.1016/j.comnet.2025.111062","url":null,"abstract":"<div><div>As the 5G technology and mobile smart devices evolve rapidly, the federated learning-based edge video caching has become a key technology to mitigate the explosive growth of traffic. However, due to energy-limited edge mobile devices, it is unrealistic to keep the maximum computational power of all smart devices in each round of communication in federated learning. Moreover, users’ implicit feedback behavior poses challenges to predicting popular content. To tackle these challenges, we propose a Federated deep Reinforcement learning-based Proactive Video Caching scheme (FRPVC), which not only improves the cache hit rate while addressing user privacy and security, but also minimizes the total system cost in energy-constrained mobile edge computing networks. FRPVC utilizes the user’s local implicit feedback data for training denoised auto-encoder models based on federated learning. We further formulate the user computational resource allocation problem as a Markov Decision Process (MDP) to minimize the expected long-term system cost and propose a DDQN-based resource allocation method to solve the optimal resource allocation policy, which can efficiently allocate the computational resources of each federated training client to minimize the total cost of the federated learning process. By validating under three real datasets, the experiments show that the proposed scheme outperforms the baseline algorithm in terms of cache hit rate and is close to the optimal algorithm. In addition, the experiments also show that FRPVC is able to effectively reduce the system cost under local resource constraints.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111062"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143177531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient encrypted search with owner-level and attribute-level access controls
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2025.111039
Yang Yang, Jiaxing Zhang, Yanjiao Chen, Qing Huang, Fei Chen, Jing Chen
With growing attention to the security of cloud storage, people are increasingly inclined to store their data remotely in encrypted form. Accordingly, many searchable encryption (SE) schemes have been proposed. Unfortunately, existing SE schemes under the multi-user, multi-owner model are usually inefficient in owner-level and attribute-level access control. This paper therefore aims to further improve the efficiency of encrypted search with two-level access control. In our design, the owner-level permission is inspired by the BLS signature and the attribute-level permission is based on CP-ABE; both are designed to simplify the related keys as much as possible, allowing users to efficiently search data from multiple owners with a single trapdoor. The proposed scheme also efficiently supports permission revocation, because each user's simplified key can be updated independently without affecting other users. For security, we apply random masking to the encrypted index and the search trapdoor to hide the keywords embedded in them, while preserving the matching relationship required for keyword search. The proposed scheme is strictly proven to provide keyword secrecy, keyword irreplaceability, two-level controlled search, and forward secrecy. Finally, we give extensive theoretical analysis and experimental results to validate the superiority of the proposed scheme.
{"title":"An efficient encrypted search with owner-level and attribute-level access controls","authors":"Yang Yang , Jiaxing Zhang , Yanjiao Chen , Qing Huang , Fei Chen , Jing Chen","doi":"10.1016/j.comnet.2025.111039","DOIUrl":"10.1016/j.comnet.2025.111039","url":null,"abstract":"<div><div>With more and more attention to the security of cloud storage, people are increasingly inclined to remotely store their encrypted data. In what follows, many searchable encryption (SE) schemes have been proposed. Unfortunately, the existing SE schemes under multi-user and multi-owner model are usually inefficient in owner-level and attribute-level access controls. Therefore, this paper aims to further improve the efficiency of the encrypted search with two-level access control. In our design, the owner-level permission is inspired by BLS signature and the attribute-level permission is based on CP-ABE, both of which are driven by simplifying the related keys as much as possible, allowing users to efficiently search data from multiple owners by using a single trapdoor. In addition, the proposed scheme can also efficiently support permission revocation, thanks to our simplified key for a user which can be independently updated without affecting other users. For the sake of security, we perform the random masking technique on encrypted index and searching trapdoor for hiding the keyword embedded in them, while keeping their matching relationship for keyword search. The proposed scheme is strictly proven to have the security properties of keyword secrecy, keyword irreplaceability, two-level controlled search and forward secrecy. Finally, we give plenty of theoretical analysis and experimental results to validate the superiority of the proposed scheme.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111039"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143177532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sample analysis and multi-label classification for malicious sample datasets
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.110999
Jiang Xie, Shuhao Li, Xiaochun Yun, Chengxiang Si, Tao Yin
Network attacks pose serious threats to cybersecurity. Researchers provide well-known malicious sample datasets for evaluating methods to detect these attacks. However, we discover that these datasets exhibit a multi-label phenomenon, where a single sample has multiple labels. Multi-label problems are ubiquitous; in malware detection, for example, different engines may assign different labels to the same unknown software. But the multi-label phenomenon in computer network datasets differs from the traditional multi-label problem: these datasets are by default single-labeled, and are annotated, published, and used to evaluate various single-label detection methods. Researchers ignore the possibility that the samples within these datasets may be multi-labeled. It is therefore inappropriate to use these data directly for evaluating single-label detection methods.
In this paper, we focus on well-known malicious traffic and malware datasets with a comprehensive study covering sample analysis and multi-label classification: (1) We perform comprehensive statistics on 15 datasets, quantifying the proportion of multi-label samples and the number of affected categories, and analyze the intrinsic connections between attacks. (2) We employ multiple classical multi-label algorithms to classify the multi-label samples in 9 datasets; the experimental results show that they outperform the single-label state-of-the-art (SOTA) method, improving accuracy and F1 by 39.6% and 57.69% on average.
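A small sketch of one classical multi-label baseline of the kind such studies compare, binary relevance via one-vs-rest logistic regression; the synthetic features and labels stand in for traffic samples that carry several attack labels at once, and are not the paper's data.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

# Synthetic multi-label data: each row can belong to several of 6 classes.
X, Y = make_multilabel_classification(n_samples=2000, n_features=40,
                                      n_classes=6, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

# Binary relevance: one independent binary classifier per label.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)
pred = clf.predict(X_te)
print("micro-F1:", f1_score(Y_te, pred, average="micro"))
```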
We conclude that the multi-label phenomenon is ubiquitous in malicious traffic and malware datasets, and it should be considered in network attack detection.
{"title":"Sample analysis and multi-label classification for malicious sample datasets","authors":"Jiang Xie , Shuhao Li , Xiaochun Yun , Chengxiang Si , Tao Yin","doi":"10.1016/j.comnet.2024.110999","DOIUrl":"10.1016/j.comnet.2024.110999","url":null,"abstract":"<div><div>Network attacks pose serious threats to cybersecurity. Researchers provide well-known malicious sample datasets for evaluating methods to detect these attacks. However, we discover that these datasets exhibit a multi-label phenomenon, where a sample has multiple labels. Multi-label problems are ubiquitous, such as in malware detection, where different engines could assign different labels to the same unknown software. But multi-label phenomenon in computer network datasets is different from the traditional multi-label problem. These datasets, which are by default single-labeled, annotated, published, and utilized to evaluate various single-label detection methods. Researchers ignore the possibility that the samples within the datasets may be multi-labeled. Therefore, it is inappropriate to directly utilize these data for evaluating single-label detection methods.</div><div>In this paper, we focus on well-known malicious traffic and malware datasets with a comprehensive study, including sample analysis and multi-label classification: (1) We perform comprehensive statistics on 15 datasets, quantify the proportion of multi-label samples and the number of categories affected in them, and analyze the intrinsic connections between attacks. (2) We employ multiple classical multi-label algorithms to classify the multi-label samples in 9 datasets, and the experimental results show that they are superior to the single-label state-of-the-art (SOTA) method, and can improve accuracy and F1 by 39.6% and 57.69% on average.</div><div>We conclude that the multi-label phenomenon is ubiquitous in malicious traffic and malware datasets, and it should be considered in network attack detection.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 110999"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143178146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A cost-provable solution for reliable in-network computing-enabled services deployment
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.110997
Xiaorong Liu, Danyang Zheng, Huanlai Xing, Li Feng, Chengzong Peng, Xiaojun Cao
Recently, the in-network computing (INC) technique has been widely adopted by various applications, including reliability-sensitive ones such as remote surgery and autonomous vehicle systems. To deploy reliable INC-enabled services, redundant task replicas are hosted on network devices to meet a specified service reliability threshold such as 99.9%, 99.99%, or 99.999%. Most existing works assume that this threshold is affected solely by software reliability, neglecting hardware reliability. This omission can lead to unexpected service interruptions when software replicas are co-deployed on a single unreliable hardware unit. This work jointly considers the heterogeneous reliability of both software and hardware and identifies a novel phenomenon we call "Software-Reliability-Only Experience Degradation" (SRO-ED). To address it, we mathematically formulate the INC-enabled services adoption with heterogeneous reliability (ISAHR) problem to optimize service costs and prove its NP-hardness. We introduce an effective Cost-Reliability (CR) measure that captures the average cost needed to satisfy each reliability unit while accounting for both software and hardware reliability. We then propose an innovative algorithm, CR measure-based INC services deployment (CR-D), which we prove to be logarithmically approximate for cost optimization. Extensive simulation results validate the logarithmic approximation of CR-D and show that it outperforms the benchmarks by an average of 29.42% and 35.77% in cost optimization.
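A hedged sketch of the greedy, set-cover-style selection that a cost-reliability measure suggests: repeatedly pick the candidate replica placement with the lowest cost per unit of reliability gained, until the service threshold is met. Failure independence and the candidate list are simplifying assumptions of ours, not the paper's exact model.

```python
def unreliability(chosen):
    """A replica works only if both its software and hardware work;
    assume independent failures across chosen replicas (simplification)."""
    u = 1.0
    for sw, hw, _cost in chosen:
        u *= 1.0 - sw * hw
    return u

def greedy_cr(candidates, threshold=0.999):
    """Greedily add the replica with the best cost-per-reliability-gain
    ratio until service reliability reaches the threshold."""
    chosen, pool = [], list(candidates)
    while 1.0 - unreliability(chosen) < threshold and pool:
        def gain(c):
            return unreliability(chosen) - unreliability(chosen + [c])
        best = min(pool, key=lambda c: c[2] / max(gain(c), 1e-12))
        pool.remove(best)
        chosen.append(best)
    return chosen

# Candidates as (software reliability, hardware reliability, deployment cost).
cands = [(0.99, 0.95, 3.0), (0.98, 0.99, 5.0), (0.97, 0.90, 2.0), (0.999, 0.99, 9.0)]
picked = greedy_cr(cands, threshold=0.999)
print(picked, "->", 1.0 - unreliability(picked))
```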
{"title":"A cost-provable solution for reliable in-network computing-enabled services deployment","authors":"Xiaorong Liu , Danyang Zheng , Huanlai Xing , Li Feng , Chengzong Peng , Xiaojun Cao","doi":"10.1016/j.comnet.2024.110997","DOIUrl":"10.1016/j.comnet.2024.110997","url":null,"abstract":"<div><div>Recently, the in-network computing (INC) technique has been widely adopted by various applications including the reliability-sensitive ones such as remote surgery, and autonomous vehicle systems. To deploy reliable INC-enabled services, redundant task replicas are hosted by network devices to meet a specified service reliability threshold such as 99.9%, 99.99%, and 99.999%. Most existing works assume that this threshold is solely impacted by the software reliability while neglecting the hardware reliability. This neglectedness likely leads to unexpected service interruptions when the software replicas are co-deployed over one unreliable hardware. This work jointly considers the heterogeneous reliability brought by both software and hardware and identifies a novel phenomenon called “Software-Reliability-Only Experience Degradation” (SRO-ED). To address this, we mathematically establish the INC-enabled services adoption with heterogeneous reliability (ISAHR) problem to optimize service costs and prove its NP-hardness. We introduce an effective Cost-Reliability (CR) measure to indicate the average cost needed to satisfy each reliability unit while considering both software and hardware reliabilities. Next, we propose an innovative algorithm called CR measure-based INC services deployment (CR-D), which is proved to be logarithm-approximate in cost optimization. Extensive simulation results validate the logarithmic approximation of CR-D, and show that it outperforms the benchmarks by an average of 29.42% and 35.77% in cost optimization.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 110997"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}