
Latest Publications in Computer Networks

Model collaboration framework design for space-air-ground integrated networks
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.111013
Shuhang Zhang
Sixth-generation (6G) wireless networks are expected to surpass their predecessors by offering ubiquitous coverage of sensing, communication, and computing through the deployment of space-air-ground integrated networks (SAGINs). In SAGINs, aerial facilities, such as unmanned aerial vehicles (UAVs), collect multi-modal sensory data to support diverse applications including surveillance and battlefield monitoring. However, processing these multi-domain inference tasks requires large artificial intelligence (AI) models, demanding powerful computing capabilities and finely tuned inference models trained on rich datasets, thus posing significant challenges for UAVs. To provide ubiquitous powerful computation, we propose a SAGIN model collaboration framework, where LEO satellites with ubiquitous service coverage and ground servers with powerful computing capabilities work as edge nodes and cloud nodes, respectively, for the processing of sensory data from the UAVs. With limited communication bandwidth and computing capacity, the proposed framework faces the challenge of computing allocation among the edge nodes and the cloud nodes, together with the uplink-downlink resource allocation for the sensory data and model transmissions. To tackle this, we present a joint edge-cloud task allocation, air-space-ground communication resource allocation, and sensory data quantization design to maximize the inference accuracy of the SAGIN model collaboration framework. The mixed-integer programming problem is decomposed into two subproblems and solved based on propositions summarized from experimental studies. Simulations based on results from vision-based classification experiments consistently demonstrate that the SAGIN model collaboration framework achieves higher inference accuracy than both a centralized cloud model framework and a distributed edge model framework across various communication bandwidths and data sizes.
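The paper solves the joint allocation as a decomposed mixed-integer program; as rough intuition only, the sketch below greedily assigns each task to an edge (LEO) or cloud (ground) node under a shared bandwidth budget. The accuracy model, cost constants, and candidate bit-widths are invented for illustration and are not from the paper.

```python
# Hypothetical toy model: accuracy grows with quantization bits and saturates
# at the node's base model accuracy; bandwidth cost grows linearly with bits.
def accuracy(bits, base):
    return base * (1 - 2 ** (-bits / 2))

def allocate(tasks, bandwidth_budget):
    """Greedy edge/cloud assignment: per task, pick the feasible (node, bits)
    pair with the highest estimated accuracy. All constants are assumptions."""
    options = (("edge", 0.85, 1.0),    # (node, base accuracy, cost per bit)
               ("cloud", 0.95, 3.0))   # cloud: better model, costlier uplink
    plan, remaining = [], bandwidth_budget
    for task in tasks:
        best = None
        for node, base, cost in options:
            for bits in (4, 8, 16):
                bw = bits * cost
                if bw <= remaining:
                    cand = (accuracy(bits, base), node, bits, bw)
                    best = max(best, cand) if best else cand
        if best is None:
            plan.append((task, "drop", 0))   # infeasible under the budget
        else:
            remaining -= best[3]
            plan.append((task, best[1], best[2]))
    return plan
```

With a generous budget the sketch favors the more accurate cloud model; as the budget tightens it falls back to cheaper edge options, mirroring the trade-off the paper formalizes.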
Citations: 0
Joint satellite platform and constellation sizing for instantaneous beam-hopping in 5G/6G Non-Terrestrial Networks
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.110942
Samuel Martínez Zamacola , Ramón Martínez Rodríguez-Osorio , Miguel A. Salas-Natera
Existing research on resource allocation in satellite networks incorporating beam-hopping (BH) focuses predominantly on the performance analysis of diverse algorithms and techniques. However, studies evaluating the architectural and economic impacts of implemented BH-based technical solutions have not yet been addressed. Aiming to close this gap, this contribution quantifies the impact of BH and orbital parameters on the satellite platform and constellation size, considering specific traffic demand and service time indicators. The paper proposes a low-complexity, instantaneous demand-based BH resource allocation technique, and presents a comprehensive analysis of LEO and VLEO scenarios using small platforms, building on 5G/6G Non-Terrestrial Network (NTN) specifications. Given a joint set of traffic demand and time-to-serve indicators, and based on a feasible multibeam on-board antenna architecture, the paper compares the RF transmit power requirements in fixed and variable grid LEO schemes, and in VLEO with different minimum elevation angles, to assess the feasibility of these orbits. For a fixed minimum elevation and number of users, the RF transmit power and satellite platform requirements are significantly reduced when transitioning to lower altitudes with narrower satellite coverage areas. The relevant trade-off between the satellite platform and the size of the constellation required for global coverage is presented, to fulfill a set of traffic demand and time-to-serve indicators: approximately 1156 3U satellites are required for the VLEO constellation and 182 12U satellites for the LEO one. Once the platform and constellation sizing trade-off is quantified, the paper estimates the economic costs for each of the deployments, showing a total cost of almost double for the presented VLEO constellation compared to the LEO one. The article aims to provide system engineers and satellite operators with crucial information for satellite system design, dimensioning, and cost assessment.
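As a back-of-the-envelope companion to the sizing result, the sketch below estimates constellation size from altitude and minimum elevation using standard spherical coverage geometry. The `overlap` factor standing in for orbital-mechanics inefficiencies is an assumption; the paper's figures (roughly 1156 VLEO vs. 182 LEO satellites) come from its full demand-driven analysis, not from this formula.

```python
import math

RE = 6371.0  # mean Earth radius, km

def central_angle(alt_km, min_elev_deg):
    """Earth-central half-angle of one satellite's coverage cone, from the
    standard spherical-geometry relation for a minimum elevation angle."""
    e = math.radians(min_elev_deg)
    return math.acos(RE / (RE + alt_km) * math.cos(e)) - e

def constellation_size(alt_km, min_elev_deg, overlap=3.0):
    """Rough satellite count for continuous global coverage. `overlap`
    absorbs streets-of-coverage seams and phasing inefficiency; its value
    here is an illustrative assumption."""
    lam = central_angle(alt_km, min_elev_deg)
    cap_area = 2 * math.pi * RE**2 * (1 - math.cos(lam))   # spherical cap
    return math.ceil(overlap * 4 * math.pi * RE**2 / cap_area)
```

Even this crude model reproduces the qualitative trade-off in the abstract: lowering the altitude shrinks each satellite's footprint, so many more satellites are needed for global coverage.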
Citations: 0
COREC: Concurrent non-blocking single-queue receive driver for low latency networking
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.110982
Marco Faltelli , Giacomo Belocchi , Francesco Quaglia , Giuseppe Bianchi
Existing network stacks tackle performance and scalability by relying on multiple receive queues. However, at the software level, each queue is processed by a single thread, which prevents simultaneous work on the same queue and limits performance in terms of tail latency. To overcome this limitation, we introduce COREC, the first software implementation of a concurrent non-blocking single-queue receive driver. By sharing a single queue among multiple threads, workload distribution is improved, leading to a work-conserving policy for network stacks. On the technical side, instead of relying on traditional critical sections — which would sequentialize the operations by threads — COREC coordinates the threads that concurrently access the same receive queue in a non-blocking manner via atomic machine instructions from the Read-Modify-Write (RMW) class. These instructions allow threads to access and update memory locations atomically, based on specific conditions, such as the matching of a target value selected by the thread. They also make any update globally visible in the memory hierarchy, bypassing interference on memory consistency caused by the CPU store buffers. Extensive evaluation results demonstrate that the additional reordering our approach may occasionally cause is non-critical and has minimal impact on performance, even in the worst-case scenario of a single large TCP flow, where performance impairments amount to at most 2-3 percent. Conversely, substantial latency gains are achieved when handling UDP traffic, real-world traffic mixes, and multiple shorter TCP flows.
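COREC's lock-free claiming of descriptors relies on hardware RMW instructions on the receive ring; as a loose illustration of the work-conserving single-queue idea, the Python sketch below lets several threads claim distinct slots through a shared fetch-and-add-style counter. (`itertools.count` advances atomically under the CPython GIL — an implementation detail used here for illustration, not a substitute for real atomics.)

```python
import itertools
import threading

QUEUE_DEPTH = 64
rx_queue = [f"pkt-{i}" for i in range(QUEUE_DEPTH)]  # stand-in descriptor ring
next_slot = itertools.count()                        # shared claim counter
processed = []                                       # merged results
out_lock = threading.Lock()                          # only guards the merge

def worker():
    local = []
    while True:
        slot = next(next_slot)        # atomic-ish claim of the next descriptor
        if slot >= QUEUE_DEPTH:       # ring drained: stop, never block
            break
        local.append(rx_queue[slot])  # "process" the packet without any lock
    with out_lock:
        processed.extend(local)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every descriptor is claimed exactly once and any idle thread immediately picks up the next slot, which is the work-conserving property the single shared queue buys over per-thread queues.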
Citations: 0
A Review of Federated Learning Applications in Intrusion Detection Systems
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.111023
Aitor Belenguer, Jose A. Pascual, Javier Navaridas
Intrusion detection systems are evolving into sophisticated systems that perform data analysis while searching for anomalies in their environment. The development of deep learning technologies paved the way to build more complex and effective threat detection models. However, training those models may be computationally infeasible on most Internet of Things devices. Current approaches rely on powerful centralized servers that receive data from all participating parties — substantially affecting response times and operational costs due to the huge communication overheads — and violate basic privacy constraints. To mitigate these issues, Federated Learning emerged as a promising approach, in which different agents collaboratively train a shared model without exposing training data to others or requiring a compute-intensive centralized infrastructure. This paper focuses on the application of Federated Learning approaches in the field of Intrusion Detection. Both technologies are described in detail, and current scientific progress is reviewed and taxonomized. Finally, the paper highlights the limitations present in recent works and proposes some future directions for this technology.
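The collaborative training pattern at the heart of the surveyed systems is federated averaging; the minimal sketch below shows one FedAvg round with stub gradients standing in for real local IDS training on private traffic data — only the model weights travel.

```python
def local_update(weights, grad, lr=0.1):
    # One local SGD step; in a real FL-IDS this runs on the agent's own data.
    return [w - lr * g for w, g in zip(weights, grad)]

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: weighted average by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

global_model = [0.0, 0.0]
# (stub local gradient, number of local samples) per client — invented values
clients = [([0.2, -0.1], 10), ([0.4, 0.1], 30)]
updated = [local_update(global_model, g) for g, _ in clients]
global_model = fed_avg(updated, [n for _, n in clients])
```

The server never sees raw samples, only the updated weights, which is the privacy property motivating FL for intrusion detection.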
Citations: 0
FOCCA: Fog–cloud continuum architecture for data imputation and load balancing in Smart Grids
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.111031
Matheus T.M. Barbosa , Eric B.C. Barros , Vinícius F.S. Mota , Dionisio M. Leite Filho , Leobino N. Sampaio , Bruno T. Kuehne , Bruno G. Batista , Damla Turgut , Maycon L.M. Peixoto
A Smart Grid operates as an advanced electricity network that leverages digital communications technology to detect and respond to local changes in usage, generation, and system conditions in near-real-time. This capability enables two-way communication between utilities and customers, integrating renewable energy sources and energy storage systems to enhance energy efficiency. The primary objective of a Smart Grid is to optimize resource usage, reduce energy waste and costs, and improve the reliability and security of the electricity supply. Smart Meters play a critical role by automatically collecting energy data and transmitting it for processing and decision-making, thereby supporting the efficient operation of Smart Grids. However, relying solely on Cloud Computing for data pre-processing in Smart Grids can lead to increased response times due to the latency between cloud data centers and Smart Meters. To mitigate this, we propose FOCCA (Fog–Cloud Continuum Architecture) to enhance data control in Smart Grids. FOCCA employs the Q-balance algorithm, a neural network-based load-balancing approach, to manage computational resources at the edge, significantly reducing service response times. Q-balance accurately estimates the time required for computational resources to process requests and balances the load across available resources, thereby minimizing average response times. Experimental evaluations demonstrated that Q-balance, integrated within FOCCA, outperformed traditional load-balancing algorithms such as Min-Load and Round-robin, reducing average response times by up to 8.1 seconds on fog machines and 16.2 seconds on cloud machines.
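Q-balance learns its response-time estimator with a neural network; the sketch below substitutes a simple backlog-plus-service-time estimate to illustrate the dispatch rule itself: send each request to the resource with the lowest estimated completion time. The resource names and integer-millisecond service times are invented.

```python
class Resource:
    def __init__(self, name, service_ms):
        self.name = name
        self.service_ms = service_ms   # assumed per-request service time (ms)
        self.backlog_ms = 0            # work already queued on this resource

    def estimate(self):
        # Stand-in for Q-balance's learned estimator: queue delay + service.
        return self.backlog_ms + self.service_ms

def dispatch(resources, n_requests):
    """Assign each request to the resource with the lowest estimate."""
    assignments = []
    for _ in range(n_requests):
        target = min(resources, key=Resource.estimate)
        target.backlog_ms += target.service_ms
        assignments.append(target.name)
    return assignments
```

With a fast fog node and a slower cloud node, the rule keeps requests at the edge until the fog backlog makes the cloud the better choice — the behavior the abstract attributes to Q-balance.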
Citations: 0
Machine learning-based co-resident attack detection for 5G clouded environments
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.111032
MeiYan Jin , HongBo Tang , Hang Qiu , Jie Yang
The cloudification of fifth-generation (5G) networks enhances flexibility and scalability while simultaneously introducing new security challenges, especially co-resident threats. This type of attack exploits the virtualization environment, allowing attackers to deploy malicious Virtual Machines (VMs) on the same physical host as critical 5G network element VMs, thereby initiating an attack. Existing techniques for improving isolation and access control are costly, while methods that detect abnormal VM behavior have gained research attention. However, most existing methods rely on static features of VMs and fail to effectively capture the hidden behaviors of attackers, leading to low classification and detection accuracy, as well as a higher likelihood of misclassification. In this paper, we propose a co-resident attack detection method based on behavioral feature vectors and machine learning. The method constructs behavioral feature vectors by integrating attackers’ stealthy behavior patterns and applies K-means clustering for user classification and labeling, followed by manual verification and adjustment. A Random Forest (RF) algorithm optimized with Bayesian techniques is then employed for attack detection. Experimental results on the Microsoft Azure dataset demonstrate that this method outperforms static feature-based approaches, achieving an accuracy of 99.48% and significantly enhancing the detection of potential attackers. Future work could consider integrating this method into a broader 5G security framework to adapt to the ever-evolving threat environment, further enhancing the security and reliability of 5G networks.
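The clustering-based labeling stage can be pictured with a tiny self-contained K-means pass over synthetic behavioral feature vectors; the manual verification step and the Bayesian-optimized Random Forest classifier that follow in the paper are not reproduced here.

```python
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans2(points, iters=20):
    # k=2 K-means with deterministic init (first and last point as centers),
    # splitting users into two behavioral groups for pre-labeling.
    centers = [points[0], points[-1]]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min((0, 1), key=lambda c: dist2(p, centers[c]))
                  for p in points]
        for c in (0, 1):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(xs) / len(members)
                                   for xs in zip(*members))
    return labels

# Invented behavioral features, e.g. (co-location rate, migration-trigger rate):
users = [(0.01, 0.0), (0.02, 0.1), (0.0, 0.05),   # normal-looking behavior
         (0.9, 0.8), (0.95, 0.85), (0.88, 0.9)]   # co-residency probing
labels = kmeans2(users)
```

The resulting cluster labels would then be manually checked and used to train the supervised detector, as the abstract describes.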
Citations: 0
A novel hybrid approach combining GCN and GAT for effective anomaly detection from firewall logs in campus networks
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2025.111082
Ali Yılmaz , Resul Das
Anomaly detection is essential in domains like network monitoring, fraud detection, and cybersecurity, where it is vital to identify unusual patterns early on to avert possible harm. The complexity and scale of contemporary graph-structured networks are frequently too much for conventional anomaly detection techniques to handle. However, graph neural networks (GNNs), including graph convolutional networks (GCN), graph attention networks (GAT), and graph sample and aggregate (GraphSAGE), have become successful alternatives. This study obtains anomaly detection findings by independently applying the GCN, GAT, and GraphSAGE models to the same dataset. In addition to the anomaly detection derived from the separate models, we provide a novel hybrid anomaly detection model that combines the advantages of GCN and GAT. By utilizing GCN's capacity to collect global structural data and GAT's attention mechanism to enhance local node interactions, we aim to improve the anomaly detection accuracy of the hybrid model. Particularly in dynamic and expansive graph contexts, this combination enhances detection sensitivity and processing efficiency. According to our experimental findings, the hybrid model performs better than the separate GCN, GAT, and GraphSAGE models in terms of recall (0.9904), accuracy (0.9904), precision (0.9843), and F1-score (0.9872). The high success rate achieved in detecting various cyberattacks within the utilized dataset demonstrates that this method provides an especially effective solution in fields such as cybersecurity and financial fraud detection, where highly accurate anomaly detection systems are required for analyzing dynamic and large-scale graph data.
The suggested method is a reliable option for real-time anomaly identification in intricate network environments, since it demonstrates notable gains in identifying both local and global anomalies.
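To make the two building blocks concrete, the dependency-free sketch below runs one GCN-style averaging step and one GAT-style attention step on a toy graph and mixes them. The 4-node graph, scalar features, and multiplicative attention score are illustrative stand-ins for the paper's trained layers.

```python
import math

ADJ = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}   # toy graph of log entities
H = {0: 1.0, 1: 2.0, 2: 4.0, 3: 0.0}           # scalar node features

def gcn_step(h, adj):
    # Self-loop-augmented mean over each node's neighborhood (the
    # structure-capturing half of the hybrid).
    return {v: (h[v] + sum(h[u] for u in adj[v])) / (1 + len(adj[v]))
            for v in adj}

def gat_step(h, adj):
    # Softmax attention over {v} ∪ N(v); the score is a simple product here.
    out = {}
    for v in adj:
        nbrs = [v] + adj[v]
        w = [math.exp(h[v] * h[u]) for u in nbrs]
        s = sum(w)
        out[v] = sum(wi / s * h[u] for wi, u in zip(w, nbrs))
    return out

def hybrid_step(h, adj, alpha=0.5):
    # Convex combination of both views, echoing the GCN+GAT hybrid design.
    g, a = gcn_step(h, adj), gat_step(h, adj)
    return {v: alpha * g[v] + (1 - alpha) * a[v] for v in adj}
```

A real model stacks several trained layers of each kind and feeds the final embeddings to an anomaly scorer; this sketch only shows how the two aggregation styles differ and combine.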
Citations: 0
BeamSense: Rethinking Wireless Sensing with MU-MIMO Wi-Fi Beamforming Feedback
IF 4.4 Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-02-01 DOI: 10.1016/j.comnet.2024.111020
Khandaker Foysal Haque, Milin Zhang, Francesca Meneghello, Francesco Restuccia
In this paper, we propose BeamSense, a novel approach to implementing standard-compliant Wi-Fi sensing applications. Existing work relies on manual extraction of the uncompressed channel state information (CSI) from Wi-Fi chips, which is not supported by the 802.11 standards and hence requires specialized equipment. In contrast, BeamSense leverages the standard-compliant compressed beamforming feedback information (BFI), specifically the beamforming feedback angles (BFAs), to characterize the propagation environment. Unlike the uncompressed CSI, the compressed BFAs (i) can be recorded without any firmware modification and (ii) simultaneously capture the channels between the access point and all the stations, thus providing much better sensitivity. BeamSense features a novel cross-domain few-shot learning (FSL) algorithm for human activity recognition that handles unseen environments and subjects with only a few additional data samples. We evaluate BeamSense through an extensive data collection campaign with three subjects performing twenty different activities in three different environments. We show that our BFAs-based approach achieves about 10% higher accuracy than CSI-based prior work, while our FSL strategy improves accuracy by up to 30% compared with state-of-the-art cross-domain algorithms. Additionally, to demonstrate its versatility, we apply BeamSense to another smart home application, gesture recognition, achieving over 98% accuracy across various orientations and subjects. We share the collected datasets and BeamSense implementation code for reproducibility: https://github.com/kfoysalhaque/BeamSense.
{"title":"BeamSense: Rethinking Wireless Sensing with MU-MIMO Wi-Fi Beamforming Feedback","authors":"Khandaker Foysal Haque ,&nbsp;Milin Zhang ,&nbsp;Francesca Meneghello ,&nbsp;Francesco Restuccia","doi":"10.1016/j.comnet.2024.111020","DOIUrl":"10.1016/j.comnet.2024.111020","url":null,"abstract":"<div><div>In this paper, we propose <span>BeamSense</span>, a completely novel approach to implement standard-compliant Wi-Fi sensing applications. Existing work leverages the manual extraction of the uncompressed channel state information (CSI) from Wi-Fi chips, which is not supported by the 802.11 standards and hence requires the usage of specialized equipment. On the contrary, <span>BeamSense</span> leverages the standard-compliant compressed beamforming feedback information (BFI) (beamforming feedback angles (BFAs)) to characterize the propagation environment. Conversely from the uncompressed CSI, the compressed BFAs (i) can be recorded without any firmware modification, and (ii) simultaneously captures the channels between the access point and all the stations, thus providing much better sensitivity. <span>BeamSense</span> features a novel cross-domain few-shot learning (FSL) algorithm for human activity recognition to handle unseen environments and subjects with a few additional data samples. We evaluate <span>BeamSense</span> through an extensive data collection campaign with three subjects performing twenty different activities in three different environments. We show that our BFAs-based approach achieves about 10% more accuracy when compared to CSI-based prior work, while our FSL strategy improves accuracy by up to 30% when compared with state-of-the-art cross-domain algorithms. Additionally, to demonstrate its versatility, we apply <span>BeamSense</span> to another smart home application – gesture recognition – achieving over 98% accuracy across various orientations and subjects. 
We share the collected datasets and <span>BeamSense</span> implementation code for reproducibility – <span><span>https://github.com/kfoysalhaque/BeamSense</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111020"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143176537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
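The few-shot adaptation idea, recognizing activities of an unseen subject or environment from a handful of labeled samples, can be illustrated with a nearest-prototype classifier in the style of prototypical networks. This is a generic sketch under assumed toy features; it is not BeamSense's actual cross-domain FSL algorithm.

```python
# Prototypical-network-style few-shot classification sketch: average the few
# support samples per class into a prototype, then label a query by its
# nearest prototype. Function names and features are illustrative only.

def prototypes(support):
    """support: {label: [feature vectors]} -> {label: mean vector}."""
    protos = {}
    for label, vecs in support.items():
        dim = len(vecs[0])
        protos[label] = [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
    return protos

def classify(query, protos):
    """Assign the label of the nearest prototype (squared Euclidean)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda label: sqdist(query, protos[label]))

# Two activities, two labeled samples each (hypothetical 2-D features):
support = {"walk": [[1.0, 0.0], [0.9, 0.1]],
           "sit":  [[0.0, 1.0], [0.1, 0.9]]}
label = classify([0.8, 0.2], prototypes(support))  # -> "walk"
```

The appeal for sensing is that only the few support samples, not the whole model, need to come from the new environment.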
Citations: 0
Defining and measuring the resilience of network services
IF 4.4 Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-02-01 DOI: 10.1016/j.comnet.2025.111036
Kewei Wang, Changzhen Hu, Chun Shan
Network services are becoming increasingly vital as they now support almost every aspect of society and human life. Given the high-availability requirements of network service provisioning and the inevitability of security events, the ability of network services to adapt to and recover from adverse events while consistently maintaining an acceptable level of operation, known as resilience, is of utmost importance. However, in information systems there is no consensus definition of resilience, and its measurement is still in its infancy. To fill this gap, drawing on the concept of resilience in materials science, we define the resilience of network services in terms of the energy released in recovery. Then, by applying neural networks to service status metrics, we construct the state space of network services, which is mathematically a product manifold of Riemannian manifolds. Finally, based on principles of differential geometry, the resilience of network services can be quantified from the behavioral action of resilience mechanisms and the displacement it produces in the state space. Experimental results show that the proposed method precisely characterizes the resilience of network services and outperforms existing solutions.
{"title":"Defining and measuring the resilience of network services","authors":"Kewei Wang ,&nbsp;Changzhen Hu ,&nbsp;Chun Shan","doi":"10.1016/j.comnet.2025.111036","DOIUrl":"10.1016/j.comnet.2025.111036","url":null,"abstract":"<div><div>Network services are becoming increasingly vital as they now support almost every aspect of society and human life. Due to the high-availability requirements of network service provisioning and the inevitability of the occurrences of security events, the ability of network services to adapt to and/or recover from adverse events and consistently maintain an acceptable level of operations, which is known as resilience, is of utmost importance. However, in information systems, there lacks consensus definition of resilience, and the measurement of which is also in its infancy. To fill this gap, by referring to the concept of resilience in the field of material science, we propose a definition of resilience of network services in terms of the energy released in recovery. Then, by applying neural networks to service status metrics, we construct the state space of network services, which is mathematically a product manifold of a couple of Riemannian manifolds. Finally, based on differential geometry principles, the resilience of network services can be quantified with the behavioral action of resilience mechanisms and the displacement it produces in the state space. 
Experiment results show that the proposed method is precise in characterizing the resilience of network services and outperforms existing solutions.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111036"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143177526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
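As a point of contrast with the paper's geometric formulation, a common simpler resilience measure scores the fraction of baseline performance a service preserves over an observation window (the classic "resilience triangle" view of a degrade-and-recover episode). The sketch below uses that textbook metric; it is an assumption for illustration, not the authors' Riemannian state-space method.

```python
# Simplified resilience score from a sampled service-performance curve:
# the fraction of baseline performance preserved over the window.
# A value of 1.0 means no degradation; lower values mean deeper or
# longer outages. This is a textbook measure, not the paper's method.

def resilience(perf, baseline):
    """perf: performance samples over time; returns a value in [0, 1]."""
    if not perf or baseline <= 0:
        raise ValueError("need samples and a positive baseline")
    clipped = [min(max(p, 0.0), baseline) for p in perf]
    return sum(clipped) / (len(clipped) * baseline)

# A service dips to 40% of baseline and recovers:
score = resilience([100, 60, 40, 70, 100], baseline=100)  # -> 0.74
```

The same degrade-and-recover trace would, in the paper's framework, instead be scored by the displacement the recovery produces in a learned state space.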
Citations: 0
Expiration filter: Mining recent heavy flows in high-speed networks
IF 4.4 Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-02-01 DOI: 10.1016/j.comnet.2024.111010
Yifan Han, He Huang, Yu-E Sun, Jia Liu, Shigang Chen
Mining recent heavy flows, which indicate the latest trends in high-speed networks, is vital to network management and to practical applications such as anomaly detection and congestion resolution. However, existing traffic measurement solutions fall short of capturing and analyzing traffic together with temporal information, leaving operators unaware of the real-time status of network streams, such as those undergoing congestion or attacks. This paper proposes the Expiration Filter (EF), which focuses on mining recent heavy flows to better reveal current behavioral patterns in the data. Given the skewness of real-world data streams, EF first filters out small flows to improve accuracy and tracks only recently emerged large flows. EF also incorporates a dynamic self-cleaning mechanism that evicts outdated records and frees memory for new flows, fitting within constrained on-chip space. Additionally, its multi-stage design enables hardware implementation in emerging programmable switches for line-rate processing; we provide detailed insights into implementing EF in programmable hardware under strict programming and resource constraints. Extensive experiments on real-world datasets demonstrate that EF outperforms the benchmarks in flow size estimation, identifying top-k recent flows, and detecting heavy hitters. All source code is available on GitHub (https://github.com/hanyifansuda/Expiration-Filter).
{"title":"Expiration filter: Mining recent heavy flows in high-speed networks","authors":"Yifan Han ,&nbsp;He Huang ,&nbsp;Yu-E Sun ,&nbsp;Jia Liu ,&nbsp;Shigang Chen","doi":"10.1016/j.comnet.2024.111010","DOIUrl":"10.1016/j.comnet.2024.111010","url":null,"abstract":"<div><div>Mining recent heavy flows, which indicate the latest trends in high-speed networks, is vital to network management and numerous practical applications such as anomaly detection and congestion resolution. However, existing network traffic measurement solutions fall short of capturing and analyzing network traffic combined with temporal information, leaving us unaware of the real-time status of network streams, such as those undergoing congestion or attacks. This paper proposes the Expiration Filter (EF), which focuses on mining recent heavy flows, enhancing our understanding of the current behavioral patterns within the data. Given the skewness in real-world data streams, EF first filters out small flows to improve accuracy and tracks only flows with large volumes that have recently emerged. The EF also incorporates a dynamically self-cleaning mechanism to evict outdated records and free up memory space for new flows, thus fitting into the constrained on-chip space. Additionally, the adopted multi-stage design ensures the hardware implementation of EF in emerging programmable switches for line-rate processing. Hence, we provide detailed insights into implementing EF in programmable hardware under strict programming and resource constraints. Extensive experiments on real-world datasets demonstrate that EF outperforms the benchmarks in terms of flow size estimation, identifying top-<em>k</em> recent flows and detecting heavy hitters. 
All source codes are available at Github (<span><span>https://github.com/hanyifansuda/Expiration-Filter</span><svg><path></path></svg></span>).</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111010"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143178143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
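The filter-then-track-with-expiry behavior described above can be sketched with an ordinary dictionary standing in for the multi-stage on-chip design. The class name and the threshold/window parameters below are illustrative assumptions, not the paper's EF data structure.

```python
# Dependency-free sketch of tracking recent heavy flows with expiration.
# Small flows are filtered by a packet-count threshold, and records that
# have not been touched for `window` seconds are treated as stale, which
# frees space for newly emerging flows.

class RecentHeavyFlows:
    def __init__(self, threshold, window):
        self.threshold = threshold    # min packets to count as "heavy"
        self.window = window          # seconds before a record expires
        self.table = {}               # flow_id -> [count, last_seen]

    def update(self, flow_id, now):
        entry = self.table.get(flow_id)
        if entry and now - entry[1] > self.window:
            entry = None              # stale record: restart the count
        if entry is None:
            self.table[flow_id] = [1, now]
        else:
            entry[0] += 1
            entry[1] = now

    def heavy(self, now):
        """Flows over the threshold whose records are still fresh."""
        return {f for f, (c, t) in self.table.items()
                if c >= self.threshold and now - t <= self.window}
```

For example, with `threshold=3` and `window=10`, a flow seen three times in quick succession is reported as heavy, but stops being reported once it has been idle longer than the window; the hardware version achieves the same effect with timestamped multi-stage tables instead of a dictionary.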
Citations: 0