
Latest Publications in Computer Networks

Joint power allocation for multiple nodes in serial relaying networks over underwater wireless optical channels
IF 4.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.110941
Fangyuan Xing, Yutian Tan, Zhenduo Wang, Yaxing Yue, Hongxi Yin
Underwater wireless optical communications (UWOC) suffer from severe path loss and limited communication range due to multifarious oceanic channel impairments. Serial relaying can extend the end-to-end transmission distance, but joint power allocation across multiple nodes is an intractable problem. To this end, this paper proposes a fitting-based joint power allocation algorithm for multiple nodes in serial relaying UWOC, consisting of two phases: fitting-based global pre-allocation and bisection-search-based partial optimization. Specifically, serial relaying for UWOC is modeled, taking into account the combined effects of absorption, scattering, and oceanic turbulence. The joint power allocation problem is formulated to minimize the outage probability under a total transmitted power constraint. To solve this problem, a fitting-based scheme pre-allocates power to all nodes, implementing the global pre-allocation. Subsequently, to further reduce the outage probability, we design a partial optimization scheme in which a bisection search method optimizes individual two-hop links. Simulation results demonstrate that the proposed scheme reduces the outage probability more effectively than existing schemes for serial relaying UWOC.
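The bisection step of the partial optimization can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the exponential outage model and the equal-outage balancing criterion are stand-ins for the paper's absorption/scattering/turbulence formulation.

```python
import math

def hop_outage(power, atten):
    """Toy outage model: outage falls exponentially with transmit power.
    Placeholder for the paper's oceanic channel model."""
    return math.exp(-power / atten)

def bisect_power_split(total_power, atten1, atten2, tol=1e-9):
    """Find the fraction x of the two-hop power budget given to hop 1 that
    equalizes the per-hop outage probabilities (a balancing heuristic)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        x = (lo + hi) / 2
        diff = hop_outage(x * total_power, atten1) - hop_outage((1 - x) * total_power, atten2)
        if diff > 0:   # hop 1 still worse, give it more power
            lo = x
        else:
            hi = x
    return (lo + hi) / 2

# hop 1 has the worse attenuation, so it should receive the larger share
x = bisect_power_split(10.0, atten1=2.0, atten2=1.0)
```

With these toy parameters the balance point is x = 2/3, i.e. the more attenuated hop gets twice the power of the other.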
Citations: 0
EAOS: Exposing attacks in smart contracts through analyzing opcode sequences with operands
IF 4.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.110959
Peiqiang Li, Guojun Wang, Xiaofei Xing, Jinyao Zhu, Wanyi Gu, Yuheng Zhang
Today, Ethereum is the world’s largest open-source blockchain platform. However, because smart contracts hold large amounts of money and cannot be changed once on the chain, they have become targets for attackers, and users can suffer significant financial losses. To counter these attacks, various methods have been proposed to scan smart contracts for vulnerabilities before they are deployed on the blockchain, but few determine whether contracts are vulnerable by examining transactions. In this paper, we propose a framework called EAOS to detect attacks by analyzing the opcode sequences executed by the EVM. We first obtain the opcode sequences with operands of smart contracts during EVM execution by replaying Ethereum’s historical transactions, and then extract feature opcodes from these sequences to generate feature opcode sequences. Next, we provide APIs that make it easy for users to access the data related to the opcode sequences; based on these APIs, users can develop various attack-detection algorithms. Finally, to verify the effectiveness of EAOS, five algorithms are developed to analyze the replayed transaction opcode sequences. Extensive experimental results demonstrate the effectiveness and efficiency of EAOS and our detection algorithms.
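The feature-opcode extraction step can be illustrated with a small sketch. The opcode names below are real EVM opcodes, but the chosen feature set, the trace format, and the function names are assumptions for illustration, not EAOS's actual API.

```python
# Hypothetical feature set: opcodes commonly relevant to attack detection
# (external calls, storage writes/reads, control flow, self-destruction).
FEATURE_OPCODES = {"CALL", "DELEGATECALL", "SSTORE", "SLOAD", "JUMPI", "SELFDESTRUCT"}

def extract_feature_sequence(trace):
    """trace: list of (opcode, operands) pairs from a replayed transaction.
    Returns the subsequence restricted to feature opcodes, operands kept."""
    return [(op, args) for op, args in trace if op in FEATURE_OPCODES]

# Toy replayed trace of one transaction
trace = [("PUSH1", ["0x60"]),
         ("SLOAD", ["0x0"]),
         ("CALL", ["0xdead", "0x1"]),
         ("ADD", []),
         ("SSTORE", ["0x0", "0x1"])]
features = extract_feature_sequence(trace)
```

A detection algorithm would then pattern-match over `features` (e.g. a storage write after an external call as a reentrancy signal).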
Citations: 0
STAR-RIS-aided NOMA communication for mobile edge computing using hybrid deep reinforcement learning
IF 4.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.110960
Boxuan Song, Fei Wang, Yujie Su
Reconfigurable intelligent surfaces (RIS) are expected to significantly reduce the task processing delay and energy consumption of mobile users (MUs) in mobile edge computing (MEC) by intelligently adjusting the phase-shifts and amplitudes of their reflecting elements. Nevertheless, both passive and active RISs can only reflect received signals, which means that transmitters and receivers must be located on the same side of the RIS. This may be unrealistic given the movement of MUs. Simultaneously transmitting and reflecting (STAR) RIS, which can transmit and reflect incident signals at the same time to achieve full-area coverage, has been recognized as a revolutionary technique for solving this problem. For STAR-RIS-aided non-orthogonal multiple access (NOMA) communication MEC, we first formulate an optimization problem to minimize the sum of the weighted delay and energy consumption of all MUs, which may move randomly at low speeds. Then, under the practical coupled phase-shift model of STAR-RIS, we propose a hybrid deep reinforcement learning (DRL) scheme that determines the amplitudes and phase-shifts of the STAR-RIS, the task offloading decisions of the MUs, and the computation resource allocations of the MEC servers using the deep deterministic policy gradient (DDPG) and Dueling deep Q-network (DQN) algorithms. Finally, we validate and evaluate the performance of our proposed scheme through extensive simulations, which show that it outperforms existing baseline schemes and that its performance is indeed improved by the use of STAR-RIS.
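The hybrid action split (continuous STAR-RIS coefficients from DDPG, discrete offloading decisions from Dueling DQN) can be sketched structurally. The two stubs below stand in for trained networks; all names, the state format, and the deterministic hashing are illustrative assumptions, not the paper's design.

```python
import random

def ddpg_actor(state, n_elements):
    """Stand-in for a trained DDPG actor: one continuous (amplitude, phase)
    pair per STAR-RIS element."""
    random.seed(hash(tuple(state)) & 0xFFFF)  # deterministic stub, not learning
    return [(random.random(), random.uniform(0.0, 6.283)) for _ in range(n_elements)]

def dueling_dqn(state, n_users, n_servers):
    """Stand-in for a trained Dueling DQN: one discrete choice per MU
    (0 = compute locally, 1..n_servers = offload to that MEC server)."""
    return [hash((tuple(state), u)) % (n_servers + 1) for u in range(n_users)]

def hybrid_action(state, n_elements=4, n_users=3, n_servers=2):
    """The hybrid agent queries both heads and concatenates their outputs."""
    return {"ris": ddpg_actor(state, n_elements),
            "offload": dueling_dqn(state, n_users, n_servers)}

act = hybrid_action(state=(0.2, 0.7))
```

The point of the split is that DDPG handles the continuous sub-space (RIS coefficients) while the DQN handles the combinatorial one (offloading), so neither network has to discretize the other's domain.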
Citations: 0
A survey on VPN: Taxonomy, roles, trends and future directions
IF 4.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.110964
Jianhua Li, Bohao Feng, Hui Zheng
The Virtual Private Network (VPN) originated as a cost-effective alternative to dedicated lines, primarily utilized by Internet Service Providers (ISPs) and large organizations. Evolving beyond mere connectivity, VPNs are now expected to provide a myriad of services, ranging from privacy protection to bypassing website blocking. Recent trends, including cloud migration, regulatory measures, and escalating cyber threats, have spurred increased adoption of VPNs for remote access. However, criticisms of their shortcomings have led some to advocate their removal from the Internet. In this paper, we elucidate the VPN concept, taxonomy, roles, associated concerns, and future trends to demystify VPNs. Unlike existing surveys, our analysis reveals VPNs to be indispensable, providing affordable, secure, privacy-enhanced, and flexible connections for individuals and businesses. Still, significant challenges must be addressed to unlock their full potential. Finally, we propose insightful initiatives for VPN design and highlight ongoing challenges and future research directions.
Citations: 0
Model collaboration framework design for space-air-ground integrated networks
IF 4.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.111013
Shuhang Zhang
The sixth generation (6G) of wireless networks is expected to surpass its predecessors by offering ubiquitous coverage of sensing, communication, and computing through the deployment of space-air-ground integrated networks (SAGINs). In SAGINs, aerial facilities such as unmanned aerial vehicles (UAVs) collect multi-modal sensory data to support diverse applications, including surveillance and battlefield monitoring. However, processing these multi-domain inference tasks requires large artificial intelligence (AI) models, demanding powerful computing capabilities and finely tuned inference models trained on rich datasets, which poses significant challenges for UAVs. To provide ubiquitous powerful computation, we propose a SAGIN model collaboration framework in which LEO satellites, with ubiquitous service coverage, and ground servers, with powerful computing capabilities, serve as edge nodes and cloud nodes, respectively, for processing sensory data from the UAVs. With limited communication bandwidth and computing capacity, the proposed framework faces the challenge of allocating computation between the edge nodes and the cloud nodes, together with uplink-downlink resource allocation for sensory data and model transmissions. To tackle this, we present a joint edge-cloud task allocation, air-space-ground communication resource allocation, and sensory data quantization design to maximize the inference accuracy of the SAGIN model collaboration framework. The mixed integer programming problem is decomposed into two subproblems and solved based on propositions summarized from experimental studies. Simulations based on vision-based classification experiments consistently demonstrate that the inference accuracy of the SAGIN model collaboration framework outperforms both a centralized cloud model framework and a distributed edge model framework across various communication bandwidths and data sizes.
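The decompose-and-alternate pattern behind the two subproblems can be illustrated with a toy sketch: fix one variable, optimize the other over a grid, and alternate until the pair stabilizes. The objective, variables, and constants below are invented stand-ins, not the paper's formulation.

```python
def accuracy(task_to_cloud, bw_to_data, cloud_gain=0.5, edge_gain=0.1, bw_cost=0.2):
    """Toy objective: cloud inference is more accurate but consumes uplink
    bandwidth; task_to_cloud and bw_to_data are fractions in (0, 1)."""
    latency_penalty = bw_cost * task_to_cloud / max(bw_to_data, 1e-6)
    return task_to_cloud * cloud_gain + (1 - task_to_cloud) * edge_gain - latency_penalty

def grid_best(f, fixed, over):
    """Optimize one coordinate over a grid while the other is held fixed."""
    grid = [i / 100 for i in range(1, 100)]
    return max(grid, key=lambda v: f(v, fixed) if over == "task" else f(fixed, v))

def alternate(iters=20):
    task, bw = 0.5, 0.5
    for _ in range(iters):
        task = grid_best(accuracy, bw, over="task")  # subproblem 1: task split
        bw = grid_best(accuracy, task, over="bw")    # subproblem 2: bandwidth
    return task, bw, accuracy(task, bw)

task, bw, acc = alternate()
```

With these constants the loop settles on pushing both task share and data bandwidth toward the cloud, since the toy cloud gain outweighs the bandwidth penalty once bandwidth is maximized.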
Citations: 0
Joint satellite platform and constellation sizing for instantaneous beam-hopping in 5G/6G Non-Terrestrial Networks
IF 4.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.110942
Samuel Martínez Zamacola, Ramón Martínez Rodríguez-Osorio, Miguel A. Salas-Natera
Existing research on resource allocation in satellite networks incorporating beam-hopping (BH) focuses predominantly on the performance analysis of diverse algorithms and techniques. However, studies evaluating the architectural and economic impacts of implemented BH-based technical solutions have not yet been addressed. Aiming to close this gap, this contribution quantifies the impact of BH and orbital parameters on satellite platform and constellation size, considering specific traffic demand and service time indicators. The paper proposes a low-complexity, instantaneous demand-based BH resource allocation technique and presents a comprehensive analysis of LEO and VLEO scenarios using small platforms, building on 5G/6G Non-Terrestrial Network (NTN) specifications. Given a joint set of traffic demand and time-to-serve indicators, and based on a feasible multibeam on-board antenna architecture, the paper compares the RF transmit power requirements in fixed- and variable-grid LEO schemes and in VLEO with different minimum elevation angles, to assess the feasibility of these orbits. For a fixed minimum elevation and number of users, the RF transmit power and satellite platform requirements are significantly reduced when transitioning to lower altitudes with narrower satellite coverage areas. The relevant trade-off between the satellite platform and the constellation size required for global coverage is presented for a given set of traffic demand and time-to-serve indicators: approximately 1156 3U satellites are required for the VLEO constellation and 182 12U satellites for LEO. Once the platform and constellation sizing trade-off is quantified, the paper estimates the economic costs of each deployment, showing a total cost almost double for the presented VLEO constellation compared with the LEO one. The article aims to provide system engineers and satellite operators with crucial information for satellite system design, dimensioning, and cost assessment.
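The altitude/constellation-size trade-off follows from standard spherical-cap coverage geometry. The sketch below computes only an idealized single-coverage lower bound under assumed altitudes and a 10° minimum elevation; real designs, including the paper's (1156 vs. 182 satellites, on different platform classes), account for phasing and overlap and therefore need far more satellites than this bound.

```python
import math

R_E = 6371.0  # mean Earth radius, km

def coverage_half_angle(alt_km, min_elev_deg):
    """Earth-central half-angle of one satellite's coverage cap:
    lambda = acos((R_E / (R_E + h)) * cos(eps)) - eps."""
    eps = math.radians(min_elev_deg)
    return math.acos((R_E / (R_E + alt_km)) * math.cos(eps)) - eps

def min_satellites(alt_km, min_elev_deg):
    """Ideal lower bound: sphere area divided by one cap's area
    (no overlap, no orbital phasing)."""
    lam = coverage_half_angle(alt_km, min_elev_deg)
    cap_fraction = (1.0 - math.cos(lam)) / 2.0  # cap area / sphere area
    return math.ceil(1.0 / cap_fraction)

leo = min_satellites(600.0, 10.0)   # assumed LEO-class altitude
vleo = min_satellites(300.0, 10.0)  # assumed VLEO-class altitude
```

Even this crude bound reproduces the qualitative result in the abstract: halving the altitude shrinks each coverage cap enough to multiply the required satellite count several-fold.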
Citations: 0
COREC: Concurrent non-blocking single-queue receive driver for low latency networking
IF 4.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.110982
Marco Faltelli, Giacomo Belocchi, Francesco Quaglia, Giuseppe Bianchi
Existing network stacks tackle performance and scalability by relying on multiple receive queues. However, at the software level, each queue is processed by a single thread, which prevents simultaneous work on the same queue and limits performance in terms of tail latency. To overcome this limitation, we introduce COREC, the first software implementation of a concurrent non-blocking single-queue receive driver. By sharing a single queue among multiple threads, workload distribution is improved, leading to a work-conserving policy for network stacks. On the technical side, instead of relying on traditional critical sections — which would serialize the threads’ operations — COREC coordinates the threads that concurrently access the same receive queue in a non-blocking manner via atomic machine instructions from the Read-Modify-Write (RMW) class. These instructions allow threads to access and update memory locations atomically, based on specific conditions such as the matching of a target value selected by the thread. They also make any update globally visible in the memory hierarchy, bypassing interference on memory consistency caused by the CPU store buffers. Extensive evaluation results demonstrate that the occasional additional reordering our approach may cause is non-critical and has minimal impact on performance, even in the worst-case scenario of a single large TCP flow, where the performance impairment is at most 2-3 percent. Conversely, substantial latency gains are achieved when handling UDP traffic, real-world traffic mixes, and multiple shorter TCP flows.
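The compare-and-swap claiming protocol that lets several threads share one receive queue can be sketched as follows. This is an illustration of the idea only: CPython has no user-level CAS, so a lock emulates the hardware RMW instruction (COREC itself uses real atomic instructions in native code); the class and method names are invented for the sketch.

```python
import threading

class AtomicSlot:
    """Emulated compare-and-swap cell; the lock stands in for a hardware
    RMW instruction. The claiming *protocol* around it is the point."""
    def __init__(self, value=None):
        self._v = value
        self._lock = threading.Lock()

    def compare_exchange(self, expected, new):
        with self._lock:
            if self._v == expected:
                self._v = new
                return True
            return False

class SharedRxQueue:
    """Multiple threads claim descriptors from ONE receive ring: each slot
    is claimed exactly once via CAS, and a losing thread simply moves on
    instead of blocking."""
    FREE, CLAIMED = "free", "claimed"

    def __init__(self, n_slots):
        self.slots = [AtomicSlot(self.FREE) for _ in range(n_slots)]

    def try_claim(self, idx):
        return self.slots[idx].compare_exchange(self.FREE, self.CLAIMED)

q = SharedRxQueue(64)
claims = []

def worker():
    claims.append([i for i in range(64) if q.try_claim(i)])

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

However the four threads interleave, every descriptor is processed exactly once and no thread ever waits for another to finish a slot, which is the work-conserving property the abstract describes.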
Citations: 0
A Review of Federated Learning Applications in Intrusion Detection Systems
IF 4.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.111023
Aitor Belenguer, Jose A. Pascual, Javier Navaridas
Intrusion detection systems are evolving into sophisticated systems that perform data analysis while searching for anomalies in their environment. The development of deep learning technologies has paved the way for more complex and effective threat detection models. However, training those models may be computationally infeasible on most Internet of Things devices. Current approaches rely on powerful centralized servers that receive data from all parties — substantially affecting response times and operational costs due to huge communication overheads, and violating basic privacy constraints. To mitigate these issues, Federated Learning has emerged as a promising approach in which different agents collaboratively train a shared model without exposing training data to others or requiring a compute-intensive centralized infrastructure. This paper focuses on the application of Federated Learning approaches in the field of intrusion detection. Both technologies are described in detail, and current scientific progress is reviewed and taxonomized. Finally, the paper highlights the limitations present in recent works and proposes future directions for this technology.
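The collaborative training loop the review covers can be illustrated with a minimal FedAvg-style sketch (FedAvg is the canonical federated learning algorithm; the least-squares model and the toy client data below are assumptions, not taken from the paper). Clients train locally and share only weight vectors, never their data.

```python
def local_update(weights, data, lr=0.1):
    """One step of least-squares gradient descent on a client's private
    (x, y) pairs; this is the on-device training no one else sees."""
    grad = [0.0] * len(weights)
    for x, y in data:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for j, xi in enumerate(x):
            grad[j] += 2 * err * xi / len(data)
    return [w - lr * g for w, g in zip(weights, grad)]

def fed_avg(global_w, client_datasets, rounds=50):
    """Server loop: broadcast weights, collect local updates, average them."""
    for _ in range(rounds):
        locals_ = [local_update(list(global_w), d) for d in client_datasets]
        global_w = [sum(ws) / len(ws) for ws in zip(*locals_)]
    return global_w

# two "IDS sensors" whose private traffic features both follow y = 2 * x
clients = [[([1.0], 2.0), ([2.0], 4.0)],
           [([3.0], 6.0)]]
w = fed_avg([0.0], clients)
```

After a few dozen rounds the averaged model recovers the shared rule (weight ≈ 2.0) even though neither sensor ever uploaded a sample.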
Citations: 0
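The collaborative training the review surveys can be illustrated with FedAvg-style aggregation: each client takes a local training step on its private data, and a server averages the resulting models weighted by local dataset size, so only model weights ever cross the network. A minimal sketch of that idea, where the client names, gradients, and dataset sizes are invented for illustration and a single gradient step stands in for a full local training round:

```python
def local_update(weights, gradient, lr=0.1):
    """One simplified local training step on a client's private data."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def fed_avg(client_weights, client_sizes):
    """Aggregate client models, weighting each by its local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two IoT clients start from the same global model but see different gradients.
global_model = [0.5, -0.2]
clients = {
    "device_a": ([0.4, 0.1], 100),    # (local gradient, local dataset size)
    "device_b": ([-0.2, 0.3], 300),
}
updated = [local_update(global_model, grad) for grad, _ in clients.values()]
sizes = [size for _, size in clients.values()]
new_global = fed_avg(updated, sizes)
print(new_global)
```

The raw training data never leaves the clients; the server only ever sees the updated weight vectors, which is what addresses the privacy and communication-overhead concerns the abstract raises.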
FOCCA: Fog–cloud continuum architecture for data imputation and load balancing in Smart Grids
IF 4.4 2区 计算机科学 Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-02-01 DOI: 10.1016/j.comnet.2024.111031
Matheus T.M. Barbosa , Eric B.C. Barros , Vinícius F.S. Mota , Dionisio M. Leite Filho , Leobino N. Sampaio , Bruno T. Kuehne , Bruno G. Batista , Damla Turgut , Maycon L.M. Peixoto
A Smart Grid operates as an advanced electricity network that leverages digital communications technology to detect and respond to local changes in usage, generation, and system conditions in near-real-time. This capability enables two-way communication between utilities and customers, integrating renewable energy sources and energy storage systems to enhance energy efficiency. The primary objective of a Smart Grid is to optimize resource usage, reduce energy waste and costs, and improve the reliability and security of the electricity supply. Smart Meters play a critical role by automatically collecting energy data and transmitting it for processing and decision-making, thereby supporting the efficient operation of Smart Grids. However, relying solely on Cloud Computing for data pre-processing in Smart Grids can lead to increased response times due to the latency between cloud data centers and Smart Meters. To mitigate this, we propose FOCCA (Fog–Cloud Continuum Architecture) to enhance data control in Smart Grids. FOCCA employs the Q-balance algorithm, a neural network-based load-balancing approach, to manage computational resources at the edge, significantly reducing service response times. Q-balance accurately estimates the time required for computational resources to process requests and balances the load across available resources, thereby minimizing average response times. Experimental evaluations demonstrated that Q-balance, integrated within FOCCA, outperformed traditional load balancing algorithms like Min-Load and Round-robin, reducing average response times by up to 8.1 seconds for fog machines and 16.2 seconds for cloud machines.
{"title":"FOCCA: Fog–cloud continuum architecture for data imputation and load balancing in Smart Grids","authors":"Matheus T.M. Barbosa ,&nbsp;Eric B.C. Barros ,&nbsp;Vinícius F.S. Mota ,&nbsp;Dionisio M. Leite Filho ,&nbsp;Leobino N. Sampaio ,&nbsp;Bruno T. Kuehne ,&nbsp;Bruno G. Batista ,&nbsp;Damla Turgut ,&nbsp;Maycon L.M. Peixoto","doi":"10.1016/j.comnet.2024.111031","DOIUrl":"10.1016/j.comnet.2024.111031","url":null,"abstract":"<div><div>A Smart Grid operates as an advanced electricity network that leverages digital communications technology to detect and respond to local changes in usage, generation, and system conditions in near-real-time. This capability enables two-way communication between utilities and customers, integrating renewable energy sources and energy storage systems to enhance energy efficiency. The primary objective of a Smart Grid is to optimize resource usage, reduce energy waste and costs, and improve the reliability and security of the electricity supply. Smart Meters play a critical role by automatically collecting energy data and transmitting it for processing and decision-making, thereby supporting the efficient operation of Smart Grids. However, relying solely on Cloud Computing for data pre-processing in Smart Grids can lead to increased response times due to the latency between cloud data centers and Smart Meters. To mitigate this, we proposed FOCCA (Fog–Cloud Continuum Architecture) to enhance data control in Smart Grids. FOCCA employs the Q-balance algorithm, a neural network-based load-balancing approach, to manage computational resources at the edge, significantly reducing service response times. Q-balance accurately estimates the time required for computational resources to process requests and balances the load across available resources, thereby minimizing average response times. 
Experimental evaluations demonstrated that Q-balance, integrated within FOCCA, outperformed traditional load balancing algorithms like Min-Load and Round-robin, reducing average response times by up to 8.1 seconds for fog machines and 16.2 seconds for cloud machines.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111031"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143178145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
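The dispatching idea behind Q-balance, as the abstract describes it, is to estimate each resource's time to finish a new request and send the request to the minimum. A minimal sketch of that greedy minimum-estimated-response dispatch, where a simple queue-length over service-rate model stands in for the paper's trained neural-network estimator and the node names, queue depths, and rates are hypothetical:

```python
def estimated_response(queue_len, service_rate, request_cost):
    """Estimated completion time: drain the current queue, then serve the request."""
    return (queue_len + request_cost) / service_rate

def dispatch(nodes, request_cost):
    """Send the request to the node with the lowest estimated response time."""
    best = min(
        nodes,
        key=lambda name: estimated_response(
            nodes[name]["queue"], nodes[name]["rate"], request_cost
        ),
    )
    nodes[best]["queue"] += request_cost
    return best

# Hypothetical fog/cloud pool: pending work (queue) and service rate per node.
nodes = {
    "fog-1":   {"queue": 4.0, "rate": 2.0},
    "fog-2":   {"queue": 9.0, "rate": 2.0},
    "cloud-1": {"queue": 1.0, "rate": 1.5},
}
assignments = [dispatch(nodes, request_cost=1.0) for _ in range(4)]
print(assignments)
```

Because each dispatch updates the chosen node's queue, the requests spread across the continuum as nodes fill up rather than all landing on the initially fastest one; in FOCCA the completion-time estimate comes from a learned model rather than this linear formula.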
Machine learning-based co-resident attack detection for 5G clouded environments
IF 4.4 2区 计算机科学 Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-02-01 DOI: 10.1016/j.comnet.2024.111032
MeiYan Jin , HongBo Tang , Hang Qiu , Jie Yang
The cloudification of fifth-generation (5G) networks enhances flexibility and scalability while simultaneously introducing new security challenges, especially co-resident threats. This type of attack exploits the virtualization environment, allowing attackers to deploy malicious Virtual Machines (VMs) on the same physical host as critical 5G network element VMs, thereby initiating an attack. Existing techniques for improving isolation and access control are costly, while methods that detect abnormal VM behavior have gained research attention. However, most existing methods rely on static features of VMs and fail to effectively capture the hidden behaviors of attackers, leading to low classification and detection accuracy, as well as a higher likelihood of misclassification. In this paper, we propose a co-resident attack detection method based on behavioral feature vectors and machine learning. The method constructs behavioral feature vectors by integrating attackers’ stealthy behavior patterns and applies K-means clustering for user classification and labeling, followed by manual verification and adjustment. A Random Forest (RF) algorithm optimized with Bayesian techniques is then employed for attack detection. Experimental results on the Microsoft Azure dataset demonstrate that this method outperforms static feature-based approaches, achieving an accuracy of 99.48% and significantly enhancing the detection of potential attackers. Future work could consider integrating this method into a broader 5G security framework to adapt to the ever-evolving threat environment, further enhancing the security and reliability of 5G networks.
{"title":"Machine learning-based co-resident attack detection for 5G clouded environments","authors":"MeiYan Jin ,&nbsp;HongBo Tang ,&nbsp;Hang Qiu ,&nbsp;Jie Yang","doi":"10.1016/j.comnet.2024.111032","DOIUrl":"10.1016/j.comnet.2024.111032","url":null,"abstract":"<div><div>The cloudification of fifth-generation (5G) networks enhances flexibility and scalability while simultaneously introducing new security challenges, especially co-resident threats. This type of attack exploits the virtualization environment, allowing attackers to deploy malicious Virtual Machines (VMs) on the same physical host as critical 5G network element VMs, thereby initiating an attack. Existing techniques for improving isolation and access control are costly, while methods that detect abnormal VM behavior have gained research attention. However, most existing methods rely on static features of VMs and fail to effectively capture the hidden behaviors of attackers, leading to low classification and detection accuracy, as well as a higher likelihood of misclassification. In this paper, we propose a co-resident attack detection method based on behavioral feature vectors and machine learning. The method constructs behavioral feature vectors by integrating attackers’ stealthy behavior patterns and applies K-means clustering for user classification and labeling, followed by manual verification and adjustment. A Random Forest (RF) algorithm optimized with Bayesian techniques is then employed for attack detection. Experimental results on the Microsoft Azure dataset demonstrate that this method outperforms static feature-based approaches, achieving an accuracy of 99.48% and significantly enhancing the detection of potential attackers. 
Future work could consider integrating this method into a broader 5G security framework to adapt to the ever-evolving threat environment, further enhancing the security and reliability of 5G networks.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111032"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143178149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
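The two-stage pipeline described above (unsupervised labeling of users, then supervised attack detection) can be sketched in miniature. In this illustration a tiny 1-D k-means stands in for the paper's K-means step and nearest-centroid classification stands in for the Bayesian-optimized Random Forest; the single behavioral feature and its values are invented, not taken from the Azure dataset:

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means; returns the sorted centroids (assumes k >= 2)."""
    data = sorted(values)
    # Spread the initial centroids across the sorted value range.
    centroids = [data[i * (len(data) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Hypothetical behavioral feature per VM: rate of co-location probing events.
probe_rates = [0.1, 0.2, 0.15, 3.0, 2.8, 0.05, 3.2]
benign_c, attack_c = kmeans_1d(probe_rates, k=2)

def classify(rate):
    """Label a new VM by its nearest centroid (high-probe cluster = attacker)."""
    return "attacker" if abs(rate - attack_c) < abs(rate - benign_c) else "benign"

print(classify(2.5), classify(0.3))
```

The clustering step plays the role of producing training labels (followed, in the paper, by manual verification and adjustment); a real deployment would then train the Random Forest on the full behavioral feature vectors rather than classifying by centroid distance.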