
Latest Publications in Computer Networks

NeighborGeo: IP geolocation based on neighbors
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.110896
Xinye Wang, Dong Zhao, Xinran Liu, Zhaoxin Zhang, Tianzi Zhao
IP geolocation is crucial in fields such as cybersecurity, e-commerce, and social media. Current mainstream graph neural network methods have advanced localization accuracy by reframing the IP geolocation task as a node regression problem within an attribute graph, leveraging features to model the connectivity between nodes. However, in practical applications, landmarks are often scattered, irregular, and susceptible to outliers, which limits accuracy due to the unreliability of landmark selection and relationship learning. To address these challenges, this paper introduces a novel IP geolocation model based on graph structure learning, termed NeighborGeo. This model employs reparameterization and supervised contrastive learning to precisely capture and selectively reinforce specific neighbor relationships between nodes in order to optimize structural representations. By accurately capturing and exploiting these neighbor relationships, the model produces reliable location predictions. Experimental results demonstrate that, on open-source datasets from New York, Los Angeles, and Shanghai, NeighborGeo achieves significantly higher localization accuracy compared to existing methods, particularly in scenarios with unevenly distributed landmarks.
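As a rough illustration of one ingredient named in the abstract, the sketch below implements a plain supervised contrastive loss over node embeddings in NumPy: nodes sharing a label are pulled together, all other nodes act as negatives. The function name, the region-label grouping, and the temperature value are illustrative assumptions, not the authors' NeighborGeo implementation.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Toy supervised contrastive loss over node embeddings.

    Nodes sharing a label (e.g. the same coarse geographic region) are
    treated as positives and pulled together; everything else is a negative.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature          # pairwise scaled cosine similarity
    n = len(labels)
    total, terms = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        # log-sum-exp over all other nodes as the contrastive denominator
        others = np.delete(sim[i], i)
        log_denom = np.log(np.exp(others - others.max()).sum()) + others.max()
        for j in positives:
            total += -(sim[i, j] - log_denom)
            terms += 1
    return total / max(terms, 1)

# Example: 4 landmark nodes, two coarse regions
emb = np.random.randn(4, 16)
print(supervised_contrastive_loss(emb, labels=[0, 0, 1, 1]))
```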
{"title":"NeighborGeo: IP geolocation based on neighbors","authors":"Xinye Wang ,&nbsp;Dong Zhao ,&nbsp;Xinran Liu ,&nbsp;Zhaoxin Zhang ,&nbsp;Tianzi Zhao","doi":"10.1016/j.comnet.2024.110896","DOIUrl":"10.1016/j.comnet.2024.110896","url":null,"abstract":"<div><div>IP geolocation is crucial in fields such as cybersecurity, e-commerce, and social media. Current mainstream graph neural network methods have advanced localization accuracy by reframing the IP geolocation task as a node regression problem within an attribute graph, leveraging features to model the connectivity between nodes. However, in practical applications, landmarks are often scattered, irregular, and susceptible to outliers, which limits their accuracy due to the unreliability of landmark selection and relationship learning. To address these challenges, this paper introduces a novel IP geolocation model based on graph structure learning, termed NeighborGeo. This model employs reparameterization and supervised contrastive learning to precisely capture and selectively reinforce specific neighbor relationships between nodes in order to optimize structural representations. By accurately capturing and utilizing neighbors, this model achieves accurate predictions. Experimental results demonstrate that, on open-source datasets from New York, Los Angeles, and Shanghai, NeighborGeo achieves significantly higher localization accuracy compared to existing methods, particularly in scenarios with unevenly distributed landmarks.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 110896"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AERO: An adaptive and efficient routing for off-chain payment channel networks
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.111009
Longxia Huang, Changzhi Huo, Chengzhi Ge, Mengmeng Yang
Blockchain technology faces significant scalability challenges, characterized by low throughput and high transaction fees. Off-chain payment channel networks offer a promising solution, enabling faster transaction processing by moving transactions off the main blockchain. While existing research has primarily focused on enhancing instantaneous throughput, it often overlooks the critical issue of fund distribution imbalance at either end of a channel after transactions complete. This imbalance can negatively impact subsequent transactions, leading to reduced long-term throughput. Furthermore, temporary insufficiencies in channel balances may cause transaction requests to fail, further hindering overall payment channel network (PCN) performance. To address these limitations, this paper introduces AERO, an adaptive and efficient routing scheme that leverages a balance coefficient to assess fund availability within channels. AERO facilitates optimal transaction path selection while incorporating probabilistic measures to evaluate channel transaction capacity, ensuring adaptive routing with minimal transaction losses. Additionally, the proposed transaction scheduling algorithm in AERO incorporates a waiting queue at the transaction node, executing transactions only when the channel’s capacity meets predefined requirements. Simulation results show that, under the same network environment, AERO effectively maintains a throughput of approximately 70 even as transaction volumes rapidly increase. Moreover, AERO demonstrates notable cost efficiency, with transaction fees exceeding those of competing schemes by at least 5% in the Lightning topology and 25% in the Ripple topology.
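The abstract does not give the exact form of the balance coefficient, so the sketch below assumes a simple definition (forward balance divided by channel capacity) and uses it to pick the candidate path with the healthiest bottleneck channel. All names and numbers are hypothetical, not AERO's actual routing logic.

```python
def balance_coefficient(fwd_balance, capacity):
    """Assumed form: share of channel capacity available in the forwarding
    direction; 0.5 means perfectly balanced, 0 means depleted."""
    return fwd_balance / capacity if capacity > 0 else 0.0

def pick_path(paths, amount):
    """Choose the candidate path whose worst (bottleneck) channel is best
    able to carry `amount`, preferring well-balanced channels."""
    def path_score(path):
        scores = []
        for fwd, cap in path:          # (forward balance, total capacity) per hop
            if fwd < amount:           # this hop cannot carry the payment at all
                return -1.0
            scores.append(balance_coefficient(fwd, cap))
        return min(scores)             # bottleneck balance coefficient
    return max(paths, key=path_score)

# Two candidate 2-hop paths for a payment of 10 units
paths = [
    [(30, 100), (12, 40)],   # second hop nearly depleted after this payment
    [(60, 100), (25, 40)],
]
print(pick_path(paths, amount=10))
```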
{"title":"AERO: An adaptive and efficient routing for off-chain payment channel networks","authors":"Longxia Huang ,&nbsp;Changzhi Huo ,&nbsp;Chengzhi Ge ,&nbsp;Mengmeng Yang","doi":"10.1016/j.comnet.2024.111009","DOIUrl":"10.1016/j.comnet.2024.111009","url":null,"abstract":"<div><div>Blockchain technology faces significant scalability challenges, characterized by low throughput and high transaction fees. Off-chain payment channel networks offer a promising solution by enabling faster transaction processing by offloading transactions away from the main blockchain. While existing research has primarily focused on enhancing instantaneous throughput, it often overlooks the critical issue of fund distribution imbalance at either end of the channel following transactions. This imbalance can negatively impact subsequent transactions, leading to reduced long-term throughput. Furthermore, temporary insufficiencies in channel balances may cause transaction requests to fail, further hindering overall payment channel network (PCN) performance. To address these limitations, this paper introduces an adaptive and efficient routing scheme AERO that leverages a balance coefficient to assess fund availability within channels. AERO facilitates optimal transaction path selection while incorporating probabilistic measures to evaluate channel transaction capacity, ensuring adaptive routing with minimal transaction losses and enhancements. Additionally, the proposed transaction scheduling algorithm in AERO incorporates a waiting queue at the transaction node, executing transactions only when the channel’s capacity meets predefined requirements. Simulation results show that under the same network environment, AERO effectively maintains a throughput of approximately 70 even as transaction volumes rapidly increase. Moreover, AERO demonstrates notable cost efficiency, with transaction fees exceeding those of competing schemes by at least 5% in the Lightning topology and 25% in the Ripple topology.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 111009"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A survey on encrypted network traffic: A comprehensive survey of identification/classification techniques, challenges, and future directions
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.110984
Adit Sharma, Arash Habibi Lashkari
Encrypted traffic detection and classification is a critical domain in network security, increasingly essential in an era of pervasive encryption. This survey paper delves into integrating advanced Machine Learning (ML) and Deep Learning (DL) techniques to address the challenges of robust encryption methods and dynamic network behaviors. Despite notable advancements, there remains a substantial gap in the operational application of these technologies, often constrained by scalability, efficiency, and adaptability to varied encryption standards. We critically review existing methodologies from 7 surveys and 82 related technical papers, highlight the shortcomings, and propose future research directions. Our analysis underscores the need to develop innovative, resource-efficient models that seamlessly adapt to new threats and encryption techniques without compromising performance. Additionally, we advocate for creating comprehensive datasets that merge encrypted and non-encrypted traffic to enhance model training and testing. This survey maps out the trajectory of recent developments and charts a course for future research that could significantly enhance encrypted traffic management and security capabilities.
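As a minimal, hedged illustration of the ML-based classification techniques the survey covers, the snippet below trains a random-forest classifier on hand-made flow-level features. The feature set, labels, and values are toy assumptions for illustration only, not taken from any dataset or method discussed in the survey.

```python
from sklearn.ensemble import RandomForestClassifier
import numpy as np

# Hypothetical flow-level features a real pipeline would extract from packet
# captures: mean packet size, flow duration (s), packets/s, mean inter-arrival (ms)
X = np.array([
    [1200, 12.0, 300,  3.2],   # e.g. video streaming flow
    [  90,  0.4,  15, 27.0],   # e.g. short encrypted lookup
    [ 640,  5.0, 120,  8.1],
    [ 100,  0.6,  10, 31.0],
])
y = ["streaming", "lookup", "streaming", "lookup"]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[900, 8.0, 200, 5.0]]))   # classify an unseen flow
```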
{"title":"A survey on encrypted network traffic: A comprehensive survey of identification/classification techniques, challenges, and future directions","authors":"Adit Sharma,&nbsp;Arash Habibi Lashkari","doi":"10.1016/j.comnet.2024.110984","DOIUrl":"10.1016/j.comnet.2024.110984","url":null,"abstract":"<div><div>Encrypted traffic detection and classification is a critical domain in network security, increasingly essential in an era of pervasive encryption. This survey paper delves into integrating advanced Machine Learning (ML) and Deep Learning (DL) techniques to address the challenges of robust encryption methods and dynamic network behaviors. Despite notable advancements, there remains a substantial gap in the operational application of these technologies, often constrained by scalability, efficiency, and adaptability to varied encryption standards. We critically review existing methodologies from 7 surveys and 82 related technical papers, highlight the shortcomings, and propose future research directions. Our analysis underscores the need to develop innovative, resource-efficient models that seamlessly adapt to new threats and encryption techniques without compromising performance. Additionally, we advocate for creating comprehensive datasets that merge encrypted and non-encrypted traffic to enhance model training and testing. This survey maps out the trajectory of recent developments and charts a course for future research that could significantly enhance encrypted traffic management and security capabilities.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 110984"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Probe-Optimizer: Discovering important nodes for proactive in-band network telemetry to achieve better probe orchestration
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.110935
Deyu Zhao, Guang Cheng, Xuan Chen, Yuyu Zhao, Wei Zhang, Lu Lu, Siyuan Zhou, Yuexia Fu
By embedding the state data maintained by the programmable data plane into additional customizable probes, proactive in-band network telemetry (INT) can easily achieve flexible, full-coverage and fine-grained network measurement. However, a significant portion of these probes are invalid, failing to capture meaningful network event information, and instead increasing bandwidth occupancy as well as communication overhead between the control plane and the data plane. Furthermore, these invalid probes exacerbate controller overhead, forcing resource-limited CPUs to perform a large amount of meaningless computation and analysis. In this paper, we propose Probe-Optimizer, a novel framework tailored for proactive INT, which can reduce the introduction of invalid probes to comprehensively lower the various telemetry overheads mentioned above. Technically, Probe-Optimizer assigns a unique importance to each node in the telemetry scenario. The importance is significantly related to the probability of network events occurring, which can be used to select important nodes worth monitoring in the topology over a period of time. Then, Probe-Optimizer generates a dedicated set of probe paths for important nodes and another set for the remaining nodes/links, customizing a more appropriate probe frequency for each probe path. Extensive evaluations on both random and FatTree topologies with different scales are conducted. The results show that Probe-Optimizer introduces significantly fewer invalid probes. Benefiting from this, for topologies with more than 200 nodes, compared to state-of-the-art proactive INT methods, Probe-Optimizer achieves a higher proportion of probes carrying network events and at least 13%, 42%, and 26% lower communication overhead, CPU usage, and average bandwidth occupancy, respectively.
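A minimal sketch of the general idea, assuming node importance is proxied by recent event counts: the top-ranked nodes get a dedicated, faster probe schedule than the rest. The function, the 20% cutoff, and the two probe periods are illustrative choices, not Probe-Optimizer's actual scoring or orchestration logic.

```python
def plan_probes(event_counts, window_s, top_fraction=0.2,
                fast_period_s=1.0, slow_period_s=10.0):
    """Rank nodes by how often they produced network events in the last
    window, mark the top fraction as 'important', and give their probe
    paths a higher probing frequency than the remaining nodes/links."""
    ranked = sorted(event_counts, key=event_counts.get, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    important = set(ranked[:cutoff])
    plan = {}
    for node in ranked:
        plan[node] = {
            "important": node in important,
            "event_rate": event_counts[node] / window_s,   # events per second
            "probe_period_s": fast_period_s if node in important else slow_period_s,
        }
    return plan

counts = {"s1": 42, "s2": 3, "s3": 17, "s4": 0, "s5": 8}
for node, cfg in plan_probes(counts, window_s=60).items():
    print(node, cfg)
```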
{"title":"Probe-Optimizer: Discovering important nodes for proactive in-band network telemetry to achieve better probe orchestration","authors":"Deyu Zhao ,&nbsp;Guang Cheng ,&nbsp;Xuan Chen ,&nbsp;Yuyu Zhao ,&nbsp;Wei Zhang ,&nbsp;Lu Lu ,&nbsp;Siyuan Zhou ,&nbsp;Yuexia Fu","doi":"10.1016/j.comnet.2024.110935","DOIUrl":"10.1016/j.comnet.2024.110935","url":null,"abstract":"<div><div>By embedding the state data maintained by the programmable data plane into additional customizable probes, proactive in-band network telemetry (INT) can easily achieve flexible, full-coverage and fine-grained network measurement. However, a significant portion of these probes are invalid, failing to capture meaningful network event information, and instead increasing bandwidth occupancy as well as communication overhead between the control plane and the data plane. Furthermore, these invalid probes exacerbate controller overhead, forcing resource-limited CPUs to perform a large amount of meaningless computation and analysis. In this paper, we propose Probe-Optimizer, a novel framework tailored for proactive INT, which can reduce the introduction of invalid probes to comprehensively lower the various telemetry overheads mentioned above. Technically, Probe-Optimizer assigns a unique importance to each node in the telemetry scenario. The importance is significantly related to the probability of network events occurring, which can be used to select important nodes worth monitoring in the topology over a period of time. Then, Probe-Optimizer generates a dedicated set of probe paths for important nodes and another set for the remaining nodes/links, customizing a more appropriate probe frequency for each probe path. Extensive evaluations on both random and FatTree topologies with different scales are conducted. The results show that Probe-Optimizer introduces significantly fewer invalid probes. Benefiting from this, for the topology with a size of more than 200 nodes, compared to the state-of-art proactive INT methods, Probe-Optimizer achieves a higher proportion of probes carrying network events and at least 13%, 42%, and 26% lower communication overhead, CPU usage, and average bandwidth occupancy, respectively.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 110935"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Using spanners to improve network performance
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.110976
Guy Rozenberg, Michael Segal
In this paper we introduce a new, minimum-cut-based spanner algorithm with a twofold goal: (a) to decrease the number of active links in the network and (b) to maintain the ability of the SDN (Software-Defined Networking) controller to perform load balancing. The proposed spanner concept can also be used to reduce the running time of the centralized SDN routing algorithm. In addition, we show how to maintain the spanner under dynamic link insertion, deletion, and weight changes. Our solution is validated through analysis and simulations that show the superiority of our approach in many cases.
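For intuition, the snippet below builds a textbook greedy t-spanner with networkx. It is not the paper's minimum-cut-based construction or its dynamic maintenance procedure, only a generic baseline showing how a spanner prunes active links while bounding path stretch.

```python
import networkx as nx

def greedy_spanner(graph, t=2.0):
    """Classic greedy t-spanner: keep an edge only if the spanner does not
    already offer a path of length at most t times the edge weight."""
    spanner = nx.Graph()
    spanner.add_nodes_from(graph.nodes)
    for u, v, w in sorted(graph.edges(data="weight"), key=lambda e: e[2]):
        try:
            d = nx.shortest_path_length(spanner, u, v, weight="weight")
        except nx.NetworkXNoPath:
            d = float("inf")
        if d > t * w:
            spanner.add_edge(u, v, weight=w)   # edge is needed to keep stretch <= t
    return spanner

g = nx.gnm_random_graph(20, 60, seed=1)
for u, v in g.edges:
    g[u][v]["weight"] = 1.0
s = greedy_spanner(g, t=2.0)
print(f"active links: {s.number_of_edges()} of {g.number_of_edges()}")
```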
{"title":"Using spanners to improve network performance","authors":"Guy Rozenberg,&nbsp;Michael Segal","doi":"10.1016/j.comnet.2024.110976","DOIUrl":"10.1016/j.comnet.2024.110976","url":null,"abstract":"<div><div>In this paper we introduce a new, minimum-cuts based spanner algorithm, when the goal is twofold: (a) to decrease the number of active links in the network and (b) to maintain the ability of the SDN (Software-Defined Networking) controller to perform load balancing. The proposed spanner concept also can be used in order to reduce the running time of the SDN centralized routing algorithm. In addition, we show how to maintain the spanner under dynamic link insertion, deletion and changed weight. The validation of our solution is made through the analysis and simulation that show the superiority of our approach in many cases.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 110976"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A comparative measurement study of cross-layer 5G performance under different mobility scenarios
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.110952
Jiahai Hu, Lin Wang, Jing Wu, Qiangyu Pei, Fangming Liu, Bo Li
The 5G technology is expected to revolutionize various applications with stringent latency and throughput requirements, such as augmented reality and cloud gaming. Despite the rapid 5G deployment, it is still a puzzle whether current commercial 5G networks can meet the strict requirements and deliver the expected quality of experience (QoE) of these applications. Especially in mobile scenarios, as user mobility (e.g., walking and driving) plays a critical role in both network performance and application QoE, providing stable and continuous high performance becomes more challenging. To solve this puzzle, in this paper, we present a comprehensive cross-layer measurement study of current commercial 5G networks under five mobility scenarios typically seen in our daily lives. Specifically, under these mobility scenarios, we cover (1) the impact of physical layer metrics on network performance, (2) general network performance at the network layer, (3) comparison of four congestion control algorithms at the transport layer, and (4) application QoE at the application layer. Our measurement results show that the achievable network performance and application QoE under current commercial 5G networks fall behind expectations. We further reveal some insights that could be leveraged to improve the QoE of these applications under mobility scenarios.
{"title":"A comparative measurement study of cross-layer 5G performance under different mobility scenarios","authors":"Jiahai Hu ,&nbsp;Lin Wang ,&nbsp;Jing Wu ,&nbsp;Qiangyu Pei ,&nbsp;Fangming Liu ,&nbsp;Bo Li","doi":"10.1016/j.comnet.2024.110952","DOIUrl":"10.1016/j.comnet.2024.110952","url":null,"abstract":"<div><div>The 5G technology is expected to revolutionize various applications with stringent latency and throughput requirements, such as augmented reality and cloud gaming. Despite the rapid 5G deployment, it is still a puzzle whether current commercial 5G networks can meet the strict requirements and deliver the expected quality of experience (QoE) of these applications. Especially in mobile scenarios, as user mobility (e.g., walking and driving) plays a critical role in both network performance and application QoE, it becomes more challenging to provide high performance stably and continuously. To solve this puzzle, in this paper, we present a comprehensive cross-layer measurement study of current commercial 5G networks under five mobility scenarios typically seen in our daily lives. Specifically, under these mobility scenarios, we cover (1) the impact of physical layer metrics on network performance, (2) general network performance at the network layer, (3) comparison of four congestion control algorithms at the transport layer, and (4) application QoE at the application layer. Our measurement results show that the achievable network performance and application QoE under current commercial 5G networks falls behind expectations. We further reveal some insights that could be leveraged to improve the QoE of these applications under mobility scenarios.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 110952"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Joint service caching, computation offloading and resource allocation for dual-layer aerial Internet of Things
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.110974
Yue Zhang, Zhenyu Na, Zihao Wen, Arumugam Nallanathan, Weidang Lu
The exponential growth of Internet of Things devices has triggered an unprecedented surge in mobile data traffic, posing significant challenges for latency-sensitive services. Mobile Edge Computing (MEC) has emerged as a promising solution by decentralizing computation and caching resources to the network edge. However, traditional terrestrial MEC systems struggle with limited coverage and flexibility. To overcome these issues, this paper proposes a novel dual-layer aerial MEC architecture, where multiple Unmanned Aerial Vehicles (UAVs) provide computation and caching support for resource-constrained terminal devices, and a high-altitude platform serves as a central hub for long-term service storage and retrieval. The system aims to minimize total latency by jointly optimizing service caching, task offloading, resource allocation, and 3D UAV deployment, formulated as a mixed-integer nonlinear programming problem and efficiently solved using an iterative algorithm based on linear relaxation and successive convex approximation. Simulation results demonstrate that the proposed scheme converges quickly across different scales and outperforms all baselines with minimal runtime increase, reducing total latency by 42.86% compared to the random UAV deployment.
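A toy latency model in the spirit of the offloading trade-off described above: uplink transmission plus edge execution, with an assumed extra fetch delay when the service is not cached on the UAV. All parameter values and the fetch-delay term are illustrative assumptions, not the paper's system model or its optimization procedure.

```python
def local_latency(cycles, f_local_hz):
    """Execute the task entirely on the terminal device."""
    return cycles / f_local_hz

def offload_latency(data_bits, rate_bps, cycles, f_uav_hz, cached=True,
                    fetch_latency_s=0.5):
    """Uplink transmission plus UAV-edge execution; if the required service is
    not cached on the UAV, add an assumed fetch delay from the high-altitude
    platform (the 0.5 s value is purely illustrative)."""
    t = data_bits / rate_bps + cycles / f_uav_hz
    if not cached:
        t += fetch_latency_s
    return t

# One task: 2 Mbit of input data, 1e9 CPU cycles of work
task = {"data_bits": 2e6, "cycles": 1e9}
t_local = local_latency(task["cycles"], f_local_hz=1e9)
t_edge = offload_latency(task["data_bits"], rate_bps=20e6,
                         cycles=task["cycles"], f_uav_hz=10e9, cached=True)
print(f"local: {t_local:.3f} s   offload: {t_edge:.3f} s")
```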
{"title":"Joint service caching, computation offloading and resource allocation for dual-layer aerial Internet of Things","authors":"Yue Zhang ,&nbsp;Zhenyu Na ,&nbsp;Zihao Wen ,&nbsp;Arumugam Nallanathan ,&nbsp;Weidang Lu","doi":"10.1016/j.comnet.2024.110974","DOIUrl":"10.1016/j.comnet.2024.110974","url":null,"abstract":"<div><div>The exponential growth of Internet of Things devices has triggered an unprecedented surge in mobile data traffic, posing significant challenges for latency-sensitive services. Mobile Edge Computing (MEC) has emerged as a promising solution by decentralizing computation and caching resources to the network edge. However, traditional terrestrial MEC systems struggle with limited coverage and flexibility. To overcome these issues, this paper proposes a novel dual-layer aerial MEC architecture, where multiple Unmanned Aerial Vehicles (UAVs) provide computation and caching support for resource-constrained terminal devices, and a high-altitude platform serves as a central hub for long-term service storage and retrieval. The system aims to minimize total latency by jointly optimizing service caching, task offloading, resource allocation, and 3D UAV deployment, formulated as a mixed-integer nonlinear programming problem and efficiently solved using an iterative algorithm based on linear relaxation and successive convex approximation. Simulation results demonstrate that the proposed scheme converges quickly across different scales and outperforms all baselines with minimal runtime increase, reducing total latency by 42.86% compared to the random UAV deployment.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 110974"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Minimizing active nodes in MEC environments: A distributed learning-driven framework for application placement
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.111008
Claudia Torres-Pérez, Estefanía Coronado, Cristina Cervelló-Pastor, Javier Palomares, Estela Carmona-Cejudo, Muhammad Shuaib Siddiqui
Application placement in Multi-Access Edge Computing (MEC) must adhere to service level agreements (SLAs), minimize energy consumption, and optimize metrics based on specific service requirements. In distributed MEC system environments, the placement problem also requires consideration of various types of applications with different entry distribution rates and requirements, and the incorporation of varying numbers of hosts to enable the development of a scalable system. One possible way to achieve these objectives is to minimize the number of active nodes in order to avoid resource fragmentation and unnecessary energy consumption. This paper presents a Distributed Deep Reinforcement Learning-based Capacity-Aware Application Placement (DDRL-CAAP) approach aimed at reducing the number of active nodes in a multi-MEC system scenario that is managed by several orchestrators. Internet of Things (IoT) and Extended Reality (XR) applications are considered in order to evaluate close-to-real-world environments via simulation and on a real testbed. The proposed design is scalable for different numbers of nodes, MEC systems, and vertical applications. The performance results show that DDRL-CAAP achieves an average improvement of 98.3% in inference time compared with the benchmark Integer Linear Programming (ILP) algorithm, and a mean reduction of 4.35% in power consumption compared with a Random Selection (RS) algorithm.
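As a point of reference for the objective (keeping as few hosts active as possible), the sketch below is a simple first-fit placement heuristic, not the DDRL-CAAP agent; application names, demands, and host capacities are made up for illustration.

```python
def first_fit_placement(apps, hosts):
    """Place each application on the first already-active host that still has
    capacity; only activate a new host when none fits. Minimizing the number
    of activated hosts is the same objective the learning-based placer targets."""
    active = []                          # each entry: [total capacity, remaining]
    placement = {}
    for app_id, demand in apps:
        for idx, slot in enumerate(active):
            if slot[1] >= demand:
                slot[1] -= demand
                placement[app_id] = idx
                break
        else:
            if not hosts:
                raise RuntimeError("out of hosts")
            cap = hosts.pop(0)           # activate a new host
            active.append([cap, cap - demand])
            placement[app_id] = len(active) - 1
    return placement, len(active)

apps = [("xr-1", 4), ("iot-1", 1), ("xr-2", 3), ("iot-2", 2)]
placement, active_nodes = first_fit_placement(apps, hosts=[8, 8, 8])
print(placement, "active nodes:", active_nodes)
```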
{"title":"Minimizing active nodes in MEC environments: A distributed learning-driven framework for application placement","authors":"Claudia Torres-Pérez ,&nbsp;Estefanía Coronado ,&nbsp;Cristina Cervelló-Pastor ,&nbsp;Javier Palomares ,&nbsp;Estela Carmona-Cejudo ,&nbsp;Muhammad Shuaib Siddiqui","doi":"10.1016/j.comnet.2024.111008","DOIUrl":"10.1016/j.comnet.2024.111008","url":null,"abstract":"<div><div>Application placement in Multi-Access Edge Computing (MEC) must adhere to service level agreements (SLAs), minimize energy consumption, and optimize metrics based on specific service requirements. In distributed MEC system environments, the placement problem also requires consideration of various types of applications with different entry distribution rates and requirements, and the incorporation of varying numbers of hosts to enable the development of a scalable system. One possible way to achieve these objectives is to minimize the number of active nodes in order to avoid resource fragmentation and unnecessary energy consumption. This paper presents a Distributed Deep Reinforcement Learning-based Capacity-Aware Application Placement (DDRL-CAAP) approach aimed at reducing the number of active nodes in a multi-MEC system scenario that is managed by several orchestrators. Internet of Things (IoT) and Extended Reality (XR) applications are considered in order to evaluate close-to-real-world environments via simulation and on a real testbed. The proposed design is scalable for different numbers of nodes, MEC systems, and vertical applications. The performance results show that DDRL-CAAP achieves an average improvement of 98.3% in inference time compared with the benchmark Integer Linear Programming (ILP) algorithm, and a mean reduction of 4.35% in power consumption compared with a Random Selection (RS) algorithm.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 111008"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhanced LR-FHSS receiver for headerless frame recovery in space–terrestrial integrated IoT networks
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.111018
Diego Maldonado, Leonardo S. Cardoso, Juan A. Fraire, Alexandre Guitton, Oana Iova, Megumi Kaneko, Hervé Rivano
Long-Range Frequency Hopping Spread Spectrum (LR-FHSS) is a recent IoT modulation technique designed for communication between low-power ground end-devices and Low-Earth Orbit (LEO) satellites. To successfully decode a frame, an LR-FHSS gateway must receive at least one header replica and a substantial portion of the payload fragments. However, the likelihood of LR-FHSS header loss increases with the number of concurrent transmissions. Moreover, Doppler effects (such as the Doppler shift and the Doppler rate) distort the signals the satellites receive. This paper investigates advanced receiver techniques for recovering LR-FHSS frames with lost headers characterized by significant Doppler effects. This paper’s main contribution is specifying and validating a novel LR-FHSS receiver model for space–terrestrial integrated IoT environments. Obtained simulation results prove that our enhanced LR-FHSS receiver can decode a significant portion of the missing frames, improving the overall throughput achievable by using the legacy LR-FHSS receiver.
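A small Monte Carlo sketch of the legacy decoding rule the abstract relies on: a frame is recovered only if at least one header replica and a sufficient share of payload fragments survive. The replica count, fragment count, and one-third fragment threshold are illustrative assumptions, not the paper's enhanced receiver.

```python
import random

def frame_decoded(header_replicas, payload_fragments, p_loss,
                  payload_fraction_needed=1 / 3):
    """Legacy receiver rule: decode only if >=1 header replica survives and
    enough payload fragments survive (threshold is a simplifying assumption)."""
    headers_ok = any(random.random() > p_loss for _ in range(header_replicas))
    fragments_ok = sum(random.random() > p_loss for _ in range(payload_fragments))
    return headers_ok and fragments_ok >= payload_fraction_needed * payload_fragments

def delivery_ratio(trials=20_000, **kw):
    return sum(frame_decoded(**kw) for _ in range(trials)) / trials

# Illustrative parameters: 3 header replicas, ~30 payload fragments per frame
for p in (0.2, 0.5, 0.8):
    print(f"fragment loss {p:.1f} -> delivery ratio "
          f"{delivery_ratio(header_replicas=3, payload_fragments=30, p_loss=p):.3f}")
```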
{"title":"Enhanced LR-FHSS receiver for headerless frame recovery in space–terrestrial integrated IoT networks","authors":"Diego Maldonado ,&nbsp;Leonardo S. Cardoso ,&nbsp;Juan A. Fraire ,&nbsp;Alexandre Guitton ,&nbsp;Oana Iova ,&nbsp;Megumi Kaneko ,&nbsp;Hervé Rivano","doi":"10.1016/j.comnet.2024.111018","DOIUrl":"10.1016/j.comnet.2024.111018","url":null,"abstract":"<div><div>Long-Range Frequency Hopping Spread Spectrum (LR-FHSS) is a recent IoT modulation technique designed for communication between low-power ground end-devices and Low-Earth Orbit (LEO) satellites. To successfully decode a frame, an LR-FHSS gateway must receive at least one header replica and a substantial portion of the payload fragments. However, the likelihood of LR-FHSS header loss increases with the number of concurrent transmissions. Moreover, Doppler effects (such as the Doppler shift and the Doppler rate) distort the signals the satellites receive. This paper investigates advanced receiver techniques for recovering LR-FHSS frames with lost headers characterized by significant Doppler effects. This paper’s main contribution is specifying and validating a novel LR-FHSS receiver model for space–terrestrial integrated IoT environments. Obtained simulation results prove that our enhanced LR-FHSS receiver can decode a significant portion of the missing frames, improving the overall throughput achievable by using the legacy LR-FHSS receiver.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 111018"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Two enhanced schemes for coordinated spatial reuse in IEEE 802.11be: Adaptive and distributed approaches
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2025.111060
Deqing Zhu, Lidong Wang, Genmei Pan, Shenji Luan
Coordinated spatial reuse (CSR) is a novel mechanism that has attracted significant discussion in the upcoming Wi-Fi standard. Once a sharing access point (AP) acquires a transmission opportunity (TXOP), it informs a neighboring AP, designated as a shared AP, of the maximum tolerable interference level. The shared AP then calculates its maximum transmit power accordingly, thereby reducing interference and ensuring successful parallel transmissions. However, the CSR has three drawbacks. Firstly, it lacks explicit criteria for selecting multiple shared APs, disregarding the cumulative interference that arises from multiple shared APs. Secondly, excessive signaling occurs due to the periodic update and exchange of received signal strength indicator (RSSI) information among all the APs. Thirdly, the area throughput may suffer due to the low signal to interference plus noise ratio (SINR) experienced by shared APs, as only the shared AP constrains transmit power while the sharing AP transmits at its maximum.
To address these drawbacks, we propose two enhanced schemes for CSR. The first is adaptive CSR (ACSR), which can easily assess whether a neighboring AP should participate in CSR and adaptively determine the desired number of shared APs. The final transmit powers determined for shared APs in ACSR ensure that both the sharing AP and the shared APs meet the SINR requirements, thereby enhancing the performance of CSR. Furthermore, we propose a distributed CSR (DCSR) scheme, which formulates area throughput as a convex optimization problem. In DCSR, each AP independently and concurrently solves its own local optimization problems, drastically reducing the signaling overhead for RSSI update and exchange while obtaining optimal transmit powers for all involved APs. The DCSR scheme can adaptively determine the optimal number of shared APs. The proposed DCSR effectively overcomes all the drawbacks of the original CSR.
Analysis results show that the ACSR and DCSR outperform the CSR, achieving 1.45 and 2.57 times higher area throughput, respectively, and increasing the number of successful parallel transmissions (NSPT) by 1.45 and 2.8 times, respectively.
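A back-of-the-envelope sketch of the power-capping rule described in the first paragraph of the abstract: the shared AP limits its transmit power so that, after path loss towards the sharing BSS's receiver, its signal stays below the announced tolerable interference level. All dB values, the 20 dBm cap, and the noise floor are invented for illustration, not values from the paper.

```python
import math

def max_shared_tx_power_dbm(tolerable_interference_dbm, path_loss_db,
                            tx_power_cap_dbm=20.0):
    """Highest power the shared AP may use so that, after the path loss towards
    the sharing BSS's receiver, its signal stays below the announced
    tolerable interference level."""
    return min(tolerable_interference_dbm + path_loss_db, tx_power_cap_dbm)

def sinr_db(rx_signal_dbm, interference_dbm, noise_dbm=-94.0):
    """SINR at a receiver with one dominant interferer, combined in linear scale."""
    lin = lambda dbm: 10 ** (dbm / 10.0)
    return 10 * math.log10(lin(rx_signal_dbm) /
                           (lin(interference_dbm) + lin(noise_dbm)))

# Sharing AP announces it tolerates -82 dBm of interference; the shared AP sees
# 95 dB of path loss towards the sharing BSS's receiver.
p_shared = max_shared_tx_power_dbm(-82.0, 95.0)
print("shared AP max TX power:", p_shared, "dBm")
# Resulting SINR on the shared AP's own link (illustrative numbers)
print("shared-link SINR:",
      round(sinr_db(rx_signal_dbm=p_shared - 70.0, interference_dbm=-80.0), 1), "dB")
```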
{"title":"Two enhanced schemes for coordinated spatial reuse in IEEE 802.11be: Adaptive and distributed approaches","authors":"Deqing Zhu ,&nbsp;Lidong Wang ,&nbsp;Genmei Pan ,&nbsp;Shenji Luan","doi":"10.1016/j.comnet.2025.111060","DOIUrl":"10.1016/j.comnet.2025.111060","url":null,"abstract":"<div><div>Coordinated spatial reuse (CSR) is a novel mechanism that has attracted significant discussion in the upcoming Wi-Fi standard. Once a sharing access point (AP) acquires a transmission opportunity (TXOP), it informs a neighboring AP, designated as a shared AP, of the maximum tolerable interference level. The shared AP then calculates its maximum transmit power accordingly, thereby reducing interference and ensuring successful parallel transmissions. However, the CSR has three drawbacks. Firstly, it lacks explicit criteria for selecting multiple shared APs, disregarding the cumulative interference that arises from multiple shared APs. Secondly, excessive signaling occurs due to the periodic update and exchange of received signal strength indicator (RSSI) information among all the APs. Thirdly, the area throughput may suffer due to the low signal to interference plus noise ratio (SINR) experienced by shared APs, as only the shared AP constrains transmit power while the sharing AP transmits at its maximum.</div><div>To address these drawbacks, we propose two enhanced schemes for CSR. The first is adaptive CSR (ACSR), which can easily assess whether a neighboring AP should participate in CSR and adaptively determine the desired number of shared APs. The final transmit powers determined for shared APs in ACSR ensure that both the sharing AP and the shared APs meet the SINR requirements, thereby enhancing the performance of CSR. Furthermore, we propose a distributed CSR (DCSR) scheme, which formulates area throughput as a convex optimization problem. In DCSR, each AP independently and concurrently solves its own local optimization problems, drastically reducing the signaling overhead for RSSI update and exchange while obtaining optimal transmit powers for all involved APs. The DCSR scheme can adaptively determine the optimal number of shared APs. The proposed DCSR effectively overcomes all the drawbacks of the original CSR.</div><div>Analysis results show that the ACSR and DCSR outperform the CSR, achieving 1.45 and 2.57 times higher area throughput, respectively, and increasing the number of successful parallel transmissions (NSPT) by 1.45 and 2.8 times, respectively.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111060"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143176536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0