With the growing demand for latency-sensitive applications in 5G networks, edge computing has emerged as a promising solution. By moving resources from the cloud to the network edge, it enables fast response and dynamic resource allocation based on real-time network information. Containers, known for their lightweight nature and ease of deployment, have been recognized as a valuable virtualization technology for service deployment. However, the prolonged startup time of containers can lead to long response times, particularly in edge computing scenarios characterized by long propagation times, frequent deployment, and migration. In this paper, we jointly consider the image caching, container assignment, and registry selection problems in an edge system. To the best of our knowledge, no existing work has taken all of these aspects into account. To address the problem, we propose a novel image caching strategy that employs partial caching, allowing local registries to cache either a least-functional or a complete version of an application image. In addition, the container assignment and registry selection problem is solved using an edge-based collaborative lazy-pulling algorithm. To evaluate the performance of our proposed algorithms, we conduct experiments with real-world app usage data and popular images in a testbed environment. The experimental results demonstrate that our algorithms outperform traditional greedy algorithms in terms of average user response time and cache hit rate.
{"title":"Edge Computing Management With Collaborative Lazy Pulling for Accelerated Container Startup","authors":"Chiao-Cheng Chen;Yao Chiang;Yu-Chieh Lee;Hung-Yu Wei","doi":"10.1109/TNSM.2024.3462408","DOIUrl":"10.1109/TNSM.2024.3462408","url":null,"abstract":"With the growing demand for latency-sensitive applications in 5G networks, edge computing has emerged as a promising solution. It enables instant response and dynamic resource allocation based on real-time network information by moving resources from the cloud to the network edge. Containers, known for their lightweight nature and ease of deployment, have been recognized as a valuable virtualization technology for service deployment. However, the prolonged startup time of containers can lead to long response time, particularly in edge computing scenarios characterized by long propagation time, frequent deployment, and migration. In this paper, we comprehensively consider image caching, container assignment, and registry selection problem in an edge system. To our best effort, there is no existing work that has taken all the above aspects into account. To address the problem, we propose a novel image caching strategy that employs partial caching, allowing local registries to cache either the least functional or complete version of application images. In addition, a container assignment and registry selection problem is solved by using an edge-based collaborative lazy pulling algorithm. To evaluate the performance of our proposed algorithms, we conduct experiments with real-world app usage data and popular images in a testbed environment. The experimental results demonstrate that our algorithms outperform traditional greedy algorithms in terms of average user response time and cache hit rate.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6437-6450"},"PeriodicalIF":4.7,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142264517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-16. DOI: 10.1109/TNSM.2024.3461875
Yifei Lu;Jingqi Li;Shuren Li;Chanying Huang
Communication overhead is a significant challenge in distributed deep learning (DDL) training, often hindering efficiency. While existing solutions such as gradient compression, compute/communication overlap, and layer-wise flow scheduling have been proposed, they are often coarse-grained and insufficient, especially under network congestion. These congestion-unaware methods can lead to long flow completion times, known as tail latency, resulting in extended training time. In this paper, we argue that packet loss tolerance methods can mitigate the tail latency issue without sacrificing training accuracy, with the tolerance bound varying across the layers of a DDL model. We introduce PLOT, a fine-grained packet loss tolerance algorithm, which reduces communication overhead by leveraging the layer-specific loss tolerance of the DNN model. PLOT employs a UDP-based transmission mechanism for gradient transfer, addressing the tail latency issue and maintaining training accuracy through packet loss tolerance. Our evaluations on both small-scale testbeds and large-scale simulations show that PLOT outperforms other congestion control algorithms, effectively reducing tail latency and DDL training time.
{"title":"A Fine-Grained Packet Loss Tolerance Transmission Algorithm for Communication Optimization in Distributed Deep Learning","authors":"Yifei Lu;Jingqi Li;Shuren Li;Chanying Huang","doi":"10.1109/TNSM.2024.3461875","DOIUrl":"https://doi.org/10.1109/TNSM.2024.3461875","url":null,"abstract":"Communication overhead is a significant challenge in distributed deep learning (DDL) training, often hindering efficiency. While existing solutions like gradient compression, compute/communication overlap, and layer-wise flow scheduling have been proposed, they are often coarse-grained and insufficient, especially under network congestion. These congestion-unaware methods can lead to long flow completion times, known as the tail latency, resulting in extended training time. In this paper, we argue that packet loss tolerance methods can mitigate the tail latency issue without sacrificing training accuracy, with the tolerance bound varying across different DDL model layers. We introduce PLOT, a fine-grained packet loss tolerance algorithm, which optimizes communication overhead by leveraging the layer-specific loss tolerance of the DNN model. PLOT employs a UDP-based transmission mechanism for gradient transfer, addressing the tail latency issue and maintaining training accuracy through packet loss tolerance. Our evaluations on both small-scale testbeds and large-scale simulations show that PLOT outperforms other congestion algorithms, effectively reducing tail latency and DDL training time.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6112-6125"},"PeriodicalIF":4.7,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142880296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-13. DOI: 10.1109/TNSM.2024.3460082
Samuel Kopmann;Martina Zitterbart
Network infrastructures are critical and, therefore, subject to harmful attacks against their operation and the availability of the services they provide. Detecting such attacks, especially in high-performance networks, is challenging with respect to detection rate, reaction time, and scalability. Attack detection becomes even more demanding in future networks, which face increasing data rates and flow counts. We thoroughly evaluate eMinD, an approach that scales well to high data rates and large numbers of data flows. eMinD investigates aggregated traffic data, i.e., it is not based on micro-flows and their inherent scalability problems. We evaluate eMinD with real-world traffic data, compare it to related work, and show that eMinD outperforms micro-flow-based approaches in terms of reaction time, scalability, and detection performance. We reduce the required state space by 99.97%. The average reaction time is reduced by 90%, while detection performance even increases, despite the high aggregation of arriving traffic. We further show the importance of micro-flow-overarching traffic features, e.g., IP address and port distributions, for detecting distributed network attacks such as DDoS attacks and port scans.
{"title":"Importance Analysis of Micro-Flow Independent Features for Detecting Distributed Network Attacks","authors":"Samuel Kopmann;Martina Zitterbart","doi":"10.1109/TNSM.2024.3460082","DOIUrl":"10.1109/TNSM.2024.3460082","url":null,"abstract":"Network infrastructures are critical and, therefore, subject to harmful attacks against their operation and the availability of their provided services. Detecting such attacks, especially in high-performance networks, is challenging considering the detection rate, reaction time, and scalability. Attack detection becomes even more demanding concerning networks of the future facing increasing data rates and flow counts. We thoroughly evaluate eMinD, an approach that scales well to high data rates and large amounts of data flows. eMinD investigates aggregated traffic data, i.e., it is not based on micro-flows and their inherent scalability problems. We evaluate eMinD with real-world traffic data, compare it to related work, and show that eMinD outperforms micro-flow-based approaches regarding the reaction time, scalability, and the detection performance. We reduce required state space by 99.97%. The average reaction time is reduced by 90%, while the detection performance is even increased, although highly aggregating arriving traffic. We further show the importance of micro-flow-overarching traffic features, e.g., IP address and port distributions, for detecting distributed network attacks, i.e., DDoS attacks and port scans.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"5947-5957"},"PeriodicalIF":4.7,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142264522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-12. DOI: 10.1109/TNSM.2024.3459634
Jun Liu;Paulo Renato da Costa Mendes;Andreas Wirsen;Daniel Görges
The development of 5G enables communication systems to satisfy the heterogeneous service requirements of novel applications. For instance, ultra-reliable low latency communication (uRLLC) is applicable to many safety-critical and latency-sensitive scenarios. Many research papers aim to convert the stringent reliability and latency requirements into a static data rate requirement. However, in most industrial scenarios, the communication traffic exhibits short-term/long-term dependency, burstiness, and non-stationarity. This makes it more challenging to obtain a tight upper bound for the rate requirement of uRLLC. In this work, we introduce a novel solution based on decentralized model predictive control (MPC), where the dynamic incoming communication traffic and the users’ quality of service (QoS) requirements are reformulated into an up-to-date data rate constraint. Under these assumptions, we consider a use case of the resource allocation problem for a single uRLLC network slice. The allocation task is solved with the successive convex approximation (SCA) algorithm for a more in-depth analysis. The simulation results show that the proposed algorithm can handle non-stationary communication traffic in real time and provides good performance with guaranteed delay and reliability requirements.
{"title":"MPC-Based 5G uRLLC Rate Calculation","authors":"Jun Liu;Paulo Renato da Costa Mendes;Andreas Wirsen;Daniel Görges","doi":"10.1109/TNSM.2024.3459634","DOIUrl":"10.1109/TNSM.2024.3459634","url":null,"abstract":"The development of 5G enables communication systems to satisfy heterogeneous service requirements of novel applications. For instance, ultra-reliable low latency communication (uRLLC) is applicable for many safety-critical and latency-sensitive scenarios. Many research papers aim to convert the stringent reliability and latency factors to a static data rate requirement. However, in most industrial scenarios, the communication traffic presents short-term/long-term dependency, burst, and non-stationary characteristics. This makes it more challenging to obtain a tight upper bound for the rate requirement of uRLLC. In this work, we introduce a novel solution based on decentralized model predictive control (MPC), where the dynamic incoming communication traffic and the users’ quality of service (QoS) requirements are reformulated into an up-to-date data rate constraint. Under such assumptions, we consider a use case of the resource allocation problem for a single uRLLC network slice. The allocation task is solved by the successive convex approximation (SCA) algorithm for a more in-depth analysis. The simulation results show that the proposed algorithm can deal with non-stationary communication traffic in real-time, as well as provide good performance with guaranteed delay and reliability requirements.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6770-6795"},"PeriodicalIF":4.7,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10679265","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-12. DOI: 10.1109/tnsm.2024.3459796
Md Ibrahim Ibne Alam, Anindo Mahmood, Prasun K. Dey, Murat Yuksel, Koushik Kar
{"title":"Meta-Peering: Automating ISP Peering Decision Process","authors":"Md Ibrahim Ibne Alam, Anindo Mahmood, Prasun K. Dey, Murat Yuksel, Koushik Kar","doi":"10.1109/tnsm.2024.3459796","DOIUrl":"https://doi.org/10.1109/tnsm.2024.3459796","url":null,"abstract":"","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"11 1","pages":""},"PeriodicalIF":5.3,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Next-generation offshore wind farms are increasingly adopting vendor-agnostic software-defined networking (SDN) to oversee their Industrial Internet of Things Edge (IIoT-Edge) networks. SDN-enabled IIoT-Edge networks are a promising solution for environments that demand high availability and consistent performance, such as offshore wind farm critical infrastructure monitoring, operation, and maintenance. Inevitably, these networks encounter stochastic failures such as random component malfunctions, software malfunctions, CPU overconsumption, and memory leaks. These stochastic failures result in intermittent network service interruptions, disrupting the real-time exchange of critical, latency-sensitive data essential for offshore wind farm operations. Given the criticality of data transfer in offshore wind farms, this paper investigates the dependability of SDN-enabled IIoT-Edge networks under the highlighted stochastic failures using a two-pronged approach: (i) observing the transient behavior on a proof-of-concept simulation testbed and (ii) quantitatively assessing the steady-state behavior using a probabilistic Homogeneous Continuous Time Markov Model (HCTMM) under varying failure and repair conditions. The transient behavior analysis finds that network throughput decreases during failures. After quantitatively analyzing 15 case scenarios with varying failure and repair combinations, steady-state availability ranged from 93% to 98%, approaching the industry-standard SLA of 99.999% and guaranteeing up to 3 years of uninterrupted network service.
{"title":"Investigating the Dependability of Software-Defined IIoT-Edge Networks for Next-Generation Offshore Wind Farms","authors":"Agrippina Mwangi;Nadine Kabbara;Patrick Coudray;Mikkel Gryning;Madeleine Gibescu","doi":"10.1109/TNSM.2024.3458447","DOIUrl":"10.1109/TNSM.2024.3458447","url":null,"abstract":"Next-generation offshore wind farms are increasingly adopting vendor-agnostic software-defined networking (SDN) to oversee their Industrial Internet of Things Edge (IIoT-Edge) networks. The SDN-enabled IIoT-Edge networks present a promising solution for high availability and consistent performance-demanding environments such as offshore wind farm critical infrastructure monitoring, operation, and maintenance. Inevitably, these networks encounter stochastic failures such as random component malfunctions, software malfunctions, CPU overconsumption, and memory leakages. These stochastic failures result in intermittent network service interruptions, disrupting the real-time exchange of critical, latency-sensitive data essential for offshore wind farm operations. Given the criticality of data transfer in offshore wind farms, this paper investigates the dependability of the SDN-enabled IIoT-Edge networks amid the highlighted stochastic failures using a two-pronged approach to: (i) observe the transient behavior using a proof-of-concept simulation testbed and (ii) quantitatively assess the steady-state behavior using a probabilistic Homogeneous Continuous Time Markov Model (HCTMM) under varying failure and repair conditions. The study finds that network throughput decreases during failures in the transient behavior analysis. After quantitatively analyzing 15 case scenarios with varying failure and repair combinations, steady-state availability ranged from 93% to 98%, nearing the industry-standard SLA of 99.999%, guaranteeing up to 3 years of uninterrupted network service.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6126-6139"},"PeriodicalIF":4.7,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10677450","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-11. DOI: 10.1109/TNSM.2024.3457858
Y A Joarder;Carol Fung
QUIC is a modern transport protocol that aims to improve Web connection performance and security; it is the transport layer for HTTP/3. QUIC offers numerous advantages over traditional transport layer protocols such as TCP and UDP, including reduced latency, improved congestion control, connection migration, and encryption by default. However, these benefits introduce new security and privacy challenges that need to be addressed, as cyber attackers can exploit weaknesses in the protocol. QUIC’s security and privacy issues have been largely unexplored, as existing research on QUIC primarily focuses on performance improvements. This survey paper addresses the knowledge gap in QUIC’s security and privacy challenges and proposes directions for future research to enhance them. Our comprehensive analysis covers QUIC’s history, architecture, core mechanisms (such as its cryptographic design and handshake process), security model, and threat landscape. We examine QUIC’s significant vulnerabilities, critical security and privacy attacks, emerging threats, advanced security and privacy challenges, and mitigation strategies. Furthermore, we outline future research directions to improve QUIC’s security and privacy. By exploring the protocol’s security and privacy implications, this paper informs decision-making processes and enhances online safety for users and professionals. Our research identifies key risks, vulnerabilities, threats, and attacks targeting QUIC, providing actionable insights to strengthen the protocol. Through this comprehensive analysis, we contribute to the development and deployment of a faster, more secure next-generation Internet infrastructure. We hope this investigation serves as a foundation for future Internet security and privacy innovations, ensuring robust protection for modern digital communications.
{"title":"Exploring QUIC Security and Privacy: A Comprehensive Survey on QUIC Security and Privacy Vulnerabilities, Threats, Attacks, and Future Research Directions","authors":"Y A Joarder;Carol Fung","doi":"10.1109/TNSM.2024.3457858","DOIUrl":"10.1109/TNSM.2024.3457858","url":null,"abstract":"QUIC is a modern transport protocol aiming to improve Web connection performance and security. It is the transport layer for HTTP/3. QUIC offers numerous advantages over traditional transport layer protocols, such as TCP and UDP, including reduced latency, improved congestion control, connection migration and encryption by default. However, these benefits introduce new security and privacy challenges that need to be addressed, as cyber attackers can exploit weaknesses in the protocol. QUIC’s security and privacy issues have been largely unexplored, as existing research on QUIC primarily focuses on performance upgrades. This survey paper addresses the knowledge gap in QUIC’s security and privacy challenges while proposing directions for future research to enhance its security and privacy. Our comprehensive analysis covers QUIC’s history, architecture, core mechanisms (such as cryptographic design and handshaking process), security model, and threat landscape. We examine QUIC’s significant vulnerabilities, critical security and privacy attacks, emerging threats, advanced security and privacy challenges, and mitigation strategies. Furthermore, we outline future research directions to improve QUIC’s security and privacy. By exploring the protocol’s security and privacy implications, this paper informs decision-making processes and enhances online safety for users and professionals. Our research identifies key risks, vulnerabilities, threats, and attacks targeting QUIC, providing actionable insights to strengthen the protocol. Through this comprehensive analysis, we contribute to developing and deploying a faster, more secure next-generation Internet infrastructure. We hope this investigation serves as a foundation for future Internet security and privacy innovations, ensuring robust protection for modern digital communications.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6953-6973"},"PeriodicalIF":4.7,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-11. DOI: 10.1109/TNSM.2024.3458390
Mohammed Abdullah;Salah Eddine Elayoubi;Tijani Chahed
We propose a novel resource allocation framework for latency-critical traffic, namely ultra-reliable low latency communications (URLLC), in mobile networks that meets stringent latency and reliability requirements while minimizing the allocated resources. The quality of service (QoS) requirement is formulated in terms of the probability that the latency exceeds a maximum allowed budget. We develop a discrete-time queuing model for the system, both for the case where the URLLC reservation is fully flexible and for the case where the reservation is made on a slot basis while URLLC packets arrive in mini-slots. We then exploit this model to propose a control scheme that dynamically updates the amount of resources to be allocated per time slot so as to meet the QoS requirement. We formulate an optimization framework that derives the policy achieving the QoS target at minimal resource consumption and propose offline algorithms that converge to the quasi-optimal reservation policy. When the traffic is unknown, we propose online algorithms based on stochastic bandits to achieve this aim. Numerical experiments validate our model and confirm the efficiency of our algorithms in meeting the delay violation target at minimal cost.
{"title":"Efficient Queue Control Policies for Latency-Critical Traffic in Mobile Networks","authors":"Mohammed Abdullah;Salah Eddine Elayoubi;Tijani Chahed","doi":"10.1109/TNSM.2024.3458390","DOIUrl":"10.1109/TNSM.2024.3458390","url":null,"abstract":"We propose a novel resource allocation framework for latency-critical traffic, namely Ultra Reliable Low Latency Communications (URLLC), in mobile networks which meets stringent latency and reliability requirements while minimizing the allocated resources. The Quality of Service (QoS) requirement is formulated in terms of the probability that the latency exceeds a maximal allowed budget. We develop a discrete-time queuing model for the system, in the case where the URLLC reservation is fully-flexible, and when the reservation is made on a slot basis while URLLC packets arrive in mini-slots. We then exploit this model to propose a control scheme that dynamically updates the amount of resources to be allocated per time slot so as to meet the QoS requirement. We formulate an optimization framework that derives the policy which achieves the QoS target while minimizing resource consumption and propose offline algorithms that converge to the quasi optimal reservation policy. In the case when traffic is unknown, we propose online algorithms based on stochastic bandits to achieve this aim. Numerical experiments validate our model and confirm the efficiency of our algorithms in terms of meeting the delay violation target at minimal cost.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 5","pages":"5076-5090"},"PeriodicalIF":4.7,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Payment channel networks (PCNs) are a leading method to scale transaction throughput in cryptocurrencies. Two participants can use a bidirectional payment channel to make multiple mutual payments without committing them to the blockchain. Opening a payment channel is a slow operation that involves an on-chain transaction locking a certain amount of funds. These aspects limit the number of channels that can be opened or maintained. Users may instead route payments through a multi-hop path and thus avoid opening and maintaining a channel for each new destination. Unlike in regular networks, capacity in PCNs depends on usage patterns and, moreover, channels may become unidirectional. Since payments often fail due to channel depletion, a protection scheme to overcome such failures is of interest. We define the stopping time of a payment channel as the time at which the channel becomes depleted. We analyze the mean stopping time of a single channel as well as that of a network with a set of channels, and examine the stopping time of channels in particular topologies. We then propose a scheme for optimizing the capacity distribution among the channels in order to increase the minimal stopping time in the network. We conduct experiments and demonstrate the accuracy of our model and the efficiency of the proposed optimization scheme.
{"title":"Survivable Payment Channel Networks","authors":"Yekaterina Podiatchev;Ariel Orda;Ori Rottenstreich","doi":"10.1109/TNSM.2024.3456229","DOIUrl":"10.1109/TNSM.2024.3456229","url":null,"abstract":"Payment channel networks (PCNs) are a leading method to scale the transaction throughput in cryptocurrencies. Two participants can use a bidirectional payment channel for making multiple mutual payments without committing them to the blockchain. Opening a payment channel is a slow operation that involves an on-chain transaction locking a certain amount of funds. These aspects limit the number of channels that can be opened or maintained. Users may route payments through a multi-hop path and thus avoid opening and maintaining a channel for each new destination. Unlike regular networks, in PCNs capacity depends on the usage patterns and, moreover, channels may become unidirectional. Since payments often fail due to channel depletion, a protection scheme to overcome failures is of interest. We define the stopping time of a payment channel as the time at which the channel becomes depleted. We analyze the mean stopping time of a channel as well as that of a network with a set of channels and examine the stopping time of channels in particular topologies. We then propose a scheme for optimizing the capacity distribution among the channels in order to increase the minimal stopping time in the network. We conduct experiments and demonstrate the accuracy of our model and the efficiency of the proposed optimization scheme.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6218-6232"},"PeriodicalIF":4.7,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning-based network traffic classification (NTC) techniques, including conventional and class-of-service (CoS) classifiers, are popular tools that aid quality of service (QoS) and radio resource management in Internet of Things (IoT) networks. Holistic temporal features consist of inter-, intra-, and pseudo-temporal features within packets, between packets, and among flows, providing the maximum information on network services without depending on the classes defined in a problem. The conventional spatio-temporal features in current solutions extract only space and time information between packets and flows, ignoring the information within packets and flows for IoT traffic. Therefore, we propose a new, efficient, holistic feature extraction method for deep-learning-based NTC using time-distributed feature learning to maximize classification accuracy. We apply a time-distributed wrapper on deep-learning layers to help extract pseudo-temporal features together with spatio-temporal features. Pseudo-temporal features are mathematically complex to explain since they are extracted by a deep-learning black box; however, they are temporal because of the time-distributed wrapper, and we therefore call them pseudo-temporal features. Since our method is efficient in learning holistic temporal features, it extends to both conventional and CoS NTC. Our solution shows that pseudo-temporal and spatio-temporal features can significantly improve the robustness and performance of any NTC. We analyze the solution theoretically and experimentally on different real-world datasets. The experimental results show that the holistic temporal time-distributed feature learning method is, on average, 13.5% more accurate than state-of-the-art conventional and CoS classifiers.
{"title":"Time-Distributed Feature Learning for Internet of Things Network Traffic Classification","authors":"Yoga Suhas Kuruba Manjunath;Sihao Zhao;Xiao-Ping Zhang;Lian Zhao","doi":"10.1109/TNSM.2024.3457579","DOIUrl":"10.1109/TNSM.2024.3457579","url":null,"abstract":"Deep learning-based network traffic classification (NTC) techniques, including conventional and class-of-service (CoS) classifiers, are a popular tool that aids in the quality of service (QoS) and radio resource management for the Internet of Things (IoT) network. Holistic temporal features consist of inter-, intra-, and pseudo-temporal features within packets, between packets, and among flows, providing the maximum information on network services without depending on defined classes in a problem. Conventional spatio-temporal features in the current solutions extract only space and time information between packets and flows, ignoring the information within packets and flow for IoT traffic. Therefore, we propose a new, efficient, holistic feature extraction method for deep-learning-based NTC using time-distributed feature learning to maximize the accuracy of the NTC. We apply a time-distributed wrapper on deep-learning layers to help extract pseudo-temporal features and spatio-temporal features. Pseudo-temporal features are mathematically complex to explain since, in deep learning, a black box extracts them. However, the features are temporal because of the time-distributed wrapper; therefore, we call them pseudo-temporal features. Since our method is efficient in learning holistic-temporal features, we can extend our method to both conventional and CoS NTC. Our solution proves that pseudo-temporal and spatial-temporal features can significantly improve the robustness and performance of any NTC. We analyze the solution theoretically and experimentally on different real-world datasets. The experimental results show that the holistic-temporal time-distributed feature learning method, on average, is 13.5% more accurate than the state-of-the-art conventional and CoS classifiers.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6566-6581"},"PeriodicalIF":4.7,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}