
Latest Publications in IEEE Transactions on Network and Service Management

Flow Update Model Based on Probability Distribution of Migration Time in Software-Defined Networks
IF 4.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-24 DOI: 10.1109/TNSM.2024.3485753
Reo Uneyama;Takehiro Sato;Eiji Oki
In a software-defined network (SDN), the routes of packet flows need to be updated in situations such as maintenance and router replacement. Each flow is migrated from its old path to its new path. SDN updates are asynchronous by nature; the time at which each switch processes a command from the controller varies from flow to flow. It is therefore difficult to control the order of flow migrations, and packets can be lost due to congestion. Existing models divide the time axis into rounds and assign migrations to these rounds. However, congestion caused by multiple migrations within the same round is uncontrollable: because the time required for each migration follows a probability distribution, congestion can still occur. This paper proposes a flow update model that minimizes the expected amount of excessive traffic by shifting these probability distributions. The time axis is divided into time slots that are finer-grained than rounds, so that each probability distribution can be shifted. The proposed model assigns the time at which the controller injects each flow-migration command to a time slot, and is formulated as an optimization problem that determines the command times minimizing the expected amount of excessive traffic. This paper introduces two methods to compute the expected amount. It also introduces a two-stage scheduling scheme (2SS) that divides the optimization problem into two stages. 2SS reduces the computation time from $\mathcal{O}(|T|^{|F|-1})$ to $\mathcal{O}(|T|^{\frac{|F|-1}{2}})$ at the cost of at most 0.12% error, and reduces the amount of excessive traffic by up to 71.2% compared with an existing model.
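To make the objective concrete, here is a small illustrative sketch of the quantity the abstract optimizes: the expected excessive traffic on a link when migration windows overlap, as a function of the command-injection slots. The slot count, link capacity, per-flow rates, and duration distributions are invented for illustration, and the brute-force search merely stands in for the paper's exact formulation and 2SS decomposition.

```python
import itertools
import numpy as np

# Hypothetical setup: two flows migrate onto the same link. During its
# migration window a flow loads the link with `rate`; the window starts at
# the command slot and lasts a random number of slots drawn from `duration_pmf`.
SLOTS = 12
CAPACITY = 1.0
flows = [
    {"rate": 0.7, "duration_pmf": {2: 0.5, 3: 0.3, 4: 0.2}},
    {"rate": 0.6, "duration_pmf": {1: 0.4, 2: 0.4, 3: 0.2}},
]

def expected_excess(cmd_times):
    """E[sum_t max(0, load_t - capacity)], by enumerating joint durations."""
    pmfs = [f["duration_pmf"] for f in flows]
    total = 0.0
    for durations in itertools.product(*pmfs):          # iterate duration keys
        prob = np.prod([pmfs[i][d] for i, d in enumerate(durations)])
        load = np.zeros(SLOTS)
        for f, t0, d in zip(flows, cmd_times, durations):
            load[t0:min(t0 + d, SLOTS)] += f["rate"]
        total += prob * np.maximum(load - CAPACITY, 0.0).sum()
    return total

# Brute-force the command slots (the optimization the paper solves exactly).
best = min(itertools.product(range(SLOTS), repeat=len(flows)), key=expected_excess)
print("best command slots:", best, "expected excess:", round(expected_excess(best), 4))
```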
{"title":"Flow Update Model Based on Probability Distribution of Migration Time in Software-Defined Networks","authors":"Reo Uneyama;Takehiro Sato;Eiji Oki","doi":"10.1109/TNSM.2024.3485753","DOIUrl":"https://doi.org/10.1109/TNSM.2024.3485753","url":null,"abstract":"In a software-defined network (SDN), routes of packet flows need to be updated in situations such as maintenance and router replacement. Each flow is migrated from its old path to new path. The SDN update has an asynchronous nature; the time when the switches process commands by the controller varies depending on flows. Therefore, it is difficult to control an order of flow migrations, and packets can be lost by congestion. Existing models divide the time axis into rounds and assign migrations to these rounds. However, congestion caused by multiple migrations in the same round is uncontrollable. Based on the probability distribution of time required for each migration, congestion can occur. This paper proposes a flow update model which minimizes the expected amount of excessive traffic by shifting the probability distributions. The time axis is divided into time slots which are fine-grained than rounds, so that each probability distribution is shifted. The proposed model assigns the time when the controller injects a command of flow migration to time slots. The proposed model is formulated as an optimization problem to determine the command times to minimize the expected amount. This paper introduces two methods to compute the expected amount. This paper also introduces a two-stage scheduling scheme (2SS) that divides the optimization problem into two stages. 2SS suppresses the computation time from <inline-formula> <tex-math>$mathcal {O}(|T|^{|F|-1})$ </tex-math></inline-formula> to <inline-formula> <tex-math>$mathcal {O}left ({{|T|^{{}frac {|F|-1}{2}}}}right)$ </tex-math></inline-formula> at the cost of including at most 0.12% error. 2SS suppresses the amount of excessive traffic than an existing model by at most 71.2%.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"744-759"},"PeriodicalIF":4.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
QSCCP: A QoS-Aware Congestion Control Protocol for Information-Centric Networking
IF 4.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-24 DOI: 10.1109/TNSM.2024.3486052
He Bai;Hui Li;Jianming Que;Abla Smahi;Minglong Zhang;Peter Han Joo Chong;Shuo-Yen Robert Li;Xiyu Wang;Ping Lu
Information-Centric Networking (ICN) is a promising future network architecture that shifts the host-based network paradigm to a content-oriented one. Over the past decade, numerous ICN congestion control (CC) schemes have been proposed, tailored to address congestion issues based on ICN’s transmission characteristics. However, several key challenges still need to be addressed. One critical issue is that most existing CC studies for ICN do not consider the diverse Quality of Service (QoS) requirements of modern network applications. This limitation hinders their applicability across various applications with different network performance preferences. Another ongoing challenge lies in improving transmission performance, particularly considering how to appropriately coordinate congestion control participants to enhance content retrieval efficiency and ensure reasonable resource allocation, especially in multipath scenarios. To tackle these challenges, we propose QSCCP, a QoS-aware congestion control protocol built upon NDN (Named Data Networking), a well-known ICN architecture. In QSCCP, diverse QoS preferences of various traffic are supported within a collaborative congestion control framework. A novel multi-level, class-based scheduling and forwarding mechanism is designed to ensure varied and fine-grained QoS guarantees. A distributed congestion notification and precise feedback mechanism is also provided, which efficiently collaborates with an adaptive multipath forwarding strategy and consumer rate adjustment to rationally allocate network resources and improve transmission efficiency, particularly in multipath scenarios. Extensive experimental results demonstrate that QSCCP satisfies diverse QoS requirements while achieving outstanding transmission performance. It outperforms existing schemes in throughput, fairness, delay, and packet loss, with a rapid convergence rate and excellent stability.
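One of QSCCP's ingredients, multi-level class-based scheduling with fine-grained QoS differentiation, can be pictured with a small deficit-round-robin sketch over per-class queues. The class names, weights, quantum, and packet sizes below are assumptions made for illustration, not QSCCP's actual mechanism.

```python
from collections import deque

# Hypothetical QoS classes with scheduling weights (higher = more service).
queues = {"low_latency": deque(), "streaming": deque(), "bulk": deque()}
weights = {"low_latency": 4, "streaming": 2, "bulk": 1}
QUANTUM = 500  # bytes of credit per weight unit per round

for _ in range(6):                        # enqueue some dummy packets (sizes in bytes)
    queues["low_latency"].append(200)
    queues["streaming"].append(800)
    queues["bulk"].append(1500)

deficit = {c: 0 for c in queues}

def drr_round():
    """One deficit-round-robin pass: each class may send up to its credit."""
    sent = []
    for cls, q in queues.items():
        deficit[cls] += weights[cls] * QUANTUM
        while q and q[0] <= deficit[cls]:
            size = q.popleft()
            deficit[cls] -= size
            sent.append((cls, size))
        if not q:
            deficit[cls] = 0              # idle classes do not hoard credit
    return sent

for r in range(3):
    print(f"round {r}:", drr_round())
```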
{"title":"QSCCP: A QoS-Aware Congestion Control Protocol for Information-Centric Networking","authors":"He Bai;Hui Li;Jianming Que;Abla Smahi;Minglong Zhang;Peter Han Joo Chong;Shuo-Yen Robert Li;Xiyu Wang;Ping Lu","doi":"10.1109/TNSM.2024.3486052","DOIUrl":"https://doi.org/10.1109/TNSM.2024.3486052","url":null,"abstract":"Information-Centric Networking (ICN) is a promising future network architecture that shifts the host-based network paradigm to a content-oriented one. Over the past decade, numerous ICN congestion control (CC) schemes have been proposed, tailored to address congestion issues based on ICN’s transmission characteristics. However, several key challenges still need to be addressed. One critical issue is that most existing CC studies for ICN do not consider the diverse Quality of Service (QoS) requirements of modern network applications. This limitation hinders their applicability across various applications with different network performance preferences. Another ongoing challenge lies in improving transmission performance, particularly considering how to appropriately coordinate congestion control participants to enhance content retrieval efficiency and ensure reasonable resource allocation, especially in multipath scenarios. To tackle these challenges, we propose QSCCP, a QoS-aware congestion control protocol built upon NDN (Named Data Networking), a well-known ICN architecture. In QSCCP, diverse QoS preferences of various traffic are supported within a collaborative congestion control framework. A novel multi-level, class-based scheduling and forwarding mechanism is designed to ensure varied and fine-grained QoS guarantees. A distributed congestion notification and precise feedback mechanism is also provided, which efficiently collaborates with an adaptive multipath forwarding strategy and consumer rate adjustment to rationally allocate network resources and improve transmission efficiency, particularly in multipath scenarios. Extensive experimental results demonstrate that QSCCP satisfies diverse QoS requirements while achieving outstanding transmission performance. It outperforms existing schemes in throughput, fairness, delay, and packet loss, with a rapid convergence rate and excellent stability.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"532-556"},"PeriodicalIF":4.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
TFD-Net: Transformer Deviation Network for Weakly Supervised Anomaly Detection
IF 4.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-23 DOI: 10.1109/TNSM.2024.3485545
Hongping Gan;Hejie Zheng;Zhangfa Wu;Chunyan Ma;Jie Liu
Deep Learning (DL)-based weakly supervised anomaly detection methods enhance the security and performance of communication and networks by promptly identifying and addressing anomalies within imbalanced samples, thus ensuring reliable communication and smooth network operations. However, existing DL-based methods often overly emphasize the local feature representations of samples, thereby neglecting the long-range dependencies and the prior knowledge of the samples, which imposes potential limitations on anomaly detection with a limited number of abnormal samples. To mitigate these challenges, we propose a Transformer deviation network for weakly supervised anomaly detection, called TFD-Net, which can effectively leverage the interdependencies and data priors of samples, yielding enhanced anomaly detection performance. Specifically, we first use a Transformer-based feature extraction module that proficiently captures the dependencies of global features in the samples. Subsequently, TFD-Net employs an anomaly score generation module to obtain corresponding anomaly scores. Finally, we introduce an innovative loss function for TFD-Net, named Transformer Deviation Loss Function (TFD-Loss), which can adequately incorporate prior knowledge of samples into the network training process, addressing the issue of imbalanced samples, and thereby enhancing the detection efficiency. Experimental results on public benchmark datasets demonstrate that TFD-Net substantially outperforms other DL-based methods in weakly supervised anomaly detection task.
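The deviation-style idea behind TFD-Loss, pulling normal samples toward a reference distribution of scores while pushing labeled anomalies at least a margin above it, can be sketched compactly. The Gaussian reference prior, margin value, and toy scores below are illustrative assumptions rather than the exact TFD-Loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def deviation_loss(scores, labels, margin=5.0, n_ref=5000):
    """Deviation-network-style loss (a sketch, not the paper's TFD-Loss).

    Normal samples (label 0) are pulled toward a Gaussian reference of
    anomaly scores; labeled anomalies (label 1) are pushed at least
    `margin` standard deviations above it.
    """
    ref = rng.standard_normal(n_ref)             # prior over "normal" scores
    dev = (scores - ref.mean()) / ref.std()      # z-score style deviation
    loss = (1 - labels) * np.abs(dev) + labels * np.maximum(0.0, margin - dev)
    return loss.mean()

scores = np.array([0.1, -0.3, 0.2, 6.5, 7.2])    # toy anomaly scores from a model
labels = np.array([0, 0, 0, 1, 1])               # imbalanced: few labeled anomalies
print("loss:", round(deviation_loss(scores, labels), 3))
```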
{"title":"TFD-Net: Transformer Deviation Network for Weakly Supervised Anomaly Detection","authors":"Hongping Gan;Hejie Zheng;Zhangfa Wu;Chunyan Ma;Jie Liu","doi":"10.1109/TNSM.2024.3485545","DOIUrl":"https://doi.org/10.1109/TNSM.2024.3485545","url":null,"abstract":"Deep Learning (DL)-based weakly supervised anomaly detection methods enhance the security and performance of communication and networks by promptly identifying and addressing anomalies within imbalanced samples, thus ensuring reliable communication and smooth network operations. However, existing DL-based methods often overly emphasize the local feature representations of samples, thereby neglecting the long-range dependencies and the prior knowledge of the samples, which imposes potential limitations on anomaly detection with a limited number of abnormal samples. To mitigate these challenges, we propose a Transformer deviation network for weakly supervised anomaly detection, called TFD-Net, which can effectively leverage the interdependencies and data priors of samples, yielding enhanced anomaly detection performance. Specifically, we first use a Transformer-based feature extraction module that proficiently captures the dependencies of global features in the samples. Subsequently, TFD-Net employs an anomaly score generation module to obtain corresponding anomaly scores. Finally, we introduce an innovative loss function for TFD-Net, named Transformer Deviation Loss Function (TFD-Loss), which can adequately incorporate prior knowledge of samples into the network training process, addressing the issue of imbalanced samples, and thereby enhancing the detection efficiency. Experimental results on public benchmark datasets demonstrate that TFD-Net substantially outperforms other DL-based methods in weakly supervised anomaly detection task.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"941-954"},"PeriodicalIF":4.7,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
O-DQR: A Multi-Agent Deep Reinforcement Learning for Multihop Routing in Overlay Networks
IF 4.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-23 DOI: 10.1109/TNSM.2024.3485196
Redha A. Alliche;Ramón Aparicio Pardo;Lucile Sassatelli
This paper addresses the problem of dynamic packet routing in overlay networks using fully decentralized Multi-Agent Deep Reinforcement Learning (MA-DRL). Overlay networks are built as a virtual topology on top of an Internet Service Provider (ISP) underlay network whose nodes run a fixed, single-path routing policy decided by the ISP. In such a scenario, the underlay topology and the traffic are unknown to the overlay network. In this setting, we propose O-DQR, an MA-DRL framework working under Distributed Training Decentralized Execution (DTDE), where the agents are allowed to communicate only with their immediate overlay neighbors during both training and inference. We address three fundamental aspects of deploying such a solution: (i) performance (delay, loss rate), where the framework can achieve near-optimal performance; (ii) control overhead, which is reduced by letting the agents send control packets only when needed; and (iii) training convergence stability, which is improved by a guided reward mechanism that dynamically learns the penalty applied when a packet is lost. Finally, we evaluate our solution through extensive experimentation in a realistic network simulation, in both offline training and continual learning settings.
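The fully decentralized flavor, each node improving its next-hop estimates using only information reported by its immediate overlay neighbors, can be caricatured with a tabular Q-routing sketch. The topology, delays, learning rate, and tabular agents are all assumptions for illustration; O-DQR itself uses deep RL agents under DTDE, not Q-tables.

```python
import random

# Hypothetical overlay: node -> neighbors, with per-hop delay.
neighbors = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
delay = {("A", "B"): 1, ("B", "A"): 1, ("A", "C"): 4, ("C", "A"): 4,
         ("B", "D"): 5, ("D", "B"): 5, ("C", "D"): 1, ("D", "C"): 1}
DEST, ALPHA, EPS = "D", 0.5, 0.2

# Q[node][next_hop] = estimated delay to DEST when forwarding via next_hop.
Q = {n: {m: 10.0 for m in neighbors[n]} for n in neighbors}

def route_packet(src):
    node, total = src, 0
    while node != DEST:
        nxt = (random.choice(neighbors[node]) if random.random() < EPS
               else min(Q[node], key=Q[node].get))
        # The neighbor reports its own best estimate -- the only info exchanged.
        remote = 0.0 if nxt == DEST else min(Q[nxt].values())
        target = delay[(node, nxt)] + remote
        Q[node][nxt] += ALPHA * (target - Q[node][nxt])
        total += delay[(node, nxt)]
        node = nxt
    return total

random.seed(1)
for episode in range(200):
    route_packet("A")
print("learned next-hop values at A:", {k: round(v, 2) for k, v in Q["A"].items()})
```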
{"title":"O-DQR: A Multi-Agent Deep Reinforcement Learning for Multihop Routing in Overlay Networks","authors":"Redha A. Alliche;Ramón Aparicio Pardo;Lucile Sassatelli","doi":"10.1109/TNSM.2024.3485196","DOIUrl":"https://doi.org/10.1109/TNSM.2024.3485196","url":null,"abstract":"This paper addresses the problem of dynamic packet routing in overlay networks using a fully decentralized Multi-Agent Deep Reinforcement Learning (MA-DRL). Overlay networks are built by having a virtual topology on top of an Internet Service Provider (ISP) underlay network, where those nodes are running a fixed, single path routing policy decided by the ISP. In such a scenario, the underlay topology and the traffic are unknown by the overlay network. In this setting, we propose O-DQR, which is an MA-DRL framework working under Distributed Training Decentralized Execution (DTDE), where the agents are allowed to communicate only with their immediate overlay neighbors during both training and inference. We address three fundamental aspects for deploying such a solution: (i) performance (delay, loss rate), where the framework can achieve near-optimal performance, (ii) control overhead, which is reduced by enabling the agents to send control packets only when needed dynamically; and (iii) training convergence stability, which is improved by proposing a guided reward mechanism for dynamically learning the penalty applied when a packet is lost. Finally, we evaluate our solution through extensive experimentation in a realistic network simulation in both offline training and continual learning settings.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"439-455"},"PeriodicalIF":4.7,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Learning-Based Two-Tiered Online Optimization of Region-Wide Datacenter Resource Allocation
IF 4.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-21 DOI: 10.1109/TNSM.2024.3484213
Chang-Lin Chen;Hanhan Zhou;Jiayu Chen;Mohammad Pedramfar;Tian Lan;Zheqing Zhu;Chi Zhou;Pol Mauri Ruiz;Neeraj Kumar;Hongbo Dong;Vaneet Aggarwal
Online optimization of resource management for large-scale data centers and infrastructures to meet dynamic capacity reservation demands and various practical constraints (e.g., feasibility and robustness) is a very challenging problem. Mixed Integer Programming (MIP) approaches suffer from recognized limitations in such a dynamic environment, while learning-based approaches may face prohibitively large state/action spaces. To this end, this paper presents a novel two-tiered online optimization to enable a learning-based Resource Allowance System (RAS). To solve the optimal server-to-reservation assignment in RAS in an online fashion, the proposed solution leverages a reinforcement learning (RL) agent to make high-level decisions, e.g., how much resource to select from the Main Switch Boards (MSBs), and then a low-level Mixed Integer Linear Programming (MILP) solver to generate the local server-to-reservation mapping, conditioned on the RL decisions. We take into account fault tolerance, server movement minimization, and network affinity requirements, and apply the proposed solution to large-scale RAS problems. To provide interpretability, we further train a decision tree model to explain the learned policies and to prune unreasonable corner cases at the low-level MILP solver, resulting in further performance improvement. Extensive evaluations show that our two-tiered solution outperforms baselines such as a pure MIP solver by over 15% while delivering a $100\times$ speedup in computation.
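The two-tier structure, a high-level learner choosing how much capacity to draw from each MSB and a low-level solver producing the server-to-reservation mapping, can be caricatured as below. The random-search "high level" and the greedy stand-in for the MILP are deliberate simplifications with invented capacities and demand; the paper uses a trained RL agent and a real MILP solver.

```python
import random

random.seed(0)
MSBS = {"msb0": 40, "msb1": 25, "msb2": 35}    # hypothetical free servers per MSB
DEMAND = 30                                    # servers requested by one reservation

def low_level_assign(quota):
    """Stand-in for the MILP: greedily place DEMAND servers within the quota."""
    placed, plan = 0, {}
    for msb, q in quota.items():
        take = min(q, MSBS[msb], DEMAND - placed)
        plan[msb], placed = take, placed + take
        if placed == DEMAND:
            break
    return plan, placed

def reward(plan, placed):
    if placed < DEMAND:
        return -10.0                           # infeasible assignment
    loads = [plan.get(m, 0) / MSBS[m] for m in MSBS]
    return -max(loads)                         # prefer balanced MSB utilisation

# High level: crude random search over per-MSB quotas (the RL agent's job).
best_q, best_r = None, float("-inf")
for step in range(300):
    quota = {m: random.choice([0, 10, 20, 30]) for m in MSBS}
    r = reward(*low_level_assign(quota))
    if r > best_r:
        best_q, best_r = quota, r
print("best quota:", best_q, "reward:", round(best_r, 3))
```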
{"title":"Learning-Based Two-Tiered Online Optimization of Region-Wide Datacenter Resource Allocation","authors":"Chang-Lin Chen;Hanhan Zhou;Jiayu Chen;Mohammad Pedramfar;Tian Lan;Zheqing Zhu;Chi Zhou;Pol Mauri Ruiz;Neeraj Kumar;Hongbo Dong;Vaneet Aggarwal","doi":"10.1109/TNSM.2024.3484213","DOIUrl":"https://doi.org/10.1109/TNSM.2024.3484213","url":null,"abstract":"Online optimization of resource management for large-scale data centers and infrastructures to meet dynamic capacity reservation demands and various practical constraints (e.g., feasibility and robustness) is a very challenging problem. Mixed Integer Programming (MIP) approaches suffer from recognized limitations in such a dynamic environment, while learning-based approaches may face with prohibitively large state/action spaces. To this end, this paper presents a novel two-tiered online optimization to enable a learning-based Resource Allowance System (RAS). To solve optimal server-to-reservation assignment in RAS in an online fashion, the proposed solution leverages a reinforcement learning (RL) agent to make high-level decisions, e.g., how much resource to select from the Main Switch Boards (MSBs), and then a low-level Mixed Integer Linear Programming (MILP) solver to generate the local server-to-reservation mapping, conditioned on the RL decisions. We take into account fault tolerance, server movement minimization, and network affinity requirements and apply the proposed solution to large-scale RAS problems. To provide interpretability, we further train a decision tree model to explain the learned policies and to prune unreasonable corner cases at the low-level MILP solver, resulting in further performance improvement. Extensive evaluations show that our two-tiered solution outperforms baselines such as pure MIP solver by over 15% while delivering <inline-formula> <tex-math>$100times $ </tex-math></inline-formula> speedup in computation.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"572-581"},"PeriodicalIF":4.7,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
TP-MDU: A Two-Phase Microservice Deployment Based on Minimal Deployment Unit in Edge Computing Environment
IF 4.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-18 DOI: 10.1109/TNSM.2024.3483634
Bing Tang;Zhikang Wu;Wei Xu;Buqing Cao;Mingdong Tang;Qing Yang
In the mobile edge computing (MEC) environment, effective microservice deployment significantly reduces vendor costs and minimizes application latency. However, the existing literature overlooks the impact of dynamic characteristics such as the frequency of user requests and geographical location, and lacks in-depth consideration of the types of microservices and their interaction frequencies. To address these issues, we propose TP-MDU, a novel two-phase deployment framework for microservices. This framework is designed to learn users' dynamic behaviors and introduces, for the first time, a minimal deployment unit. Initially, TP-MDU generates minimal deployment units online, tailored to the types of microservices and their interaction frequencies. In the initial deployment phase, aiming for load balancing, it employs a simulated annealing algorithm to obtain a superior deployment plan. During the optimization scheduling phase, it utilizes reinforcement learning algorithms and introduces dynamic information and new optimization objectives. Previous deployment plans serve as the initial state for policy learning, thus facilitating more optimal deployment decisions. This paper evaluates the performance of TP-MDU using a real dataset from Australia's EUA and related synthetic data. The experimental results indicate that TP-MDU outperforms other representative algorithms.
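The initial deployment phase, simulated annealing toward a load-balanced placement, can be sketched as follows. The node count, microservice loads, and cooling schedule are invented, and the sketch omits TP-MDU's minimal-deployment-unit construction.

```python
import math
import random

random.seed(42)
NODES = 4                                   # hypothetical edge nodes
LOADS = [5, 3, 8, 2, 7, 4, 6, 1]            # load of each microservice to place

def imbalance(placement):
    """Std-dev of per-node load: the load-balancing objective to minimise."""
    per_node = [0.0] * NODES
    for svc, node in enumerate(placement):
        per_node[node] += LOADS[svc]
    mean = sum(per_node) / NODES
    return math.sqrt(sum((x - mean) ** 2 for x in per_node) / NODES)

placement = [random.randrange(NODES) for _ in LOADS]
cost, temp = imbalance(placement), 10.0
while temp > 1e-3:
    svc = random.randrange(len(LOADS))
    cand = placement.copy()
    cand[svc] = random.randrange(NODES)     # move one microservice to another node
    delta = imbalance(cand) - cost
    if delta < 0 or random.random() < math.exp(-delta / temp):
        placement, cost = cand, cost + delta
    temp *= 0.995                           # geometric cooling

print("placement:", placement, "imbalance:", round(cost, 3))
```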
{"title":"TP-MDU: A Two-Phase Microservice Deployment Based on Minimal Deployment Unit in Edge Computing Environment","authors":"Bing Tang;Zhikang Wu;Wei Xu;Buqing Cao;Mingdong Tang;Qing Yang","doi":"10.1109/TNSM.2024.3483634","DOIUrl":"https://doi.org/10.1109/TNSM.2024.3483634","url":null,"abstract":"In mobile edge computing (MEC) environment, effective microservices deployment significantly reduces vendor costs and minimizes application latency. However, existing literatures overlook the impact of dynamic characteristics such as the frequency of user requests and geographical location, and lack in-depth consideration of the types of microservices and their interaction frequencies. To address these issues, we propose TP-MDU, a novel two-stage deployment framework for microservices. This framework is designed to learn users’ dynamic behaviors and introduces, for the first time, a minimal deployment unit. Initially, TP-MDU generates minimal deployment units online, tailored to the types of microservices and their interaction frequencies. In the initial deployment phase, aiming for load balancing, it employs a simulated annealing algorithm to achieve a superior deployment plan. During the optimization scheduling phase, it utilizes reinforcement learning algorithms and introduces dynamic information and new optimization objectives. Previous deployment plans serve as the initial state for policy learning, thus facilitating more optimal deployment decisions. This paper evaluates the performance of TP-MDU using a real dataset from Australia’s EUA and some related synthetic data. The experimental results indicate that TP-MDU outperforms other representative algorithms in performance.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"718-731"},"PeriodicalIF":4.7,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143620027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
An Intelligent Scheme for Energy-Efficient Uplink Resource Allocation With QoS Constraints in 6G Networks
IF 4.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-17 DOI: 10.1109/TNSM.2024.3482549
Yujie Zhao;Tao Peng;Yichen Guo;Yijing Niu;Wenbo Wang
In sixth-generation (6G) networks, the dense deployment of femtocells will result in significant co-channel interference. However, current studies encounter difficulties in obtaining precise interference information, which poses a challenge in improving the performance of the resource allocation (RA) strategy. This paper proposes an intelligent scheme aimed at achieving energy-efficient RA in uplink scenarios with unknown interference. Firstly, a novel interference-inference-based RA (IIBRA) framework is proposed to support this scheme. In the framework, the interference relationship between users is precisely modeled by processing the historical operation data of the network. Based on the modeled interference relationship, accurate performance feedback to the RA algorithm is provided. Secondly, a joint double deep Q-network and optimization RA (DORA) algorithm is developed, which decomposes the joint allocation problem into two parts: resource block assignment and power allocation. The two parts continuously interact throughout the allocation process, leading to improved solutions. Thirdly, a new metric called effective energy efficiency (EEE) is provided, which is defined as the product of energy efficiency and average user satisfaction with quality of service (QoS). EEE is used to help train the neural networks, resulting in a superior level of user QoS satisfaction. Numerical results demonstrate that the DORA algorithm achieves a clear enhancement in interference efficiency, surpassing well-known existing algorithms with a maximum improvement of over 50%. Additionally, it achieves a maximum EEE improvement exceeding 25%.
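The EEE metric, energy efficiency weighted by average user QoS satisfaction, is straightforward to compute once satisfaction is defined. The sketch below shows one plausible reading of it, with rates, powers, circuit power, and per-user QoS targets that are purely illustrative assumptions.

```python
# Hypothetical uplink users: achieved rate (Mb/s), transmit power (W), QoS target rate.
users = [
    {"rate": 12.0, "power": 0.20, "qos_target": 10.0},
    {"rate": 4.0,  "power": 0.15, "qos_target": 8.0},
    {"rate": 20.0, "power": 0.25, "qos_target": 15.0},
]

def effective_energy_efficiency(users, circuit_power=0.5):
    """EEE = (sum rate / total power) * mean per-user QoS satisfaction.

    Satisfaction is capped at 1 so exceeding the target does not mask
    users whose QoS requirement is violated.
    """
    energy_eff = sum(u["rate"] for u in users) / (
        circuit_power + sum(u["power"] for u in users))
    satisfaction = sum(min(1.0, u["rate"] / u["qos_target"]) for u in users) / len(users)
    return energy_eff * satisfaction, energy_eff, satisfaction

eee, ee, sat = effective_energy_efficiency(users)
print(f"EE={ee:.2f} Mb/J  satisfaction={sat:.2f}  EEE={eee:.2f}")
```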
{"title":"An Intelligent Scheme for Energy-Efficient Uplink Resource Allocation With QoS Constraints in 6G Networks","authors":"Yujie Zhao;Tao Peng;Yichen Guo;Yijing Niu;Wenbo Wang","doi":"10.1109/TNSM.2024.3482549","DOIUrl":"https://doi.org/10.1109/TNSM.2024.3482549","url":null,"abstract":"In sixth-generation (6G) networks, the dense deployment of femtocells will result in significant co-channel interference. However, current studies encounter difficulties in obtaining precise interference information, which poses a challenge in improving the performance of the resource allocation (RA) strategy. This paper proposes an intelligent scheme aimed at achieving energy-efficient RA in uplink scenarios with unknown interference. Firstly, a novel interference-inference-based RA (IIBRA) framework is proposed to support this scheme. In the framework, the interference relationship between users is precisely modeled by processing the historical operation data of the network. Based on the modeled interference relationship, accurate performance feedback to the RA algorithm is provided. Secondly, a joint double deep Q-network and optimization RA (DORA) algorithm is developed, which decomposes the joint allocation problem into two parts: resource block assignment and power allocation. The two parts continuously interact throughout the allocation process, leading to improved solutions. Thirdly, a new metric called effective energy efficiency (EEE) is provided, which is defined as the product of energy efficiency and average user satisfaction with quality of service (QoS). EEE is used to help train the neural networks, resulting in a superior level of user QoS satisfaction. Numerical results demonstrate that the DORA algorithm achieves a clear enhancement in interference efficiency, surpassing well-known existing algorithms with a maximum improvement of over 50%. Additionally, it achieves a maximum EEE improvement exceeding 25%.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"255-269"},"PeriodicalIF":4.7,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
On Routing Optimization in Networks With Embedded Computational Services
IF 4.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-17 DOI: 10.1109/TNSM.2024.3483088
Lifan Mei;Jinrui Gou;Jingrui Yang;Yujin Cai;Yong Liu
Modern communication networks are increasingly equipped with in-network computational capabilities and services. Routing in such networks is significantly more complicated than the traditional routing. A legitimate route for a flow not only needs to have enough communication and computation resources, but also has to conform to various application-specific routing constraints. This paper presents a comprehensive study on routing optimization problems in networks with embedded computational services. We develop a set of routing optimization models and derive low-complexity heuristic routing algorithms for diverse computation scenarios. For dynamic demands, we also develop an online routing algorithm with performance guarantees. Through evaluations over emerging applications on real topologies, we demonstrate that our models can be flexibly customized to meet the diverse routing requirements of different computation applications. Our proposed heuristic algorithms significantly outperform baseline algorithms and can achieve close-to-optimal performance in various scenarios.
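A common way to encode "the route must traverse nodes offering a required chain of computational services" is Dijkstra on a layered graph whose state is (node, services already applied). The topology, link costs, service placement, and chain below are invented; this is a generic construction, not the paper's specific optimization models or heuristics.

```python
import heapq

# Hypothetical topology: undirected links with costs, and nodes hosting services.
links = {("s", "a"): 2, ("s", "b"): 5, ("a", "b"): 1,
         ("a", "c"): 4, ("b", "c"): 1, ("c", "t"): 2}
graph = {}
for (u, v), w in links.items():
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))
services = {"a": "firewall", "c": "transcode"}       # node -> service it can run
chain = ["firewall", "transcode"]                     # required service order

def service_route(src, dst):
    """Dijkstra over states (node, number of services completed so far)."""
    pq, seen = [(0, src, 0, [src])], set()
    while pq:
        cost, node, done, path = heapq.heappop(pq)
        if (node, done) in seen:
            continue
        seen.add((node, done))
        if node == dst and done == len(chain):
            return cost, path
        # Option 1: consume the next required service at this node.
        if done < len(chain) and services.get(node) == chain[done]:
            heapq.heappush(pq, (cost, node, done + 1, path))
        # Option 2: move to a neighbour.
        for nxt, w in graph.get(node, []):
            heapq.heappush(pq, (cost + w, nxt, done, path + [nxt]))
    return None

print(service_route("s", "t"))
```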
{"title":"On Routing Optimization in Networks With Embedded Computational Services","authors":"Lifan Mei;Jinrui Gou;Jingrui Yang;Yujin Cai;Yong Liu","doi":"10.1109/TNSM.2024.3483088","DOIUrl":"https://doi.org/10.1109/TNSM.2024.3483088","url":null,"abstract":"Modern communication networks are increasingly equipped with in-network computational capabilities and services. Routing in such networks is significantly more complicated than the traditional routing. A legitimate route for a flow not only needs to have enough communication and computation resources, but also has to conform to various application-specific routing constraints. This paper presents a comprehensive study on routing optimization problems in networks with embedded computational services. We develop a set of routing optimization models and derive low-complexity heuristic routing algorithms for diverse computation scenarios. For dynamic demands, we also develop an online routing algorithm with performance guarantees. Through evaluations over emerging applications on real topologies, we demonstrate that our models can be flexibly customized to meet the diverse routing requirements of different computation applications. Our proposed heuristic algorithms significantly outperform baseline algorithms and can achieve close-to-optimal performance in various scenarios.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"456-469"},"PeriodicalIF":4.7,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Dynamic Policy Decision/Enforcement Security Zoning Through Stochastic Games and Meta Learning
IF 4.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-16 DOI: 10.1109/TNSM.2024.3481662
Yahuza Bello;Ahmed Refaey Hussein
Securing Next Generation Networks (NGNs) remains a prominent topic of discussion in academia and industries alike, driven by the rapid evolution of cyber attacks. As these attacks become increasingly complex and dynamic, it is crucial to develop sophisticated security strategies with automated dynamic policy enforcement. In this paper, we propose a security strategy based on the zero-trust model, incorporating dynamic policy decisions through the utilization of stochastic games and Reinforcement Learning (RL). Our approach involves the development of an attack and defense strategy evolution model, specifically tailored to combat cyber attacks in NGNs. To achieve this, we employ RL techniques to update and adapt dynamic policies. To train the agents, we utilize the Generalized Proximal Policy Optimization with sample reuse (GePPO) algorithm, including its modified version, GePPO-ML, which incorporates meta-learning to initialize the agent’s policy and parameters. Additionally, we employ the Sample Dropout PPO with meta-learning (SDPPO-ML), a modified version of the SD-PPO algorithm, to train the agents. To evaluate the performance of these algorithms, we conduct a comparative analysis against the REINFORCE and PPO algorithms. The results illustrate the superior performance of both GePPO-ML and SDPPO-ML when compared to these baseline algorithms, with GePPO-ML exhibiting the best performance.
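The PPO-family building block that GePPO and SD-PPO start from, the clipped surrogate objective, is compact enough to show directly. The ratios and advantages below are toy numbers, and the sample-reuse, sample-dropout, and meta-learning initialization described in the abstract are not reproduced here.

```python
import numpy as np

def clipped_surrogate(ratio, advantage, eps=0.2):
    """PPO clipped objective: L = E[min(r*A, clip(r, 1-eps, 1+eps)*A)].

    Clipping removes the incentive to move the new policy far from the
    behaviour policy in a single update, which is the starting point that
    GePPO/SD-PPO then relax to allow safe reuse of older samples.
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()

# Toy batch: ratios pi_new(a|s)/pi_old(a|s) and advantage estimates.
ratio = np.array([0.9, 1.1, 1.6, 0.4])
advantage = np.array([1.0, -0.5, 2.0, -1.0])
print("surrogate objective:", round(clipped_surrogate(ratio, advantage), 4))
```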
{"title":"Dynamic Policy Decision/Enforcement Security Zoning Through Stochastic Games and Meta Learning","authors":"Yahuza Bello;Ahmed Refaey Hussein","doi":"10.1109/TNSM.2024.3481662","DOIUrl":"https://doi.org/10.1109/TNSM.2024.3481662","url":null,"abstract":"Securing Next Generation Networks (NGNs) remains a prominent topic of discussion in academia and industries alike, driven by the rapid evolution of cyber attacks. As these attacks become increasingly complex and dynamic, it is crucial to develop sophisticated security strategies with automated dynamic policy enforcement. In this paper, we propose a security strategy based on the zero-trust model, incorporating dynamic policy decisions through the utilization of stochastic games and Reinforcement Learning (RL). Our approach involves the development of an attack and defense strategy evolution model, specifically tailored to combat cyber attacks in NGNs. To achieve this, we employ RL techniques to update and adapt dynamic policies. To train the agents, we utilize the Generalized Proximal Policy Optimization with sample reuse (GePPO) algorithm, including its modified version, GePPO-ML, which incorporates meta-learning to initialize the agent’s policy and parameters. Additionally, we employ the Sample Dropout PPO with meta-learning (SDPPO-ML), a modified version of the SD-PPO algorithm, to train the agents. To evaluate the performance of these algorithms, we conduct a comparative analysis against the REINFORCE and PPO algorithms. The results illustrate the superior performance of both GePPO-ML and SDPPO-ML when compared to these baseline algorithms, with GePPO-ML exhibiting the best performance.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"807-821"},"PeriodicalIF":4.7,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Modeling and Analysis of mMTC Traffic in 5G Core Networks
IF 4.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-15 DOI: 10.1109/TNSM.2024.3481240
Endri Goshi;Fidan Mehmeti;Thomas F. La Porta;Wolfgang Kellerer
Massive Machine-Type Communications (mMTC) is one of the three main use cases powered by 5G and beyond networks. It is distinguished by the need to serve a large number of devices characterized by non-intensive traffic and low energy consumption. While the sporadic nature of mMTC traffic does not by itself strain the efficient operation of the network, multiplexing the traffic from a large number of these devices within a cell certainly does. This traffic is then transported from the Base Station (BS) further towards the Core Network (CN), where it is combined with the traffic from other BSs. Therefore, carefully planning the network resources for this type of traffic, both in the Radio Access Network (RAN) and in the CN, is of paramount importance. To do this, the statistics of the traffic pattern arriving at the BS and the CN must be known. To this end, in this paper we first derive the distribution of the inter-arrival times of the traffic at the BS for a general number of mMTC users within the cell, assuming a generic distribution of the traffic pattern of individual users. Using this result, we then derive the distribution of the traffic pattern at the CN. Further, we validate our results on traces of channel conditions and through measurements in our testbed. The results show that adding more mMTC users in the cell and more BSs in the network does not, in the long term, increase the variability of the traffic pattern at the BS or the CN. Furthermore, the arrival process at all points of interest in the network is shown to be Poisson for both homogeneous and heterogeneous traffic. However, the empirical observations show that a huge number of packets is needed for this process to converge, and this number increases with the number of users and/or BSs.
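The qualitative claim, that aggregating many sporadic per-device processes approaches a Poisson process only after many packets, is easy to probe numerically. The sketch below superposes i.i.d. renewal processes with uniform inter-arrival times (an arbitrary non-exponential assumption) and reports the coefficient of variation of the aggregate inter-arrival times, which tends to 1 for an exponential distribution; it illustrates the effect, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(7)

def aggregate_cv(n_devices, packets_per_device=200):
    """Coefficient of variation of aggregate inter-arrival times at the BS.

    Each device is a renewal process with Uniform(0, 2) inter-arrival times
    (mean 1, clearly non-exponential). For an exponential (Poisson) aggregate
    the CV tends to 1.
    """
    arrivals = []
    for _ in range(n_devices):
        gaps = rng.uniform(0.0, 2.0, size=packets_per_device)
        arrivals.append(np.cumsum(gaps) + rng.uniform(0.0, 2.0))  # random phase
    merged = np.sort(np.concatenate(arrivals))
    inter = np.diff(merged)
    return inter.std() / inter.mean()

for n in (1, 5, 50, 500):
    print(f"{n:4d} devices -> CV of aggregate inter-arrivals: {aggregate_cv(n):.3f}")
```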
{"title":"Modeling and Analysis of mMTC Traffic in 5G Core Networks","authors":"Endri Goshi;Fidan Mehmeti;Thomas F. La Porta;Wolfgang Kellerer","doi":"10.1109/TNSM.2024.3481240","DOIUrl":"https://doi.org/10.1109/TNSM.2024.3481240","url":null,"abstract":"Massive Machine-Type Communications (mMTC) are one of the three main use cases powered by 5G and beyond networks. These are distinguished by the need to serve a large number of devices which are characterized by non-intensive traffic and low energy consumption. While the sporadic nature of the mMTC traffic does not pose an exertion on the efficient operation of the network, multiplexing the traffic from a large number of these devices within the cell certainly does. This traffic from the Base Station (BS) is then transported further towards the Core Network (CN), where it is combined with the traffic from other BSs. Therefore, planning carefully the network resources, both on the Radio Access Network (RAN) and the CN, for this type of traffic is of paramount importance. To do this, the statistics of the traffic pattern that arrives at the BS and the CN should be known. To this end, in this paper, we derive first the distribution of the inter-arrival times of the traffic at the BS from a general number of mMTC users within the cell, assuming a generic distribution of the traffic pattern by individual users. Then, using the previous result we derive the distribution of the traffic pattern at the CN. Further, we validate our results on traces for channel conditions and by performing measurements in our testbed. Results show that adding more mMTC users in the cell and more BSs in the network in the long term does not increase the variability of the traffic pattern at the BS and at the CN. Furthermore, this arrival process at all points of our interest in the network is shown to be Poisson both for homogeneous and heterogeneous traffic. However, the empirical observations show that a huge number of packets is needed for this process to converge, and this number of packets increases with the number of users and/or BSs.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"409-425"},"PeriodicalIF":4.7,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10716797","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0