
Latest publications: IEEE Transactions on Network and Service Management

Enhancing Throughput for TTEthernet via Co-Optimizing Routing and Scheduling: An Online Time-Varying Graph-Based Method
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-06-04 | DOI: 10.1109/TNSM.2025.3576578
Yaoxu He;Hongyan Li;Peng Wang
Time-Triggered Ethernet (TTEthernet) has been widely applied in scenarios such as the industrial Internet, automotive electronics, and aerospace, and offline routing and scheduling for TTEthernet has been extensively investigated. However, predetermined routes and schedules cannot meet the demands of agile scenarios such as smart factories, autonomous driving, and satellite network switching, where transmission requests join and leave the network frequently. We therefore study the online joint routing and scheduling problem for TTEthernet. Balancing efficiency and effectiveness of routing and scheduling in an online environment is challenging. To ensure high-quality and fast routing and scheduling, we first design a time-slot expanded graph (TSEG) to model the available resources of TTEthernet over time. The fine-grained representation of the TSEG lets us select a time slot by selecting an edge, thus transforming the scheduling problem into a simple routing problem. Next, we design a dynamic weighting method for each edge in the TSEG and further propose an algorithm to co-optimize routing and scheduling. Compared with existing methods, our scheme enhances TTEthernet throughput by co-optimizing routing and scheduling to eliminate potential conflicts among flow requests. Extensive simulation results show that our scheme runs more than 400 times faster than a standard solution (an ILP solver), while the number of scheduled flow requests is within 2% of the optimum. Moreover, compared with existing schemes, our method improves the number of successfully scheduled flows by more than 18%.
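The core idea — turning joint routing and scheduling into a plain graph search over a time-expanded graph — can be illustrated with a minimal sketch. This is not the authors' TSEG construction or their dynamic edge weighting; the names (`build_tseg`, `route_and_schedule`) and the unit-capacity slot model are assumptions for illustration only:

```python
from collections import deque

def build_tseg(nodes, links, num_slots, occupied):
    """Time-slot expanded graph: each vertex is a (node, slot) pair.

    An edge (u, t) -> (v, t+1) exists when directed link (u, v) is free in
    slot t, so choosing an edge simultaneously chooses a hop and a slot.
    """
    graph = {}
    for t in range(num_slots - 1):
        for u, v in links:
            if (u, v, t) in occupied:
                continue  # slot already reserved by an admitted flow
            graph.setdefault((u, t), []).append((v, t + 1))
        for u in nodes:  # waiting in a switch buffer: stay at u for one slot
            graph.setdefault((u, t), []).append((u, t + 1))
    return graph

def route_and_schedule(graph, src, dst):
    """BFS over the TSEG: the path found is both a route and a schedule."""
    start = (src, 0)
    parent = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur[0] == dst:  # reached destination in some slot
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return list(reversed(path))
        for nxt in graph.get(cur, []):
            if nxt not in parent:
                parent[nxt] = cur
                q.append(nxt)
    return None  # request rejected: no conflict-free slot assignment
```

A path in this graph fixes both the hops and the transmission slot of each hop, which is why a conflict-free schedule falls out of an ordinary graph search.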
IEEE Transactions on Network and Service Management, vol. 22, no. 5, pp. 4933–4949.
Cited by: 0
XGS-PON-Standard Compliant DBA Algorithm for Option 7.x Functional Split-Based 5G C-RAN
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-06-02 | DOI: 10.1109/TNSM.2025.3575938
Md Shahbaz Akhtar;Mohit Kumar;Md Iftekhar Alam;Aneek Adhya
A 10-Gigabit-Capable Symmetrical Passive Optical Network (XGS-PON) is considered a cost-efficient fronthaul network solution for the Fifth Generation (5G) Centralized Radio Access Network (C-RAN). However, meeting the stringent latency requirements of the C-RAN fronthaul with XGS-PON is challenging, as its upstream capacity is shared in the time domain and a Dynamic Bandwidth Allocation (DBA) mechanism is employed to manage upstream traffic. The major issue with conventional DBA algorithms is that data arriving in the Optical Network Unit (ONU) buffer must wait at least one DBA cycle before being scheduled, leading to poor delay performance. To address this, we propose a novel DBA algorithm named Traffic Prediction-based Enhanced Residual Bandwidth Utilization (TP-ERBU), which integrates a traffic prediction mechanism with enhanced residual bandwidth utilization to optimize delay performance in Option 7.x functional split-based C-RAN fronthaul over XGS-PON. The algorithm predicts future traffic to reduce delays in ONUs and reallocates residual bandwidth from lightly loaded ONUs to heavily loaded ones. Additionally, we develop an XGS-PON-based C-RAN simulation module named xCRAN-SimModule, using the OMNeT++ network simulator. Simulation results demonstrate that, compared to existing algorithms, TP-ERBU reduces packet delay by 20.59%, packet loss by 25.00%, and jitter by 5.71%, while improving upstream channel utilization by 38.33% and throughput by 15.56%.
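To make the prediction-plus-residual-reallocation idea concrete, here is a toy single-cycle DBA grant computation. It sketches only the general mechanism the abstract describes — the moving-average predictor and the deficit-ordered reallocation are assumptions, not the TP-ERBU algorithm itself:

```python
def dba_grants(requests, history, capacity):
    """One DBA cycle: predict, grant up to a fair share, reallocate residual.

    requests: bytes reported by each ONU this cycle.
    history:  recent per-ONU loads, used by a naive moving-average predictor
              to account for traffic arriving mid-cycle.
    capacity: total upstream bytes available in the cycle.
    """
    n = len(requests)
    fair = capacity // n
    predicted = [sum(h) // len(h) for h in history]      # naive predictor
    demand = [r + p for r, p in zip(requests, predicted)]
    grants = [min(d, fair) for d in demand]              # capped first pass
    residual = capacity - sum(grants)
    # Hand leftover bandwidth to the still-backlogged (heavily loaded) ONUs,
    # largest deficit first.
    for i in sorted(range(n), key=lambda i: demand[i] - grants[i], reverse=True):
        extra = min(residual, demand[i] - grants[i])
        grants[i] += extra
        residual -= extra
    return grants
```

Here a lightly loaded ONU's unused share flows to a heavily loaded one within the same cycle, which is the effect the abstract attributes to enhanced residual bandwidth utilization.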
IEEE Transactions on Network and Service Management, vol. 22, no. 5, pp. 5048–5061.
Cited by: 0
Joint DNN Partitioning and Task Offloading Based on Attention Mechanism-Aided Reinforcement Learning
IF 4.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-04-17 | DOI: 10.1109/TNSM.2025.3561739
Mengyuan Zhang;Juan Fang;Ziyi Teng;Yaqi Liu;Shen Wu
The rapid advancement of artificial intelligence applications has resulted in the deployment of a growing number of deep neural networks (DNNs) on mobile devices. Given the limited computational capabilities and small battery capacity of these devices, supporting efficient DNN inference presents a significant challenge. This paper considers the joint design of DNN model partitioning and offloading under high-concurrency task scenarios. The primary objective is to accelerate DNN task inference and reduce computational delay. First, we propose an innovative adaptive inference framework that partitions DNN models into interdependent sub-tasks through a hierarchical partitioning method. Second, we develop a delay prediction model based on a Random Forest (RF) regression algorithm to estimate the computational delay of each sub-task on different devices. Finally, we design a high-performance DNN partitioning and task offloading method based on an attention mechanism-aided Soft Actor-Critic (AMSAC) algorithm. The bandwidth allocation for each user is determined by the attention mechanism based on the characteristics of the DNN tasks, and the Soft Actor-Critic algorithm is used for adaptive layer-level partitioning and offloading of the DNN model, reducing collaborative inference delay. Extensive experiments demonstrate that our proposed AMSAC algorithm effectively reduces DNN task inference latency and improves service quality.
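In its simplest form, layer-level partitioning with a delay model reduces to scanning candidate split points. The sketch below assumes per-layer delays have already been predicted (e.g., by an RF regressor, as in the abstract) and ignores bandwidth contention; `best_split` and its parameters are illustrative names, not the paper's interface:

```python
def best_split(local_ms, remote_ms, out_kb, input_kb, bw_kbps):
    """Run layers [0..k) on the device, send the activation, run [k..n) remotely.

    local_ms / remote_ms: predicted per-layer delays (ms) on device / server.
    out_kb: activation size produced by each layer; input_kb: model input size.
    Returns (split index, total delay in ms) minimizing compute + transfer.
    """
    n = len(local_ms)
    best = (None, float("inf"))
    for k in range(n + 1):  # k = 0 -> fully remote, k = n -> fully local
        compute = sum(local_ms[:k]) + sum(remote_ms[k:])
        if k == n:
            tx = 0.0  # nothing to transmit
        else:
            size = input_kb if k == 0 else out_kb[k - 1]
            tx = size / bw_kbps * 1000.0  # transfer time in ms
        cost = compute + tx
        if cost < best[1]:
            best = (k, cost)
    return best
```

With a fast server and a bulky early activation, the optimum tends to sit after the layer whose output is small — the classic motivation for splitting mid-network rather than at either end.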
IEEE Transactions on Network and Service Management, vol. 22, no. 3, pp. 2914–2927. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10969114
Cited by: 0
Joint Power Allocation and Task Scheduling for Data Offloading in Non-Geostationary Orbit Satellite Networks
IF 4.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-04-16 | DOI: 10.1109/TNSM.2025.3561266
Lijun He;Ziye Jia;Juncheng Wang;Erick Lansard;Zhu Han;Chau Yuen
In Non-Geostationary Orbit Satellite Networks (NGOSNs) with a large number of battery-powered satellites, proper power allocation and task scheduling are crucial to improving data offloading efficiency. In this work, we jointly optimize power allocation and task scheduling to achieve energy-efficient data offloading in NGOSNs. Our goal is to properly balance the minimization of total energy consumption against the maximization of the sum weights of tasks. Due to the tight coupling between power allocation and task scheduling, we first derive the optimal power allocation solution to the joint optimization problem under any given task scheduling policy. We then leverage a conflict graph model to transform the joint optimization problem into an Integer Linear Programming (ILP) problem under any given power allocation strategy. We exploit the unique structure of the ILP problem to derive an efficient semidefinite relaxation-based solution. Finally, we utilize a genetic framework to combine the above solutions into a two-layer solution to the original joint optimization problem. Simulation results demonstrate that our proposed solution properly balances the reduction of total energy consumption and the improvement of the sum weights of tasks, achieving superior system performance over the current literature.
IEEE Transactions on Network and Service Management, vol. 22, no. 3, pp. 2882–2896.
Cited by: 0
Energy-Efficient Node Localization in Time-Varying UAV-RIS-Assisted and Cluster-Based IoT Networks
IF 4.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-04-16 | DOI: 10.1109/TNSM.2025.3561269
Vikash Kumar Bhardwaj;Aagat Shukla;Om Jee Pandey
This paper proposes a novel method for energy-efficient node localization in time-varying Internet of Things (IoT) networks. The method utilizes Unmanned Aerial Vehicles (UAVs) and Reconfigurable Intelligent Surfaces (RISs) over cluster-based IoT networks, resulting in improved localization accuracy and Signal-to-Interference-plus-Noise Ratio (SINR) at the Base Station (BS). First, the proposed method computes the approximate coordinates of the User Equipments (UEs) through trilateration, utilizing a dataset comprising the coordinates of anchor nodes and the Received Signal Strength (RSS) between UE-RIS pairs. Subsequently, K-means clustering is applied to group UEs efficiently based on their spatial proximity, leading to optimal RIS requirements. To further enhance the localization precision of the UEs, a Reinforcement Learning (RL) algorithm with a collision avoidance mechanism is employed over the RIS-mounted UAVs. This approach dynamically relocates a UAV-RIS pair to the maximum-SINR position over the cluster. To compute the SINR value at a spatial location in the network, a novel approach is proposed herein that utilizes a radio map of the network. The relocation of the UAV-RIS pair is followed by a novel method for computing the optimal phases of the RIS elements, maximizing SINR at the BS. The final step involves Capon beamforming, strategically applied to the antenna elements at the BS, yielding a further SINR improvement. The holistic integration of trilateration, clustering, RL, and beamforming collectively yields a system that achieves energy efficiency, accurate localization, and enhanced SINR at the BS. Experimental results demonstrate the effectiveness of the proposed methods, showcasing their potential for real-world scenarios where energy consumption and localization accuracy are critical considerations. To further validate the proposed methods, their performance is also compared with that of existing methods.
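The trilateration step can be sketched as a linearized least-squares solve: subtracting the first range equation from the others yields a linear system in the unknown position. Distances are assumed to have already been derived from RSS; the function name and interface are illustrative, not the paper's:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Linearized least-squares trilateration.

    From |x - a0|^2 = d0^2 and |x - ai|^2 = di^2, subtracting the first
    equation from each other one cancels the quadratic term |x|^2 and
    leaves the linear system  2 (ai - a0) . x = d0^2 - di^2 + |ai|^2 - |a0|^2.
    """
    a0, d0 = np.asarray(anchors[0], float), dists[0]
    A, b = [], []
    for ai, di in zip(anchors[1:], dists[1:]):
        ai = np.asarray(ai, float)
        A.append(2 * (ai - a0))
        b.append(d0**2 - di**2 + np.dot(ai, ai) - np.dot(a0, a0))
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x
```

With more than three anchors the same solve averages out noisy RSS-derived ranges, which is why least squares is the usual choice over exact circle intersection.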
IEEE Transactions on Network and Service Management, vol. 22, no. 3, pp. 2897–2913.
Cited by: 0
MOHFL: Multi-Level One-Shot Hierarchical Federated Learning With Enhanced Model Aggregation Over Non-IID Data
IF 4.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-04-15 | DOI: 10.1109/TNSM.2025.3560629
Huili Liu;Yinglong Ma;Chenqi Guo;Xiaofeng Liu;Tingdong Wang
Hierarchical federated learning (HFL) is a privacy-preserving distributed machine learning framework with a client-edge-cloud hierarchy, where multiple edge servers perform partial model aggregation to reduce costly communication with the cloud server. Nevertheless, most existing HFL methods require extensive iterative communication and public datasets, which not only increase communication overhead but also raise privacy and security concerns. Moreover, non-independent and identically distributed (non-IID) data among devices can significantly impact the accuracy of the global model in HFL. To address these challenges, we propose a multi-level one-shot HFL framework (MOHFL), which aims to improve the performance of the global model in a single communication round. Specifically, we employ conditional variational autoencoders (CVAEs) as local models and use the aggregated decoders to generate an IID training set for the global model, thereby mitigating the negative impact of non-IID data. We improve the performance of CVAEs under different levels of data heterogeneity through a dominant class-based data selection method. Subsequently, an edge aggregation scheme based on multi-teacher knowledge distillation and contrastive learning is proposed to aggregate the knowledge from local decoders to edge decoders. Extensive experiments on four real-world datasets demonstrate that MOHFL is very competitive against four state-of-the-art baselines under various settings.
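Multi-teacher knowledge distillation, as used in the edge aggregation scheme, trains a student against softened targets derived from several teachers. Below is a minimal sketch of one standard recipe (average the teachers' logits, then apply a temperature) — an assumption for illustration, not necessarily MOHFL's exact loss:

```python
import math

def soft_targets(teacher_logits, T=2.0):
    """Soft targets for distillation: average teacher logits, soften with
    temperature T, and normalize with a softmax.

    teacher_logits: list of per-teacher logit vectors (same length each).
    Larger T spreads probability mass, exposing the teachers' "dark knowledge".
    """
    n = len(teacher_logits[0])
    avg = [sum(t[i] for t in teacher_logits) / len(teacher_logits) for i in range(n)]
    exps = [math.exp(a / T) for a in avg]
    s = sum(exps)
    return [e / s for e in exps]
```

A student (here, an edge decoder) would then be trained to match these targets, e.g. with a KL-divergence loss against its own temperature-softened outputs.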
Extensive experiments on four real-world datasets demonstrate that MOHFL is very competitive against four state-of-the-art baselines under various settings.
IEEE Transactions on Network and Service Management, vol. 22, no. 3, pp. 2853–2865.
Cited by: 0
Asynchronous Federated Caching Strategy for Multi-Satellite Collaboration Based on Deep Reinforcement Learning
IF 4.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-04-15 | DOI: 10.1109/TNSM.2025.3560833
Min Jia;Liang Zhang;Jian Wu;Qing Guo;Xuemai Gu
By incorporating caching functions into Low Earth Orbit (LEO) satellites, users worldwide can benefit from caching services. However, satellite caching faces the following challenges: 1) the continuous mobility of satellites introduces dynamic shifts in user distribution, resulting in unpredictable variations in the content of interest over time; 2) cached content easily becomes obsolete due to the brief connection times between satellites and clients; and 3) significant concerns arise regarding data privacy and security, as users may be reluctant to transmit local data. To address these challenges, we propose an asynchronous federated caching strategy (AFCS) consisting of an access satellite and several collaboration satellites. Clients employ an asynchronous federated learning methodology to collaboratively train a global model for predicting content popularity. To protect privacy, clients are not required to upload local data; instead, they only transmit model hyperparameters. This approach significantly reduces the risk of data leakage, thereby safeguarding data privacy. We also propose a novel strategy for selecting the clients that participate in global model training. Through model training, we obtain a preliminary caching strategy. To further improve caching performance, we propose a multi-satellite collaboration scheme based on deep reinforcement learning.
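Asynchronous federated aggregation typically discounts stale client updates when mixing them into the global model. The rule below is one common staleness-weighted update, shown only to illustrate the mechanism; AFCS's actual aggregation rule may differ, and `base_lr` is an assumed hyperparameter:

```python
def async_update(global_w, client_w, staleness, base_lr=0.5):
    """Mix one client's weights into the global model, asynchronously.

    staleness: number of global updates that happened since the client
    pulled its copy of the model; staler updates get a smaller mixing
    weight alpha, so they perturb the global model less.
    """
    alpha = base_lr / (1 + staleness)
    return [(1 - alpha) * g + alpha * c for g, c in zip(global_w, client_w)]
```

Because each client is folded in as soon as it finishes, no satellite has to wait for stragglers with short or intermittent contact windows, which is the appeal of asynchrony in this setting.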
{"title":"Asynchronous Federated Caching Strategy for Multi-Satellite Collaboration Based on Deep Reinforcement Learning","authors":"Min Jia;Liang Zhang;Jian Wu;Qing Guo;Xuemai Gu","doi":"10.1109/TNSM.2025.3560833","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3560833","url":null,"abstract":"By incorporating caching functions into Low Earth Orbit (LEO) satellites, users worldwide can benefit from caching services. However, satellite caching faces the following challenges: 1) The continuous mobility of satellites introduces dynamic shifts in user distribution, resulting in unpredictable variations in interested content over time. 2) The cached content is susceptible to becoming obsolete due to the brief connection times established between satellites and clients. 3) Significant concerns arise regarding data privacy and security. Users may exhibit reluctance to transmit local data for privacy protection. To address the abovementioned challenges, we propose an asynchronous federated caching strategy (AFCS) consisting of an access satellite and several collaboration satellites. Clients employ an asynchronous federated learning methodology to collaboratively train a global model for predicting content popularity. Concerning privacy protection, clients are not required to upload local data. Instead, they only need to transmit the model hyperparameters. This approach significantly diminishes the risk of data leakage, thereby safeguarding data privacy effectively. We propose a novel strategy for client selection participating in global model training. Through model training, we can get a preliminary caching strategy. To further improve caching performance, we propose a multiple-satellites collaboration based on deep reinforcement learning. 
This collaborative approach enhances the cache hit ratio and diminishes content request delay.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 3","pages":"2866-2881"},"PeriodicalIF":4.7,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
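The asynchronous aggregation at the heart of such a federated scheme can be sketched as follows; the staleness-decay rule, function names, and values are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def async_aggregate(global_w, client_w, staleness, base_lr=0.5):
    """Blend one client's model parameters into the global model,
    down-weighting stale updates (hypothetical decay rule)."""
    alpha = base_lr / (1.0 + staleness)  # older updates count less
    return (1 - alpha) * global_w + alpha * client_w

# Clients ship only model parameters, never raw request logs, so the
# content-popularity predictor is trained without exposing local data.
global_w = np.zeros(4)
global_w = async_aggregate(global_w, np.ones(4), staleness=0)      # fresh update
global_w = async_aggregate(global_w, 2 * np.ones(4), staleness=3)  # stale update
```

Because clients update the global model one at a time rather than in synchronized rounds, a slow satellite-to-client link delays only its own contribution, not the whole training round.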
Cited by: 0
DRL-Based Time-Varying Workload Scheduling With Priority and Resource Awareness
IF 4.7 JCR Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-04-10 DOI: 10.1109/TNSM.2025.3559610
Qifeng Liu;Qilin Fan;Xu Zhang;Xiuhua Li;Kai Wang;Qingyu Xiong
With the proliferation of cloud services and the continuous growth in enterprises' demand for dynamic multi-dimensional resources, effective strategies for time-varying workload scheduling have become increasingly important. In this paper, we propose a deep reinforcement learning (DRL)-based method for time-varying workload scheduling, aiming to allocate resources efficiently across the servers in a cluster. Specifically, we integrate a classifier and a queue scorer to construct a priority queue that exploits temporal resource-utilization patterns across different workload classes. We then design parallel graph attention layers to capture the dimensional features and temporal dynamics of the cloud server cluster. Moreover, we propose a DRL algorithm that generates scheduling strategies able to adapt to dynamic environments. Validation on real-world traces from a Google cluster demonstrates that our method outperforms existing approaches on key metrics of cloud server cluster management.
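The classifier-plus-scorer priority queue described above can be sketched roughly as follows; the class labels, scores, and function names are hypothetical stand-ins, not the paper's implementation:

```python
import heapq

def build_priority_queue(workloads, class_score):
    """Order pending workloads by a score derived from their class label —
    a stand-in for the paper's classifier + queue-scorer pipeline."""
    heap = []
    for i, w in enumerate(workloads):
        # heapq is a min-heap, so negate: higher-scored classes dequeue first.
        # The index i breaks ties in arrival order.
        heapq.heappush(heap, (-class_score[w["class"]], i, w["name"]))
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

class_score = {"latency-critical": 3, "batch": 1, "best-effort": 0}
jobs = [{"name": "etl", "class": "batch"},
        {"name": "api", "class": "latency-critical"},
        {"name": "crawl", "class": "best-effort"}]
order = build_priority_queue(jobs, class_score)  # ["api", "etl", "crawl"]
```

In the full system, the score per class would itself be learned from each class's temporal resource-utilization pattern rather than fixed by hand as above.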
IEEE Transactions on Network and Service Management, vol. 22, no. 3, pp. 2838-2852.
Cited by: 0
ArchSentry: Enhanced Android Malware Detection via Hierarchical Semantic Extraction
IF 4.7 JCR Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-04-10 DOI: 10.1109/TNSM.2025.3559255
Tianbo Wang;Mengyao Liu;Huacheng Li;Lei Zhao;Changnan Jiang;Chunhe Xia;Baojiang Cui
Android malware poses a significant challenge for mobile platforms. To evade detection, contemporary malware variants use API substitution or obfuscation techniques to hide malicious activities and mask their shallow semantic characteristics. However, existing research lacks analysis of the hierarchical semantics associated with Android apps. To address this problem, we propose ArchSentry, an enhanced Android malware detection approach based on hierarchical semantic extraction. First, we select entities and their relationships relevant to Android software behavior from the software architecture and represent them as a heterogeneous graph. Then, we construct meta-paths to represent rich semantic information, achieving semantic enhancement and improving efficiency. Next, we design a meta-path semantic selection method based on KL divergence to identify and eliminate redundant features. To obtain a comprehensive representation of the overall software semantics and improve performance, we build a feature-fusion approach based on Restricted Boltzmann Machines (RBM) and AutoEncoders (AE) during the pre-training phase, while preserving the probability-distribution characteristics of the various meta-paths. Finally, deep neural networks (DNNs) process the fused features into comprehensive feature sets. Experimental results on real-world application samples indicate that ArchSentry achieves a 99.2% detection rate for Android malware with a false positive rate below 1%, surpassing current state-of-the-art approaches.
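The KL-divergence-based redundancy filtering can be illustrated with a minimal sketch in the spirit of the meta-path selection step; the meta-path names, distributions, and threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions, with smoothing."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def drop_redundant(paths, threshold=0.05):
    """Keep a meta-path only if its feature distribution differs
    (KL > threshold) from every path already kept — a simplified
    stand-in for the selection rule described above."""
    kept = []
    for name, dist in paths.items():
        if all(kl_divergence(dist, paths[k]) > threshold for k in kept):
            kept.append(name)
    return kept

paths = {"app->api":  [0.7, 0.2, 0.1],
         "app->api2": [0.7, 0.2, 0.1],   # near-duplicate, gets dropped
         "app->perm": [0.1, 0.3, 0.6]}
selected = drop_redundant(paths)  # ['app->api', 'app->perm']
```

Discarding meta-paths whose distributions carry no new information keeps the downstream fusion and DNN stages from learning the same signal twice.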
IEEE Transactions on Network and Service Management, vol. 22, no. 3, pp. 2822-2837.
Cited by: 0
Joint Controller Placement and TDMA Scheduling in Software Defined Wireless Multihop Networks
IF 4.7 JCR Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-04-09 DOI: 10.1109/TNSM.2025.3559104
Yiannis Papageorgiou;Merkouris Karaliopoulos;Kostas Choumas;Iordanis Koutsopoulos
We study TDMA-scheduled Software Defined Wireless Multihop Networks (SDWMNs), in which data traffic and SDN control messages share the same network links and TDMA resources. Since the topology of WMNs changes dynamically, maintaining a responsive SDN control plane is essential for meeting data-traffic rate requirements. Placing more SDN controllers reduces communication delays at the SDN layer and increases its responsiveness, but demands more TDMA resources and leaves fewer available for data traffic. We analyze this trade-off between data-traffic performance and SDN-layer responsiveness through two distinct resource-allocation mechanisms in the WMN: SDN controller placement and TDMA scheduling. We capture their interaction in an optimization problem that maximizes SDN responsiveness subject to data-traffic rate requirements, topology conditions, and the available TDMA resources. We propose a novel heuristic for this hard-to-solve problem that leverages the network-state information gathered at the SDN layer. We find that our heuristic can increase SDN responsiveness by 44% when the rate reserved for rate-elastic data traffic is varied within 40% of what is nominally requested. The heuristic is modular in accommodating different controller-placement algorithms and robust to different alternatives for the SDN software implementation.
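The controller-count versus data-slot trade-off can be illustrated with a toy greedy sketch; the per-controller slot cost, demand figures, and function name are assumptions for illustration, not the paper's heuristic or model:

```python
def split_slots(total_slots, data_demand, slots_per_controller, max_controllers):
    """Greedy sketch of the control/data trade-off: keep adding SDN
    controllers (each costs TDMA slots but lowers control-plane delay)
    while the remaining slots still cover the data-traffic demand."""
    best = 0
    for c in range(1, max_controllers + 1):
        if total_slots - c * slots_per_controller >= data_demand:
            best = c  # more controllers -> more responsive SDN layer
        else:
            break
    return best

# 100 slots per frame, data traffic needs 70, each controller costs 8 slots:
# 3 controllers fit (3*8 = 24 <= 30), a fourth would starve data traffic.
controllers = split_slots(100, 70, 8, 6)  # 3
```

The actual joint problem is far harder than this sketch, since both the slot cost and the responsiveness gain of a controller depend on where in the multihop topology it is placed.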
IEEE Transactions on Network and Service Management, vol. 22, no. 3, pp. 2807-2821.
Cited by: 0