
Latest Publications from IEEE Transactions on Network and Service Management

sketchPro: Identifying Top-k Items Based on Probabilistic Update on Programmable Data Plane
IF 5.4 | CAS Zone 2 (Computer Science) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2025-11-19 | DOI: 10.1109/TNSM.2025.3634742
Keke Zheng;Mai Zhang;Mimi Qian;Waiming Lau;Lin Cui
Detecting the top-k heaviest items in network traffic is fundamental to traffic engineering, congestion control, and security analytics. Controller-side solutions suffer from high communication latency and heavy resource overhead, motivating the migration of this task to programmable data planes (PDP). However, PDP hardware (e.g., Tofino ASIC) offers only a few megabytes of on-chip SRAM per pipeline stage and supports neither loops nor complex arithmetic, making accurate top-k detection highly challenging. This paper proposes sketchPro, a novel sketch-based solution that employs a probabilistic update scheme to retain large items, enabling accurate top-k identification on PDP with minimal memory. sketchPro dynamically adjusts the probability of updates based on the current statistical size of the items and the frequency of hash collisions, thus allowing sketchPro to effectively detect top-k items. We have implemented sketchPro on PDP, including P4 software switch (i.e., BMv2) and hardware switch (Intel Tofino ASIC). Extensive evaluation results demonstrate that sketchPro can achieve more than 95% precision with only 10KB of memory.
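The probabilistic update rule is the core idea here. As a rough illustration only, the Python sketch below keeps one (key, count) pair per bucket and, on a hash collision, evicts the stored key with probability 1/(count+1), which statistically favors heavy items. The single-row layout and this particular replacement rule are simplifying assumptions; the paper's scheme also factors in collision frequency and runs within P4 pipeline constraints.

```python
import hashlib
import random
from collections import Counter

class TinyTopKSketch:
    """Toy single-row sketch with probabilistic replacement on hash collision."""

    def __init__(self, width=1024, seed=7):
        self.width = width
        self.seed = seed
        self.buckets = [None] * width  # each slot holds [key, count] or None

    def _index(self, key):
        h = hashlib.blake2b(f"{self.seed}:{key}".encode(), digest_size=4)
        return int.from_bytes(h.digest(), "big") % self.width

    def update(self, key):
        slot = self._index(key)
        entry = self.buckets[slot]
        if entry is None:
            self.buckets[slot] = [key, 1]
        elif entry[0] == key:
            entry[1] += 1
        else:
            # Probabilistic update: buckets with small counts are easy to
            # evict, buckets with large counts (likely heavy items) are sticky.
            if random.random() < 1.0 / (entry[1] + 1):
                entry[0], entry[1] = key, entry[1] + 1
            # otherwise the colliding item is simply not counted

    def top_k(self, k):
        entries = [e for e in self.buckets if e is not None]
        return sorted(entries, key=lambda e: e[1], reverse=True)[:k]

# Quick sanity check against an exact counter on a skewed synthetic stream.
if __name__ == "__main__":
    random.seed(0)
    stream = [f"flow{int(random.paretovariate(1.2))}" for _ in range(50_000)]
    sk = TinyTopKSketch()
    for item in stream:
        sk.update(item)
    print("sketch:", sk.top_k(5))
    print("exact :", Counter(stream).most_common(5))
```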
Citations: 0
Dynamic Task Scheduling and Adaptive GPU Resource Allocation in the Cloud
IF 5.4 | CAS Zone 2 (Computer Science) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2025-11-19 | DOI: 10.1109/TNSM.2025.3635529
Hoda Sedighi;Fetahi Wuhib;Roch H. Glitho
The growing demand for computational power in cloud computing has made Graphics Processing Units (GPUs) essential for providing substantial computational capacity. Efficiently allocating GPU resources is crucial due to their high cost. Additionally, it’s necessary to consider cloud environment characteristics, such as dynamic workloads, multi-tenancy, and requirements like isolation. One key challenge is efficiently allocating GPU resources while maintaining isolation and adapting to dynamic workload fluctuations. Another challenge is ensuring scheduling maintains fairness between tenants while meeting task requirements (e.g., completion deadlines). While existing approaches have addressed each challenge individually, none have tackled both challenges simultaneously. This is especially important in dynamic environments where applications continuously request and release GPU resources. This paper introduces a new dynamic GPU resource allocation method, incorporating fair and requirement-aware task scheduling. We present a novel algorithm that leverages the multitasking capabilities of GPUs supported by both hardware and software. The algorithm schedules tasks and continuously reassesses resource allocation as new tasks arrive to ensure fairness. Simultaneously, it adjusts allocations to maintain isolation and satisfy task requirements. Experimental results indicate that our proposed algorithm offers several advantages over existing state-of-the-art solutions. It reduces GPU resource usage by 88% and significantly decreases task completion times.
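As a rough sketch of the "reassess allocations on every arrival" idea, the toy Python below redistributes one GPU's capacity with a max-min fair water-filling split each time the task set changes. The demand/share model and the water-filling rule are assumptions made for illustration; the published algorithm additionally weighs deadlines, isolation, and the hardware/software multitasking capabilities of the GPU.

```python
from dataclasses import dataclass

@dataclass
class GpuTask:
    name: str
    demand: float       # fraction of one GPU the task can usefully consume
    share: float = 0.0  # fraction actually granted

def reassess_shares(tasks, capacity=1.0):
    """Max-min fair (water-filling) split of one GPU's capacity across tasks."""
    pending = sorted(tasks, key=lambda t: t.demand)
    remaining, left = capacity, len(pending)
    for task in pending:
        fair = remaining / left
        task.share = min(task.demand, fair)
        remaining -= task.share
        left -= 1
    return tasks

if __name__ == "__main__":
    running = [GpuTask("train-a", 0.7), GpuTask("infer-b", 0.2)]
    reassess_shares(running)
    print({t.name: round(t.share, 2) for t in running})
    running.append(GpuTask("train-c", 0.6))   # a new arrival triggers reassessment
    reassess_shares(running)
    print({t.name: round(t.share, 2) for t in running})
```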
Citations: 0
Proactive Service Assurance in 5G and B5G Networks: A Closed-Loop Algorithm for End-to-End Network Slices
IF 5.4 | CAS Zone 2 (Computer Science) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2025-11-19 | DOI: 10.1109/TNSM.2025.3635028
Nguyen Phuc Tran;Oscar Delgado;Brigitte Jaumard
Ensuring the highest levels of performance and reliability for customized services in fifth-generation (5G) and beyond (B5G) networks requires the automation of resource management within network slices. In this paper, we propose PCLANSA, a proactive closed-loop algorithm that dynamically allocates and scales resources to meet the demands of diverse applications in real time for an end-to-end (E2E) network slice. In our experiment, PCLANSA was evaluated to ensure that each virtual network function is allocated the resources it requires, thereby maximizing efficiency and minimizing waste. This goal is achieved through the intelligent scaling of virtual network functions. The benefits of PCLANSA have been demonstrated across various network slice types, including eMBB, mMTC, uRLLC, and VoIP. This finding indicates the potential for substantial gains in resource utilization and cost savings, with the possibility of reducing over-provisioning by up to 54.85%.
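The abstract describes a proactive closed loop that forecasts slice demand and scales VNF resources ahead of time. The sketch below is a minimal, hypothetical version of that loop: a moving-average forecaster plus a headroom-based replica decision per interval. The forecaster, the per-replica capacity, and the headroom factor are illustrative assumptions, not PCLANSA's actual models.

```python
import math
from collections import deque

def forecast(history, window=3):
    """Moving-average demand forecast; a stand-in for the paper's predictor."""
    recent = list(history)[-window:]
    return sum(recent) / len(recent)

def scale_decision(predicted_load, capacity_per_replica=100.0, headroom=1.2):
    """Replica count needed to serve the predicted load with some headroom."""
    return max(1, math.ceil(predicted_load * headroom / capacity_per_replica))

def closed_loop(demand_trace):
    """One monitor -> forecast -> scale iteration per observed interval."""
    history = deque(maxlen=12)
    plan = []
    for observed in demand_trace:
        history.append(observed)
        replicas = scale_decision(forecast(history))
        plan.append((observed, replicas))
    return plan

if __name__ == "__main__":
    trace = [80, 120, 260, 400, 390, 150, 90]   # requests per interval for one slice VNF
    for load, n in closed_loop(trace):
        print(f"load={load:4d} -> replicas={n}")
```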
Citations: 0
Intent-Based Automatic Security Enhancement Method Toward Service Function Chain
IF 5.4 | CAS Zone 2 (Computer Science) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2025-11-19 | DOI: 10.1109/TNSM.2025.3635228
Deqiang Zhou;Xinsheng Ji;Wei You;Hang Qiu;Yu Zhao;Mingyan Xu
The reliance on Network Function Virtualization (NFV) and Software-Defined Network (SDN) introduces a wide variety of security risks in Service Function Chain (SFC), necessitating the implementation of automated security measures to safeguard ongoing service delivery. To address the security risks faced by online SFCs and the shortcomings of traditional manual configuration, we introduce Intent-Based Networking (IBN) for the first time to propose an automatic security enhancement method through embedding Network Security Functions (NSFs). However, the diverse security requirements and performance requirements of SFCs pose significant challenges to the translation from intents to NSF embedding schemes, which manifest in two main aspects. In the logical orchestration stage, NSF composition consisting of NSF sets and their logical embedding locations will significantly impact the security effect. So security intent language model, a formalized method, is proposed to express the security intents. Additionally, NSF Embedding Model Generation Algorithm (EMGA) is designed to determine NSF composition by utilizing NSF capability label model and NSF collaboration model, where NSF composition can be further formulated as NSF embedding model. In the physical embedding stage, the differentiated service requirements among SFCs result in NSF embedded model obtained by EMGA being a multi-objective optimization problem with variable objectives. Therefore, Adaptive Security-aware Embedding Algorithm (ASEA) featuring adaptive link weight mapping mechanism is proposed to solve the optimal NSF embedding schemes. This enables the automatic translation of security intents into NSF embedding schemes, ensuring that both security requirements are met and service performance is guaranteed. We develop the system instance to verify the feasibility of intent translation solution, and massive evaluations demonstrate that ASEA algorithm has better performance compared with the existing works in the diverse requirement scenarios.
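To make the intent-to-NSF translation step concrete, here is a toy greedy set-cover in Python that picks NSF types until every capability keyword in a security intent is covered. The capability catalogue and keywords are invented placeholders, and the real EMGA/ASEA pipeline also decides logical embedding positions and physical placement, which this sketch omits.

```python
# Hypothetical capability catalogue: which NSF types satisfy which intent keywords.
NSF_CATALOGUE = {
    "firewall":    {"block", "filter"},
    "ids":         {"detect", "inspect"},
    "waf":         {"inspect", "http"},
    "ratelimiter": {"throttle", "ddos"},
}

def compose_nsfs(intent_keywords):
    """Greedy set cover over capability labels: keep picking the NSF that
    covers the most still-uncovered keywords until the intent is satisfied."""
    uncovered = set(intent_keywords)
    chosen = []
    while uncovered:
        nsf, caps = max(NSF_CATALOGUE.items(),
                        key=lambda kv: len(kv[1] & uncovered))
        gained = caps & uncovered
        if not gained:
            raise ValueError(f"no NSF covers: {sorted(uncovered)}")
        chosen.append(nsf)
        uncovered -= gained
    return chosen

if __name__ == "__main__":
    intent = {"detect", "block", "ddos"}   # "detect and block DDoS traffic"
    print(compose_nsfs(intent))            # ['firewall', 'ids', 'ratelimiter'] with this toy catalogue
```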
Citations: 0
Joint Computing Offloading and Resource Allocation for Classification Intelligence Tasks in MEC Systems
IF 5.4 | CAS Zone 2 (Computer Science) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2025-11-12 | DOI: 10.1109/TNSM.2025.3632162
Yuanpeng Zheng;Tiankui Zhang;Rong Huang;Yapeng Wang
Mobile edge computing (MEC) facilitates high reliability and low-latency applications by bringing computation and data storage closer to end-users. Intelligent computing is an important application of MEC, where computing resources are used to solve intelligent task-related problems based on task requirements. However, efficiently offloading computing and allocating resources for intelligent tasks in MEC systems is a challenging problem due to complex interactions between task requirements and MEC resources. To address this challenge, we investigate joint computing offloading and resource allocation for classification intelligence tasks (CITs) in MEC systems. Our goal is to optimize system utility by jointly considering computing accuracy and task delay to achieve maximum utility of our system. We focus on CITs and formulate an optimization problem that considers task characteristics including the accuracy requirements and the parallel computing capabilities in MEC systems. To solve the proposed problem, we decompose it into three subproblems: subcarrier allocation, computing capacity allocation and compression offloading. We use successive convex approximation and convex optimization method to derive optimized feasible solutions for the subcarrier allocation, offloading variable, computing capacity allocation, and compression ratio. Based on our solutions, we design an efficient joint computing offloading and resource allocation algorithm for CITs in MEC systems. Our simulation demonstrates that the proposed algorithm significantly improves the performance by 16.4% on average and achieves a flexible trade-off between system revenue and cost considering CITs compared with benchmarks.
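The decomposition into subproblems can be pictured as an alternating loop: fix the offloading decisions and split edge capacity, then fix the split and re-decide where each task runs. The Python toy below does exactly that with made-up capacities and a delay-only criterion; the paper instead solves each subproblem with successive convex approximation and also optimizes compression ratios and accuracy-aware utility.

```python
def edge_delay(cycles, share, edge_capacity=2.0e9):
    """Completion delay of an offloaded task given its share of edge capacity."""
    return cycles / (share * edge_capacity)

def local_delay(cycles, local_capacity=4.0e8):
    """Completion delay when the task runs on the device itself."""
    return cycles / local_capacity

def alternate(tasks, rounds=5):
    """Toy block-coordinate loop: (1) fix offload decisions and split edge
    capacity in proportion to offloaded workload, (2) fix the split and let
    each task pick the side with the lower delay."""
    offload = {t: True for t in tasks}   # start with everything at the edge
    for _ in range(rounds):
        total = sum(c for t, c in tasks.items() if offload[t]) or 1.0
        share = {t: (c / total if offload[t] else 0.0) for t, c in tasks.items()}
        offload = {t: edge_delay(c, max(share[t], 1e-9)) < local_delay(c)
                   for t, c in tasks.items()}
    return offload

if __name__ == "__main__":
    tasks = {"img-cls-1": 6e8, "img-cls-2": 9e8, "anomaly-3": 1e8}  # CPU cycles per task
    print(alternate(tasks))
```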
Citations: 0
Revenue-Aware Seamless Content Distribution in Satellite-Terrestrial Integrated Networks
IF 5.4 | CAS Zone 2 (Computer Science) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2025-11-06 | DOI: 10.1109/TNSM.2025.3629810
Haftay Gebreslasie Abreha;Ilora Maity;Youssouf Drif;Christos Politis;Symeon Chatzinotas
With the surging demand for data-intensive applications, ensuring seamless content delivery in Satellite-Terrestrial Integrated Networks (STINs) is crucial, especially for remote users. Dynamic Ad Insertion (DAI) enhances monetization and user experience, while Mobile Edge Computing (MEC) in STINs enables distributed content caching and ad insertion. However, satellite mobility and time-varying topologies cause service disruptions, while excessive or poorly placed ads risk user disengagement, impacting revenue. This paper proposes a novel framework that jointly addresses three challenges: (i) service continuity- and topology-aware content caching to adapt to STIN dynamics, (ii) Distributed DAI (D-DAI) that minimizes feeder link load and storage overhead by avoiding redundant ad-variant content storage through distributed ad stitching, and (iii) revenue-aware content distribution that explicitly models user disengagement due to ad overload to balance monetization and user satisfaction. We formulate the problem as two hierarchical Integer Linear Programming (ILP) optimizations: one content caching that aims to maximize cache hit rate and another optimizing content distribution with DAI to maximize revenue, minimize end-user costs, and enhance user experience. We develop greedy algorithms for fast initialization and a Binary Particle Swarm Optimization (BPSO)–based strategy for enhanced performance. Simulation results demonstrate that the proposed approach achieves over a 4.5% increase in revenue and reduces cache retrieval delay by more than 39% compared to the benchmark algorithms.
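As an illustration of the BPSO-based strategy mentioned above, the sketch below runs a minimal binary PSO over a knapsack-style cache placement: each bit says whether a content item is cached, and fitness is the popularity value of a feasible placement. The sigmoid velocity-to-bit mapping and the parameters are standard BPSO defaults assumed for illustration; the paper's objective additionally couples placement with distributed ad stitching and revenue terms.

```python
import math
import random

def fitness(bits, values, sizes, capacity):
    """Total popularity value of cached items; infeasible placements score 0."""
    size = sum(s for b, s in zip(bits, sizes) if b)
    return sum(v for b, v in zip(bits, values) if b) if size <= capacity else 0.0

def bpso_cache(values, sizes, capacity, particles=20, iters=60, seed=1):
    """Minimal binary PSO: velocities pass through a sigmoid to give per-bit
    probabilities of setting each cache bit."""
    rng = random.Random(seed)
    n = len(values)
    pos = [[rng.randint(0, 1) for _ in range(n)] for _ in range(particles)]
    vel = [[0.0] * n for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pfit = [fitness(p, values, sizes, capacity) for p in pos]
    gbest = pbest[max(range(particles), key=lambda i: pfit[i])][:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))
                pos[i][d] = 1 if rng.random() < prob else 0
            f = fitness(pos[i], values, sizes, capacity)
            if f > pfit[i]:
                pbest[i], pfit[i] = pos[i][:], f
                if f > fitness(gbest, values, sizes, capacity):
                    gbest = pos[i][:]
    return gbest, fitness(gbest, values, sizes, capacity)

if __name__ == "__main__":
    popularity = [9.0, 7.5, 6.0, 4.0, 3.5, 2.0]   # expected requests per content item
    size_gb    = [4.0, 3.0, 2.5, 2.0, 1.5, 1.0]
    plan, value = bpso_cache(popularity, size_gb, capacity=8.0)
    print("cache bitmap:", plan, "value:", value)
```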
Citations: 0
DDPG-Based Resource Management in Network Slicing for 5G-Advanced V2X Services
IF 5.4 | CAS Zone 2 (Computer Science) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2025-11-06 | DOI: 10.1109/TNSM.2025.3629529
Muhammad Ashar Tariq;Malik Muhammad Saad;Dongkyun Kim
The evolution of 5G technology towards 5G-Advanced has introduced advanced vehicular applications with stringent Quality-of-Service (QoS) requirements. Addressing these demands necessitates intelligent resource management within the standard 3GPP network slicing framework. This paper proposes a novel resource management scheme leveraging a Deep Deterministic Policy Gradient (DDPG) algorithm implemented in the Network Slice Subnet Management Function (NSSMF). The scheme dynamically allocates resources to network slices based on real-time traffic demands while maintaining compatibility with existing infrastructure, ensuring cost-effectiveness. The proposed framework features a two-level architecture: the gNodeB optimizes slice-level resource allocation at the upper level, and vehicles reserve resources dynamically at the lower level using the 3GPP Semi-Persistent Scheduling (SPS) mechanism. Evaluation in a realistic, trace-based vehicular environment demonstrates the scheme’s superiority over traditional approaches, achieving higher Packet Delivery Ratio (PDR), improved Spectral Efficiency (SE), and adaptability under varying vehicular densities. These results underscore the potential of the proposed solution in meeting the QoS demands of critical 5G-Advanced vehicular applications.
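To show where a DDPG agent would plug in at the NSSMF level, the sketch below implements only the environment side of one allocation round: a raw continuous action is projected to per-slice shares of a PRB budget, and a simple reward trades served demand against idle PRBs. The reward shape, PRB budget, and slice demands are assumptions for illustration; the actor-critic networks themselves are omitted and `raw_action` merely stands in for the actor's output.

```python
import random

def project_to_shares(action):
    """Map a raw continuous action vector to non-negative slice shares summing to 1."""
    clipped = [max(a, 0.0) for a in action]
    total = sum(clipped) or 1.0
    return [c / total for c in clipped]

def step(demands, action, total_prbs=100):
    """One slice-level allocation round: grant PRBs per slice and score the
    decision, rewarding served demand and penalizing idle PRBs."""
    shares = project_to_shares(action)
    grants = [round(s * total_prbs) for s in shares]
    served = [min(g, d) for g, d in zip(grants, demands)]
    waste = [max(g - d, 0) for g, d in zip(grants, demands)]
    reward = sum(served) - 0.5 * sum(waste)
    return grants, reward

if __name__ == "__main__":
    random.seed(3)
    demands = [55, 30, 10]                                  # e.g., eMBB, URLLC, mMTC PRB demand
    raw_action = [random.uniform(0, 1) for _ in demands]    # would come from the DDPG actor
    grants, reward = step(demands, raw_action)
    print("grants:", grants, "reward:", reward)
```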
Citations: 0
Adaptive Target Device Model Identification Attack in 5G Mobile Network
IF 5.4 | CAS Zone 2 (Computer Science) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2025-10-30 | DOI: 10.1109/TNSM.2025.3626804
Shaocong Feng;Baojiang Cui;Junsong Fu;Meiyi Jiang;Shengjia Chang
Enhanced system capacity is one of 5G goals. This will lead to massive heterogeneous devices in mobile networks. Mobile devices that lack basic security capability have chipset, operating system or software vulnerability. Attackers can perform Advanced Persistent Threat (APT) Attack for specific device models. In this paper, we propose an Adaptive Target Device Model Identification Attack (ATDMIA) that provides the prior knowledge for exploiting baseband vulnerability to perform targeted attacks. We discovered Globally Unique Temporary Identity (GUTI) Reuse in Evolved Packet Switching Fallback (EPSFB) and Leakage of User Equipment (UE) Capability vulnerability. Utilizing silent calls, an attacker can capture and correlate the signaling traces of the target subscriber from air interface within a specific geographic area. In addition, we design an adaptive identification algorithm which utilizes both invisible and explicit features of UE capability information to efficiently identify device models. We conducted an empirical study using 105 commercial devices, including network configuration, attack efficiency, time overhead and open-world evaluation experiments. The experimental results showed that ATDMIA can accurately correlate the EPSFB signaling traces of target victim and effectively identify the device model or manufacturer.
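A rough way to picture the capability-based identification step is a fingerprint lookup: the toy Python below matches an observed tuple of capability features against a small table, exactly on the explicit fields and approximately on a numeric one. The fingerprint table and the three features are entirely hypothetical; real UE Capability messages and the paper's adaptive algorithm use far richer invisible and explicit features.

```python
# Hypothetical fingerprint table: (band_combo_count, supports_ca, release) -> model.
# Real UE Capability messages carry far richer fields; these are placeholders.
FINGERPRINTS = {
    (412, True, 15): "vendor-A model-X",
    (208, True, 16): "vendor-B model-Y",
    (96, False, 14): "vendor-C model-Z",
}

def identify(observed, tolerance=10):
    """Match an observed capability tuple against known fingerprints: exact
    match first, then allow the numeric band-combination count to differ by
    a small tolerance while the explicit fields must agree."""
    if observed in FINGERPRINTS:
        return FINGERPRINTS[observed], "exact"
    combos, ca, rel = observed
    for (f_combos, f_ca, f_rel), model in FINGERPRINTS.items():
        if ca == f_ca and rel == f_rel and abs(combos - f_combos) <= tolerance:
            return model, "approximate"
    return None, "unknown"

if __name__ == "__main__":
    print(identify((412, True, 15)))
    print(identify((205, True, 16)))
    print(identify((512, True, 17)))
```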
Citations: 0
Generative Adversarial Networks Based Low-Rate Denial of Service Attack Detection and Mitigation in Software-Defined Networks
IF 5.4 | CAS Zone 2 (Computer Science) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2025-10-24 | DOI: 10.1109/TNSM.2025.3625278
Manjuluri Anil Kumar;Balaprakasa Rao Killi;Eiji Oki
Low-rate Denial of Service (LDoS) attacks use short, regular bursts of traffic to exploit vulnerabilities in network protocols. They are a major threat to network security, especially in Software-Defined Networking (SDN) frameworks. These attacks are challenging to detect and mitigate because of their low traffic volume, which makes them difficult to distinguish from normal traffic. We propose a real-time LDoS attack detection and mitigation framework that can protect SDN. The framework incorporates a detection module that uses a deep learning model, such as a Generative Adversarial Network (GAN), to identify the attack. An efficient mitigation module follows detection, employing mechanisms to identify and filter harmful flows in real time. Deploying the framework into SDN controllers guarantees compliance with OpenFlow standards, thereby avoiding the necessity for additional hardware. Experimental results demonstrate that the proposed system achieves a detection accuracy of over 99.98% with an average response time of 8.58 s, significantly outperforming traditional LDoS detection approaches. This study presents a scalable, real-time methodology to enhance SDN resilience against LDoS attacks.
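The framework pairs a learned detector with a real-time mitigation module. The sketch below shows only how that pairing might be wired inside the controller, with the GAN discriminator abstracted behind a placeholder scoring function; the feature pair, threshold, and drop-rule format are assumptions for illustration and are not the paper's actual model or OpenFlow messages.

```python
def discriminator_score(flow_features):
    """Placeholder for the trained GAN discriminator: returns the probability
    that a flow's feature vector looks benign. Here it is a hand-tuned
    heuristic over (mean_rate, burstiness), purely for illustration."""
    mean_rate, burstiness = flow_features
    return 1.0 / (1.0 + burstiness / max(mean_rate, 1e-6))

def control_loop(flow_table, threshold=0.4):
    """Detection-then-mitigation pass as it might run in the controller:
    flows scored below the threshold get a drop rule, others are left alone."""
    drop_rules = []
    for flow_id, features in flow_table.items():
        if discriminator_score(features) < threshold:
            drop_rules.append({"match": flow_id, "action": "drop"})
    return drop_rules

if __name__ == "__main__":
    flows = {
        "10.0.0.5->10.0.0.9:80": (120.0, 15.0),   # steady benign transfer
        "10.0.0.7->10.0.0.9:80": (8.0, 95.0),     # short periodic bursts, LDoS-like
    }
    print(control_loop(flows))
```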
Citations: 0
RISK-4-Auto: Residually Interconnected and Superimposed Kolmogorov-Arnold Networks for Automotive Network Traffic Classification
IF 5.4 | CAS Zone 2 (Computer Science) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2025-10-24 | DOI: 10.1109/TNSM.2025.3625404
Anurag Dutta;Sangita Roy;Rajat Subhra Chakraborty
In modern automobiles, a Controller Area Network (CAN) bus facilitates communication among all electronic control units for critical safety functions, including steering, braking, and fuel injection. However, due to the lack of security features, it may be vulnerable to malicious bus traffic-based attacks that cause the automobile to malfunction. Such malicious bus traffic can be the result of either external fabricated messages or direct injection through the on-board diagnostic port, highlighting the need for an effective intrusion detection system to efficiently identify suspicious network flows and potential intrusions. This work introduces Residually Interconnected and Superimposed Kolmogorov-Arnold Networks (RISK-4-Auto), a set of four deep neural network architectures for intrusion detection targeting in-vehicle network traffic classification. RISK-4-Auto models, when applied on three hexadecimally identifiable sequence-based open-source datasets (collected through direct injection in the on-board diagnostic port), outperform six state-of-the-art vehicular network intrusion detection systems (as per their accuracies) by approximately 1.0163% for all-class classification and approximately 2.5535% on focused (single-class) malicious flow detection. Additionally, RISK-4-Auto enjoys a significantly lower overhead than existing state-of-the-art models, and is suitable for real-time deployment in resource-constrained automotive environments.
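Since the datasets are sequences of hexadecimal arbitration IDs, a natural first step is turning a raw CAN trace into fixed-length integer windows for a sequence classifier. The sketch below does only that preprocessing; the window length, the ID-only features, and the frame format are assumptions, and the RISK-4-Auto networks themselves (residual, superimposed KAN layers) are not reproduced here.

```python
def frames_to_windows(can_log, window=8):
    """Turn a CAN trace of (arbitration_id_hex, payload_hex) frames into
    fixed-length windows of integer arbitration IDs, one plausible input
    format for a sequence classifier."""
    ids = [int(arb_id, 16) for arb_id, _ in can_log]
    windows = []
    for start in range(0, len(ids) - window + 1, window):
        windows.append(ids[start:start + window])
    return windows

if __name__ == "__main__":
    trace = [("0x316", "05 21 68 09 21 21 00 6f"),
             ("0x18f", "fe 5b 00 00 00 3c 00 00"),
             ("0x260", "19 21 22 30 08 8e 6d 3a"),
             ("0x2a0", "64 00 9a 1d 97 02 bd 00")] * 4
    for w in frames_to_windows(trace):
        print(w)
```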
Citations: 0