
Latest publications in IEEE Transactions on Network and Service Management

MPC-Based 5G uRLLC Rate Calculation
IF 4.7, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-09-12. DOI: 10.1109/TNSM.2024.3459634
Jun Liu;Paulo Renato da Costa Mendes;Andreas Wirsen;Daniel Görges
The development of 5G enables communication systems to satisfy heterogeneous service requirements of novel applications. For instance, ultra-reliable low latency communication (uRLLC) is applicable for many safety-critical and latency-sensitive scenarios. Many research papers aim to convert the stringent reliability and latency factors to a static data rate requirement. However, in most industrial scenarios, the communication traffic presents short-term/long-term dependency, burst, and non-stationary characteristics. This makes it more challenging to obtain a tight upper bound for the rate requirement of uRLLC. In this work, we introduce a novel solution based on decentralized model predictive control (MPC), where the dynamic incoming communication traffic and the users’ quality of service (QoS) requirements are reformulated into an up-to-date data rate constraint. Under such assumptions, we consider a use case of the resource allocation problem for a single uRLLC network slice. The allocation task is solved by the successive convex approximation (SCA) algorithm for a more in-depth analysis. The simulation results show that the proposed algorithm can deal with non-stationary communication traffic in real-time, as well as provide good performance with guaranteed delay and reliability requirements.
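The paper's MPC formulation is not reproduced in the abstract; as a hedged illustration of its core idea — turning a delay budget plus observed traffic into an up-to-date data rate constraint — the sketch below computes the smallest constant rate that keeps every arrival within its deadline (the function name, constant-rate service, and FIFO assumptions are mine, not the authors'):

```python
def min_rate_for_deadline(arrivals, delay_budget):
    """Smallest constant service rate r such that all data arriving in
    slot t (0-indexed) is fully served by slot t + delay_budget,
    assuming FIFO service of r units per slot."""
    cum, r = 0.0, 0.0
    for t, a in enumerate(arrivals):
        cum += a
        # everything arrived through slot t must drain by slot t + delay_budget
        r = max(r, cum / (t + 1 + delay_budget))
    return r
```

For arrivals `[4, 4, 4]` and a one-slot budget this yields 3 units/slot; an MPC controller would recompute such a bound over a rolling prediction horizon as new traffic is observed.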
Volume 21, Issue 6, pp. 6770-6795. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10679265
Citations: 0
Meta-Peering: Automating ISP Peering Decision Process
IF 5.3, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-09-12. DOI: 10.1109/tnsm.2024.3459796
Md Ibrahim Ibne Alam, Anindo Mahmood, Prasun K. Dey, Murat Yuksel, Koushik Kar
Citations: 0
Investigating the Dependability of Software-Defined IIoT-Edge Networks for Next-Generation Offshore Wind Farms
IF 4.7, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-09-11. DOI: 10.1109/TNSM.2024.3458447
Agrippina Mwangi;Nadine Kabbara;Patrick Coudray;Mikkel Gryning;Madeleine Gibescu
Next-generation offshore wind farms are increasingly adopting vendor-agnostic software-defined networking (SDN) to oversee their Industrial Internet of Things Edge (IIoT-Edge) networks. The SDN-enabled IIoT-Edge networks present a promising solution for high availability and consistent performance-demanding environments such as offshore wind farm critical infrastructure monitoring, operation, and maintenance. Inevitably, these networks encounter stochastic failures such as random component malfunctions, software malfunctions, CPU overconsumption, and memory leakages. These stochastic failures result in intermittent network service interruptions, disrupting the real-time exchange of critical, latency-sensitive data essential for offshore wind farm operations. Given the criticality of data transfer in offshore wind farms, this paper investigates the dependability of the SDN-enabled IIoT-Edge networks amid the highlighted stochastic failures using a two-pronged approach to: (i) observe the transient behavior using a proof-of-concept simulation testbed and (ii) quantitatively assess the steady-state behavior using a probabilistic Homogeneous Continuous Time Markov Model (HCTMM) under varying failure and repair conditions. The study finds that network throughput decreases during failures in the transient behavior analysis. After quantitatively analyzing 15 case scenarios with varying failure and repair combinations, steady-state availability ranged from 93% to 98%, nearing the industry-standard SLA of 99.999%, guaranteeing up to 3 years of uninterrupted network service.
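The paper's HCTMM is not given in the abstract; as an illustrative stand-in, the steady-state availability of a single component with exponential failure and repair times follows from a two-state continuous-time Markov chain (the function name and the rates used below are hypothetical, not from the paper):

```python
import numpy as np

def steady_state_availability(fail_rate, repair_rate):
    """Two-state CTMC: state 0 = up, state 1 = down.
    Returns the steady-state probability of being up."""
    Q = np.array([[-fail_rate, fail_rate],
                  [repair_rate, -repair_rate]])
    # Solve pi @ Q = 0 subject to sum(pi) = 1 via an augmented system.
    A = np.vstack([Q.T, np.ones(2)])
    b = np.array([0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi[0]
```

With a mean time to failure of 1000 h (`fail_rate` 0.001/h) and a mean repair time of 20 h (`repair_rate` 0.05/h), this gives about 98% availability, matching the closed form μ/(λ+μ) and the upper end of the 93%-98% range reported above.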
Volume 21, Issue 6, pp. 6126-6139. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10677450
Citations: 0
Exploring QUIC Security and Privacy: A Comprehensive Survey on QUIC Security and Privacy Vulnerabilities, Threats, Attacks, and Future Research Directions
IF 4.7, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-09-11. DOI: 10.1109/TNSM.2024.3457858
Y A Joarder;Carol Fung
QUIC is a modern transport protocol aiming to improve Web connection performance and security. It is the transport layer for HTTP/3. QUIC offers numerous advantages over traditional transport layer protocols, such as TCP and UDP, including reduced latency, improved congestion control, connection migration and encryption by default. However, these benefits introduce new security and privacy challenges that need to be addressed, as cyber attackers can exploit weaknesses in the protocol. QUIC’s security and privacy issues have been largely unexplored, as existing research on QUIC primarily focuses on performance upgrades. This survey paper addresses the knowledge gap in QUIC’s security and privacy challenges while proposing directions for future research to enhance its security and privacy. Our comprehensive analysis covers QUIC’s history, architecture, core mechanisms (such as cryptographic design and handshaking process), security model, and threat landscape. We examine QUIC’s significant vulnerabilities, critical security and privacy attacks, emerging threats, advanced security and privacy challenges, and mitigation strategies. Furthermore, we outline future research directions to improve QUIC’s security and privacy. By exploring the protocol’s security and privacy implications, this paper informs decision-making processes and enhances online safety for users and professionals. Our research identifies key risks, vulnerabilities, threats, and attacks targeting QUIC, providing actionable insights to strengthen the protocol. Through this comprehensive analysis, we contribute to developing and deploying a faster, more secure next-generation Internet infrastructure. We hope this investigation serves as a foundation for future Internet security and privacy innovations, ensuring robust protection for modern digital communications.
Volume 21, Issue 6, pp. 6953-6973.
Citations: 0
Efficient Queue Control Policies for Latency-Critical Traffic in Mobile Networks
IF 4.7, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-09-11. DOI: 10.1109/TNSM.2024.3458390
Mohammed Abdullah;Salah Eddine Elayoubi;Tijani Chahed
We propose a novel resource allocation framework for latency-critical traffic, namely Ultra Reliable Low Latency Communications (URLLC), in mobile networks which meets stringent latency and reliability requirements while minimizing the allocated resources. The Quality of Service (QoS) requirement is formulated in terms of the probability that the latency exceeds a maximal allowed budget. We develop a discrete-time queuing model for the system, in the case where the URLLC reservation is fully-flexible, and when the reservation is made on a slot basis while URLLC packets arrive in mini-slots. We then exploit this model to propose a control scheme that dynamically updates the amount of resources to be allocated per time slot so as to meet the QoS requirement. We formulate an optimization framework that derives the policy which achieves the QoS target while minimizing resource consumption and propose offline algorithms that converge to the quasi optimal reservation policy. In the case when traffic is unknown, we propose online algorithms based on stochastic bandits to achieve this aim. Numerical experiments validate our model and confirm the efficiency of our algorithms in terms of meeting the delay violation target at minimal cost.
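The paper's queuing model and bandit algorithms are not reproduced here; a hedged Monte-Carlo sketch of the target metric — the probability that queueing delay exceeds a budget — might look like the following (the Bernoulli-burst arrival process and all parameters are illustrative assumptions, not the paper's model):

```python
import random

def delay_violation_prob(burst_p, burst_size, rate, budget,
                         horizon=100_000, seed=7):
    """Monte-Carlo estimate of P(queueing delay > budget slots) in a
    discrete-time FIFO queue: with probability burst_p, burst_size
    packets arrive in a slot; the server drains 'rate' packets/slot."""
    random.seed(seed)
    q = 0                       # current backlog in packets
    violations = total = 0
    for _ in range(horizon):
        if random.random() < burst_p:
            for _ in range(burst_size):
                q += 1
                total += 1
                # this packet departs after ceil(q / rate) further slots
                if (q + rate - 1) // rate > budget:
                    violations += 1
        q = max(q - rate, 0)
    return violations / max(total, 1)
```

The control scheme described above would tune the reserved `rate` per slot until this estimated probability stays below the QoS target.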
Volume 21, Issue 5, pp. 5076-5090.
Citations: 0
Survivable Payment Channel Networks
IF 4.7, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-09-10. DOI: 10.1109/TNSM.2024.3456229
Yekaterina Podiatchev;Ariel Orda;Ori Rottenstreich
Payment channel networks (PCNs) are a leading method to scale the transaction throughput in cryptocurrencies. Two participants can use a bidirectional payment channel for making multiple mutual payments without committing them to the blockchain. Opening a payment channel is a slow operation that involves an on-chain transaction locking a certain amount of funds. These aspects limit the number of channels that can be opened or maintained. Users may route payments through a multi-hop path and thus avoid opening and maintaining a channel for each new destination. Unlike regular networks, in PCNs capacity depends on the usage patterns and, moreover, channels may become unidirectional. Since payments often fail due to channel depletion, a protection scheme to overcome failures is of interest. We define the stopping time of a payment channel as the time at which the channel becomes depleted. We analyze the mean stopping time of a channel as well as that of a network with a set of channels and examine the stopping time of channels in particular topologies. We then propose a scheme for optimizing the capacity distribution among the channels in order to increase the minimal stopping time in the network. We conduct experiments and demonstrate the accuracy of our model and the efficiency of the proposed optimization scheme.
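As a hedged illustration of the stopping-time analysis (not the authors' model), a channel carrying unit payments in random directions can be treated as a gambler's-ruin random walk on the balance; the mean time until either side is depleted solves a small linear system:

```python
import numpy as np

def mean_stopping_time(capacity, start, p_right=0.5):
    """Expected number of unit payments until the channel balance,
    performing a random walk on {0..capacity}, hits a boundary
    (one direction of the channel becomes depleted)."""
    n = capacity
    A = np.zeros((n - 1, n - 1))   # unknowns: E[T] at states 1..n-1
    b = np.ones(n - 1)             # E[i] = 1 + p*E[i+1] + (1-p)*E[i-1]
    for i in range(n - 1):
        A[i, i] = 1.0
        if i > 0:
            A[i, i - 1] = -(1 - p_right)
        if i < n - 2:
            A[i, i + 1] = -p_right
    t = np.linalg.solve(A, b)
    return t[start - 1]
```

For `p_right = 0.5` this reproduces the closed form k(N-k): a channel of capacity 10 opened with balance 3 depletes after 21 payments in expectation. Skewed payment patterns (`p_right != 0.5`) shorten the stopping time, which is what the capacity-distribution optimization above counteracts.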
Volume 21, Issue 6, pp. 6218-6232.
Citations: 0
Time-Distributed Feature Learning for Internet of Things Network Traffic Classification
IF 4.7, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-09-10. DOI: 10.1109/TNSM.2024.3457579
Yoga Suhas Kuruba Manjunath;Sihao Zhao;Xiao-Ping Zhang;Lian Zhao
Deep learning-based network traffic classification (NTC) techniques, including conventional and class-of-service (CoS) classifiers, are popular tools that aid in the quality of service (QoS) and radio resource management for the Internet of Things (IoT) network. Holistic temporal features consist of inter-, intra-, and pseudo-temporal features within packets, between packets, and among flows, providing the maximum information on network services without depending on defined classes in a problem. Conventional spatio-temporal features in the current solutions extract only space and time information between packets and flows, ignoring the information within packets and flows for IoT traffic. Therefore, we propose a new, efficient, holistic feature extraction method for deep-learning-based NTC using time-distributed feature learning to maximize the accuracy of the NTC. We apply a time-distributed wrapper on deep-learning layers to help extract pseudo-temporal features and spatio-temporal features. Pseudo-temporal features are mathematically complex to explain since, in deep learning, a black box extracts them. However, the features are temporal because of the time-distributed wrapper; therefore, we call them pseudo-temporal features. Since our method is efficient in learning holistic-temporal features, we can extend our method to both conventional and CoS NTC. Our solution proves that pseudo-temporal and spatio-temporal features can significantly improve the robustness and performance of any NTC. We analyze the solution theoretically and experimentally on different real-world datasets. The experimental results show that the holistic-temporal time-distributed feature learning method, on average, is 13.5% more accurate than the state-of-the-art conventional and CoS classifiers.
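A time-distributed wrapper (as in Keras's `TimeDistributed` layer) applies one shared layer to every time step of a sequence; a minimal NumPy sketch of that weight sharing, with illustrative shapes and names not taken from the paper:

```python
import numpy as np

def time_distributed_dense(x, w, b):
    """Apply one shared dense layer to every time step of a batch.
    x: (batch, time, features); w: (features, units); b: (units,).
    The same weights are reused at each time step, which is what a
    TimeDistributed wrapper around a Dense layer does."""
    return np.einsum('btf,fu->btu', x, w) + b
```

For a batch of shape `(2, 3, 4)` and a `(4, 5)` weight matrix, the output has shape `(2, 3, 5)`: each of the 3 time steps is transformed by the same layer, so temporal structure is preserved while features are learned per step.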
Volume 21, Issue 6, pp. 6566-6581.
Citations: 0
DGS: An Efficient Delay-Guaranteed Scheduling Framework for Wireless Deterministic Networking
IF 4.7, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-09-09. DOI: 10.1109/TNSM.2024.3456576
Minghui Chang;Haojun Lv;Yunqi Gao;Bing Hu;Wei Wang;Ze Yang
Deterministic Networking (DetNet) aims to provide an end-to-end ultra-reliable data network with ultra-low latency and jitter. However, implementing DetNet in wireless networks, particularly in the air interface, still faces the challenge of guaranteeing bounded delay. This paper proposes a delay-guaranteed three-layer scheduling framework for DetNet, named Deterministic Guarantee Scheduling (DGS). The top layer calculates the amount of new data entering the queue in each scheduling period and timestamps the data to track its arrival time. Based on the remaining waiting time of each flow’s data volume, the middle layer applies an urgency-based scheduling algorithm, prioritizing the data volumes with the shortest remaining queuing time. The lower layer fine-tunes the scheduling results obtained by the middle layer for actual transmission. We implemented the DGS framework on the 5G-air-simulator platform.
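The middle layer's urgency rule is essentially earliest-deadline-first on remaining queuing time; a minimal sketch of one scheduling slot (the data layout and capacity units are my assumptions, not the paper's):

```python
import heapq

def schedule_slot(flows, capacity):
    """Urgency-first sketch: serve data volumes in order of least
    remaining waiting time until the slot's capacity is exhausted.
    flows: list of (remaining_time, volume) tuples.
    Returns the (remaining_time, amount_served) decisions."""
    heap = list(flows)
    heapq.heapify(heap)            # min-heap on remaining_time
    served = []
    while heap and capacity > 0:
        t, v = heapq.heappop(heap)
        take = min(v, capacity)
        capacity -= take
        served.append((t, take))
        if v > take:
            heapq.heappush(heap, (t, v - take))  # leftover stays queued
    return served
```

With flows `[(3, 5), (1, 4), (2, 2)]` and capacity 8, the most urgent flows are fully served first and the least urgent is partially served, mirroring the middle layer's prioritization.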
Volume 21, Issue 6, pp. 6582-6596.
Citations: 0
Reliable Task Offloading in Sustainable Edge Computing with Imperfect Channel State Information
IF 4.7, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-09-09. DOI: 10.1109/TNSM.2024.3456568
Peng Peng;Wentai Wu;Weiwei Lin;Fan Zhang;Yongheng Liu;Keqin Li
As a promising paradigm, edge computing enhances service provisioning by offloading tasks to powerful servers at the network edge. Meanwhile, Non-Orthogonal Multiple Access (NOMA) and renewable energy sources are increasingly adopted for spectral efficiency and carbon footprint reduction. However, these new techniques inevitably introduce reliability risks to the edge system, mainly because of i) imperfect Channel State Information (CSI), which can misguide offloading decisions and cause transmission outages, and ii) unstable renewable energy supply, which complicates device availability. To tackle these issues, we first establish a system model that measures service reliability based on probabilistic principles for the NOMA-based edge system. As a solution, a Reliable Offloading method with Multi-Agent deep reinforcement learning (ROMA) is proposed. In ROMA, we first reformulate the reliability-critical constraint into a long-term optimization problem via Lyapunov optimization. We discretize the hybrid action space and convert the resource allocation on edge servers into a 0-1 knapsack problem. The optimization problem is then formulated as a Partially Observable Markov Decision Process (POMDP) and addressed by multi-agent proximal policy optimization (PPO). Experimental evaluations demonstrate the superiority of ROMA over existing methods in reducing grid energy costs and enhancing system reliability, achieving Pareto-optimal performance under various settings.
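The per-server allocation step that the abstract converts into a 0-1 knapsack problem has the standard dynamic-programming solution below; treating each task's utility as the item value and its resource demand as the weight is an illustrative framing, not necessarily the paper's exact formulation:

```python
def knapsack_01(values, weights, capacity):
    """Classic 0-1 knapsack dynamic program: maximize total value under
    the capacity budget, taking each item at most once. Returns the best
    value and the chosen item indices."""
    dp = [0] * (capacity + 1)
    keep = [[False] * (capacity + 1) for _ in values]
    for i, (v, w) in enumerate(zip(values, weights)):
        for c in range(capacity, w - 1, -1):  # reverse scan keeps items 0-1
            if dp[c - w] + v > dp[c]:
                dp[c] = dp[c - w] + v
                keep[i][c] = True
    chosen, c = [], capacity
    for i in range(len(values) - 1, -1, -1):  # backtrack the selected set
        if keep[i][c]:
            chosen.append(i)
            c -= weights[i]
    return dp[capacity], sorted(chosen)
```

In the offloading setting, `chosen` would be the subset of tasks a server admits in one decision epoch; ROMA itself solves this inside a multi-agent PPO loop rather than in isolation.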
Peng Peng, Wentai Wu, Weiwei Lin, Fan Zhang, Yongheng Liu, and Keqin Li, "Reliable Task Offloading in Sustainable Edge Computing with Imperfect Channel State Information," IEEE Transactions on Network and Service Management, vol. 21, no. 6, pp. 6423-6436, 2024, DOI: 10.1109/TNSM.2024.3456568.
Citations: 0
Causal Genetic Network Anomaly Detection Method for Imbalanced Data and Information Redundancy
IF 4.7, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-09-06. DOI: 10.1109/TNSM.2024.3455768
Zengri Zeng;Xuhui Liu;Ming Dai;Jian Zheng;Xiaoheng Deng;Detian Zeng;Jie Chen
The proliferation of Internet-connected devices and the complexity of modern network environments have led to the collection of massive and high-dimensional datasets, resulting in substantial information redundancy and sample imbalance issues. These challenges not only hinder the computational efficiency and generalizability of anomaly detection systems but also compromise their ability to detect rare attack types, posing significant security threats. To address these pressing issues, we propose a novel causal genetic network-based anomaly detection method, the CNSGA, which integrates causal inference and the nondominated sorting genetic algorithm-III (NSGA-III). The CNSGA leverages causal reasoning to exclude irrelevant information, focusing solely on the features that are causally related to the outcome labels. Simultaneously, NSGA-III iteratively eliminates redundant information and prioritizes minority samples, thereby enhancing detection performance. To quantitatively assess the improvements achieved, we introduce two indices: a detection balance index and an optimal feature subset index. These indices, along with the causal effect weights, serve as fitness metrics for iterative optimization. The optimized individuals are then selected for subsequent population generation on the basis of nondominated reference point ordering. The experimental results obtained with four real-world network attack datasets demonstrate that the CNSGA significantly outperforms existing methods in terms of overall precision, the imbalance index, and the optimal feature subset index, with maximum increases exceeding 10%, 0.5, and 50%, respectively. Notably, for the CICDDoS2019 dataset, the CNSGA requires only 16-dimensional features to effectively detect more than 70% of all sample types, including 6 more network attack sample types than the other methods detect. 
The significance and impact of this work encompass the ability to eliminate redundant information, increase detection rates, balance attack detection systems, and ensure stability and generalizability. The proposed CNSGA framework represents a significant step forward in developing efficient and accurate anomaly detection systems capable of defending against a wide range of cyber threats in complex network environments.
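The "nondominated reference point ordering" used for survivor selection builds on fast nondominated sorting, the front-ranking step NSGA-III inherits from NSGA-II. A minimal sketch for minimization objectives (the reference-point niching that distinguishes NSGA-III, and the paper's causal-effect fitness weights, are omitted):

```python
def nondominated_sort(points):
    """Rank objective vectors into Pareto fronts (minimization).
    Front 0 holds points dominated by nobody; front k holds points
    dominated only by points in earlier fronts."""
    n = len(points)
    dominates = lambda a, b: (all(x <= y for x, y in zip(a, b))
                              and any(x < y for x, y in zip(a, b)))
    dominated_by = [0] * n               # how many points dominate i
    dominating = [[] for _ in range(n)]  # points that i dominates
    for i in range(n):
        for j in range(n):
            if i != j and dominates(points[i], points[j]):
                dominating[i].append(j)
            elif i != j and dominates(points[j], points[i]):
                dominated_by[i] += 1
    fronts = [[i for i in range(n) if dominated_by[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominating[i]:
                dominated_by[j] -= 1
                if dominated_by[j] == 0:  # all its dominators are ranked
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]  # drop the trailing empty front
```

In the CNSGA setting the objective vectors would combine the detection balance index, the optimal feature subset index, and the causal-effect weights; the sketch only shows the generic front-ranking machinery.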
Zengri Zeng, Xuhui Liu, Ming Dai, Jian Zheng, Xiaoheng Deng, Detian Zeng, and Jie Chen, "Causal Genetic Network Anomaly Detection Method for Imbalanced Data and Information Redundancy," IEEE Transactions on Network and Service Management, vol. 21, no. 6, pp. 6937-6952, 2024, DOI: 10.1109/TNSM.2024.3455768.
Citations: 0
Journal
IEEE Transactions on Network and Service Management