
Latest Publications: IEEE Transactions on Network and Service Management

OptCDU: Optimizing the Computing Data Unit Size for COIN
IF 4.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-30 | DOI: 10.1109/TNSM.2024.3452485
Huanzhuo Wu;Jia He;Jiakang Weng;Giang T. Nguyen;Martin Reisslein;Frank H. P. Fitzek
Computing in the Network (COIN) has the potential to reduce the data traffic and thus the end-to-end latencies for data-rich services. Existing COIN studies have neglected the impact of the size of the data unit that the network nodes compute on. However, similar to the impact of the protocol data unit (packet) size in conventional store-and-forward packet-switching networks, the Computing Data Unit (CDU) size is an elementary parameter that strongly influences the COIN dynamics. We model the end-to-end service time consisting of the network transport delays (for data transmission and link propagation), the loading delays of the data into the computing units, and the computing delays in the network nodes. We derive the optimal CDU size that minimizes the end-to-end service time with gradient descent. We evaluate the impact of the CDU sizing on the amount of data transmitted over the network links and the end-to-end service time for computing the convolutional neural network (CNN) based Yoho and a Deep Neural Network (DNN) based Multi-Layer Perceptron (MLP). We distribute the Yoho and MLP neural modules over up to five network nodes. Our emulation evaluations indicate that COIN strongly reduces the amount of network traffic after the first few computing nodes. Also, the CDU size optimization has a strong impact on the end-to-end service time: CDU sizes that are too small or too large can double the service time. Our emulations validate that our gradient descent minimization correctly identifies the optimal CDU size.
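As a concrete illustration of the gradient-descent idea above, here is a minimal sketch that minimizes a toy end-to-end service-time model over the CDU size. The delay model and every constant in it are invented stand-ins, not the paper's model; the descent runs in log-space purely for numerical conditioning.

```python
import math

def service_time(s, data=1e6, rate=1e8, overhead=1e-4, load=1e-5, comp=5e-9, hops=3):
    """Toy end-to-end service time (seconds) for CDU size s (bytes)."""
    n_cdu = data / s                              # number of CDUs
    per_cdu = s / rate + overhead                 # transmission time + per-CDU overhead
    transport = (n_cdu + hops - 1) * per_cdu      # pipelined store-and-forward transport
    loading = n_cdu * load                        # loading each CDU into a compute unit
    computing = hops * comp * data                # computing delay in the network nodes
    return transport + loading + computing

def gd_log_space(f, s0=1000.0, lr=0.05, steps=4000, rel=1e-6):
    """Adam-style numeric gradient descent on u = log(s) for better conditioning."""
    u, m, v = math.log(s0), 0.0, 0.0
    for t in range(1, steps + 1):
        s, h = math.exp(u), math.exp(u) * rel
        g = (f(s + h) - f(s - h)) / (2 * h) * s   # chain rule: df/du = df/ds * s
        m, v = 0.9 * m + 0.1 * g, 0.999 * v + 0.001 * g * g
        u -= lr * (m / (1 - 0.9 ** t)) / (math.sqrt(v / (1 - 0.999 ** t)) + 1e-12)
    return math.exp(u)

s_opt = gd_log_space(service_time)
print(f"optimal CDU size ~{s_opt:,.0f} B -> service time {service_time(s_opt)*1e3:.2f} ms")
```

Under this toy model, CDU sizes well below the optimum inflate the per-CDU overhead term, while sizes well above it lose pipelining, matching the "too small or too large can double the service time" observation.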
{"title":"OptCDU: Optimizing the Computing Data Unit Size for COIN","authors":"Huanzhuo Wu;Jia He;Jiakang Weng;Giang T. Nguyen;Martin Reisslein;Frank H. P. Fitzek","doi":"10.1109/TNSM.2024.3452485","DOIUrl":"10.1109/TNSM.2024.3452485","url":null,"abstract":"Computing in the Network (COIN) has the potential to reduce the data traffic and thus the end-to-end latencies for data-rich services. Existing COIN studies have neglected the impact of the size of the data unit that the network nodes compute on. However, similar to the impact of the protocol data unit (packet) size in conventional store-and-forward packet-switching networks, the Computing Data Unit (CDU) size is an elementary parameter that strongly influences the COIN dynamics. We model the end-to-end service time consisting of the network transport delays (for data transmission and link propagation), the loading delays of the data into the computing units, and the computing delays in the network nodes. We derive the optimal CDU size that minimizes the end-to-end service time with gradient descent. We evaluate the impact of the CDU sizing on the amount of data transmitted over the network links and the end-to-end service time for computing the convolutional neural network (CNN) based Yoho and a Deep Neural Network (DNN) based Multi-Layer Perceptron (MLP). We distribute the Yoho and MLP neural modules over up to five network nodes. Our emulation evaluations indicate that COIN strongly reduces the amount of network traffic after the first few computing nodes. Also, the CDU size optimization has a strong impact on the end-to-end service time; whereby, CDU sizes that are too small or too large can double the service time. Our emulations validate that our gradient descent minimization correctly identifies the optimal CDU size.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6095-6111"},"PeriodicalIF":4.7,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
GreenShield: Optimizing Firewall Configuration for Sustainable Networks
IF 4.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-30 | DOI: 10.1109/TNSM.2024.3452150
Daniele Bringhenti;Fulvio Valenza
Sustainability is an increasingly critical design feature for modern computer networks. However, green objectives related to energy savings are affected by the application of approximate cybersecurity management techniques. In particular, their impact is evident in distributed firewall configuration, where traditional manual approaches create redundant architectures, leading to avoidable power consumption. This issue has not been addressed by the approaches proposed in the literature to automate firewall configuration so far, because their optimization is not focused on network sustainability. Therefore, this paper presents GreenShield as a possible solution that combines security and green-oriented optimization for firewall configuration. Specifically, GreenShield minimizes both the power consumption related to the firewalls activated in the network and the power consumed by traffic processing, by making firewalls block undesired traffic as near as possible to its sources, while ensuring that the security requested by the network administrator is guaranteed. The framework implementing GreenShield has undergone experimental tests to assess the provided optimization and its scalability performance.
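The green placement intuition can be pictured with a toy sketch: block each undesired flow at the first firewall-capable hop after its source, and pay activation power only for firewalls that end up used. The topology, flows, and power figures below are assumptions for illustration, not GreenShield's actual optimization.

```python
# Toy illustration: block undesired flows as near as possible to their sources,
# activating only the firewalls that end up inspecting traffic.
ACTIVATION_W = 20.0   # fixed power per activated firewall (assumed)
PER_MBPS_W = 0.05     # processing power per Mbps of inspected traffic (assumed)

flows = [             # (path from source towards destination, undesired rate in Mbps)
    (["src1", "a", "b", "core"], 100.0),
    (["src2", "a", "core"], 40.0),
    (["src3", "c", "core"], 250.0),
]
firewall_capable = {"a", "b", "c", "core"}

active = {}           # node -> Mbps of undesired traffic inspected there
for path, rate in flows:
    block_at = next(n for n in path[1:] if n in firewall_capable)  # earliest block point
    active[block_at] = active.get(block_at, 0.0) + rate

power = sum(ACTIVATION_W + PER_MBPS_W * mbps for mbps in active.values())
print(active, f"-> total firewall power ~{power:.1f} W")
```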
{"title":"GreenShield: Optimizing Firewall Configuration for Sustainable Networks","authors":"Daniele Bringhenti;Fulvio Valenza","doi":"10.1109/TNSM.2024.3452150","DOIUrl":"10.1109/TNSM.2024.3452150","url":null,"abstract":"Sustainability is an increasingly critical design feature for modern computer networks. However, green objectives related to energy savings are affected by the application of approximate cybersecurity management techniques. In particular, their impact is evident in distributed firewall configuration, where traditional manual approaches create redundant architectures, leading to avoidable power consumption. This issue has not been addressed by the approaches proposed in literature to automate firewall configuration so far, because their optimization is not focused on network sustainability. Therefore, this paper presents GreenShield as a possible solution that combines security and green-oriented optimization for firewall configuration. Specifically, GreenShield minimizes the power consumption related to firewalls activated in the network while ensuring that the security requested by the network administrator is guaranteed, and the one due to traffic processing by making firewalls to block undesired traffic as near as possible to the sources. The framework implementing GreenShield has undergone experimental tests to assess the provided optimization and its scalability performance.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6909-6923"},"PeriodicalIF":4.7,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10660559","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Multipartite Entanglement Distribution in the Quantum Internet: Knowing When to Stop!
IF 4.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-30 | DOI: 10.1109/TNSM.2024.3452326
Angela Sara Cacciapuoti;Jessica Illiano;Michele Viscardi;Marcello Caleffi
Multipartite entanglement distribution is a key functionality of the Quantum Internet. However, quantum entanglement is very fragile, easily degraded by decoherence, which strictly constrains the time horizon within which the distribution has to be completed. This, coupled with the quantum noise irremediably impinging on the channels utilized for entanglement distribution, may imply the need to attempt the distribution process multiple times before the targeted network nodes successfully share the desired entangled state, and there is no guarantee that this is accomplished within the time horizon dictated by the coherence times. As a consequence, in noisy scenarios requiring multiple distribution attempts, it may be convenient to stop the distribution process early. In this paper, we take steps in the direction of knowing when to stop the entanglement distribution by developing a theoretical framework able to capture the quantum noise effects. Specifically, we first prove that the entanglement distribution process can be modeled as a Markov decision process. Then, we prove that the optimal decision policy exhibits attractive features, which we exploit to reduce the computational complexity. The developed framework provides quantum network designers with flexible tools to optimally engineer the design parameters of the entanglement distribution process.
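The optimal-stopping flavor of the result can be sketched as a small finite-horizon Markov decision process solved by backward induction. The success probability, fidelity decay, attempt cost, and horizon below are illustrative assumptions rather than the paper's model.

```python
# Backward induction for a toy optimal-stopping model of repeated distribution
# attempts: p = per-attempt success probability, gamma = per-attempt fidelity
# decay (decoherence), c = cost of one attempt, T = attempts fitting in the
# coherence window. All values are invented for illustration.
p, gamma, c, T = 0.3, 0.9, 0.05, 25

V = [0.0] * (T + 2)           # V[t]: optimal expected value from attempt t onward
policy = [False] * (T + 1)    # True = make attempt t, False = stop
for t in range(T, 0, -1):
    keep = -c + p * gamma ** t + (1 - p) * V[t + 1]  # value of trying once more
    policy[t] = keep > 0.0                           # stopping is worth 0
    V[t] = max(keep, 0.0)

stop_at = next((t for t in range(1, T + 1) if not policy[t]), None)
print("keep attempting while t <", stop_at, "| expected value V[1] =", round(V[1], 4))
```

Because the expected gain of one more attempt shrinks as fidelity decays, the optimal policy is a threshold: keep attempting up to some attempt index, then stop, which mirrors the attractive structure the abstract mentions.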
{"title":"Multipartite Entanglement Distribution in the Quantum Internet: Knowing When to Stop!","authors":"Angela Sara Cacciapuoti;Jessica Illiano;Michele Viscardi;Marcello Caleffi","doi":"10.1109/TNSM.2024.3452326","DOIUrl":"10.1109/TNSM.2024.3452326","url":null,"abstract":"Multipartite entanglement distribution is a key functionality of the Quantum Internet. However, quantum entanglement is very fragile, easily degraded by decoherence, which strictly constraints the time horizon within the distribution has to be completed. This, coupled with the quantum noise irremediably impinging on the channels utilized for entanglement distribution, may imply the need to attempt the distribution process multiple times before the targeted network nodes successfully share the desired entangled state. And there is no guarantee that this is accomplished within the time horizon dictated by the coherence times. As a consequence, in noisy scenarios requiring multiple distribution attempts, it may be convenient to stop the distribution process early. In this paper, we take steps in the direction of knowing when to stop the entanglement distribution by developing a theoretical framework, able to capture the quantum noise effects. Specifically, we first prove that the entanglement distribution process can be modeled as a Markov decision process. Then, we prove that the optimal decision policy exhibits attractive features, which we exploit to reduce the computational complexity. The developed framework provides quantum network designers with flexible tools to optimally engineer the design parameters of the entanglement distribution process.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6041-6058"},"PeriodicalIF":4.7,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10660502","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Fairness-Aware VNF Mapping and Scheduling in Satellite Edge Networks for Mission-Critical Applications
IF 4.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-29 | DOI: 10.1109/TNSM.2024.3452031
Haftay Gebreslasie Abreha;Houcine Chougrani;Ilora Maity;Youssouf Drif;Christos Politis;Symeon Chatzinotas
Satellite Edge Computing (SEC) is seen as a promising solution for deploying network functions in orbit to provide ubiquitous services with low latency and bandwidth. Software Defined Networks (SDN) and Network Function Virtualization (NFV) enable SEC to manage and deploy services more flexibly. In this paper, we study a dynamic and topology-aware VNF mapping and scheduling strategy within an SDN/NFV-enabled SEC infrastructure. Our focus is on meeting the stringent requirements of mission-critical (MC) applications, recognizing their significance in both satellite-to-satellite and edge-to-satellite communications while ensuring service delay margin fairness across various time-sensitive service requests. We formulate the VNF mapping and scheduling problem as an Integer Nonlinear Programming problem (INLP), with the objective of minimax fairness among specified requests while considering dynamic satellite network topology, traffic, and resource constraints. We then propose two algorithms for solving the INLP problem: Fairness-Aware Greedy Algorithm for Dynamic VNF Mapping and Scheduling (FAGD_MASC) and Fairness-Aware Simulated Annealing-Based Algorithm for Dynamic VNF Mapping and Scheduling (FASD_MASC) which are suitable for low and high service arrival rates, respectively. Our extensive simulations demonstrate that both FAGD_MASC and FASD_MASC approaches are very close to the optimization-based solution and outperform the benchmark solution in terms of service acceptance rates.
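As a rough sketch of the minimax-fairness objective (a greedy heuristic, not the INLP or the proposed FAGD_MASC/FASD_MASC algorithms), the following places each request's VNF so that the smallest delay margin stays as large as possible; the nodes, capacities, delays, and deadlines are invented.

```python
# Greedy sketch of minimax delay-margin fairness across service requests.
nodes = {"sat1": 4.0, "sat2": 7.0, "edge": 2.0}   # per-node processing delay (ms)
cpu = {"sat1": 2, "sat2": 2, "edge": 1}           # remaining VNF slots per node
requests = [                                      # (name, deadline ms, path delay per node)
    ("mc1", 12.0, {"sat1": 5.0, "sat2": 3.0, "edge": 8.0}),
    ("mc2", 10.0, {"sat1": 2.0, "sat2": 6.0, "edge": 3.0}),
    ("bg1", 25.0, {"sat1": 9.0, "sat2": 8.0, "edge": 12.0}),
]

placement, margins = {}, {}
# Serve the request with the least slack first, then give it the best node left.
for name, deadline, path in sorted(
        requests, key=lambda r: min(r[1] - d - nodes[n] for n, d in r[2].items())):
    best = max((n for n in nodes if cpu[n] > 0),
               key=lambda n: deadline - path[n] - nodes[n])
    cpu[best] -= 1
    placement[name] = best
    margins[name] = deadline - path[best] - nodes[best]

print(placement, "| min delay margin:", min(margins.values()), "ms")
```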
{"title":"Fairness-Aware VNF Mapping and Scheduling in Satellite Edge Networks for Mission-Critical Applications","authors":"Haftay Gebreslasie Abreha;Houcine Chougrani;Ilora Maity;Youssouf Drif;Christos Politis;Symeon Chatzinotas","doi":"10.1109/TNSM.2024.3452031","DOIUrl":"10.1109/TNSM.2024.3452031","url":null,"abstract":"Satellite Edge Computing (SEC) is seen as a promising solution for deploying network functions in orbit to provide ubiquitous services with low latency and bandwidth. Software Defined Networks (SDN) and Network Function Virtualization (NFV) enable SEC to manage and deploy services more flexibly. In this paper, we study a dynamic and topology-aware VNF mapping and scheduling strategy within an SDN/NFV-enabled SEC infrastructure. Our focus is on meeting the stringent requirements of mission-critical (MC) applications, recognizing their significance in both satellite-to-satellite and edge-to-satellite communications while ensuring service delay margin fairness across various time-sensitive service requests. We formulate the VNF mapping and scheduling problem as an Integer Nonlinear Programming problem (\u0000<monospace>INLP</monospace>\u0000), with the objective of \u0000<italic>minimax</i>\u0000 fairness among specified requests while considering dynamic satellite network topology, traffic, and resource constraints. We then propose two algorithms for solving the \u0000<monospace>INLP</monospace>\u0000 problem: Fairness-Aware Greedy Algorithm for Dynamic VNF Mapping and Scheduling (\u0000<monospace>FAGD_MASC</monospace>\u0000) and Fairness-Aware Simulated Annealing-Based Algorithm for Dynamic VNF Mapping and Scheduling (\u0000<monospace>FASD_MASC</monospace>\u0000) which are suitable for low and high service arrival rates, respectively. Our extensive simulations demonstrate that both \u0000<monospace>FAGD_MASC</monospace>\u0000 and \u0000<monospace>FASD_MASC</monospace>\u0000 approaches are very close to the optimization-based solution and outperform the benchmark solution in terms of service acceptance rates.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6716-6730"},"PeriodicalIF":4.7,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10659145","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Data Aggregation Management With Self-Sovereign Identity in Decentralized Networks
IF 4.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-29 | DOI: 10.1109/TNSM.2024.3451995
Yepeng Ding;Junwei Yu;Shaowen Li;Hiroyuki Sato;Maro G. Machizawa
Data aggregation management is paramount in data-driven distributed systems. Conventional solutions premised on centralized networks grapple with security challenges concerning authenticity, confidentiality, integrity, and privacy. Recently, distributed ledger technology has gained popularity for its decentralized nature to facilitate overcoming these challenges. Nevertheless, insufficient identity management introduces risks like impersonation and unauthorized access. In this paper, we propose Degator, a data aggregation management framework that leverages self-sovereign identity and functions in decentralized networks to address security concerns and mitigate identity-related risks. We formulate fully decentralized aggregation protocols for data persistence and acquisition in Degator. Degator is compatible with existing data persistence methods, and supports cost-effective data acquisition minimizing dependency on distributed ledgers. We also conduct a formal analysis to elucidate the mechanism of Degator to tackle current security challenges in conventional data aggregation management. Furthermore, we showcase the applicability of Degator through its application in the management of decentralized neuroscience data aggregation and demonstrate its scalability via performance evaluation.
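A minimal sketch of identity-checked aggregation in this spirit follows; HMAC-SHA256 stands in for real DID signatures and an in-memory dict stands in for the ledger-backed registry, both of which are simplifying assumptions.

```python
# Accept a data unit only if it verifies against the submitter's registered key,
# which is the basic defense against impersonation and unauthorized access.
import hashlib, hmac, json

registry = {                                   # DID -> verification key (assumed)
    "did:example:lab-a": b"key-a",
    "did:example:lab-b": b"key-b",
}

def _digest(did, payload, key):
    msg = json.dumps({"did": did, "payload": payload}, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def sign(did, payload, key):
    return {"did": did, "payload": payload, "sig": _digest(did, payload, key)}

def accept(record):
    key = registry.get(record["did"])
    if key is None:
        return False                           # unknown identity: reject outright
    expected = _digest(record["did"], record["payload"], key)
    return hmac.compare_digest(record["sig"], expected)

rec = sign("did:example:lab-a", {"eeg_mean": 0.42}, b"key-a")
print(accept(rec))                                  # True: genuine submitter
print(accept({**rec, "did": "did:example:lab-b"}))  # False: impersonation attempt
```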
{"title":"Data Aggregation Management With Self-Sovereign Identity in Decentralized Networks","authors":"Yepeng Ding;Junwei Yu;Shaowen Li;Hiroyuki Sato;Maro G. Machizawa","doi":"10.1109/TNSM.2024.3451995","DOIUrl":"10.1109/TNSM.2024.3451995","url":null,"abstract":"Data aggregation management is paramount in data-driven distributed systems. Conventional solutions premised on centralized networks grapple with security challenges concerning authenticity, confidentiality, integrity, and privacy. Recently, distributed ledger technology has gained popularity for its decentralized nature to facilitate overcoming these challenges. Nevertheless, insufficient identity management introduces risks like impersonation and unauthorized access. In this paper, we propose Degator, a data aggregation management framework that leverages self-sovereign identity and functions in decentralized networks to address security concerns and mitigate identity-related risks. We formulate fully decentralized aggregation protocols for data persistence and acquisition in Degator. Degator is compatible with existing data persistence methods, and supports cost-effective data acquisition minimizing dependency on distributed ledgers. We also conduct a formal analysis to elucidate the mechanism of Degator to tackle current security challenges in conventional data aggregation management. Furthermore, we showcase the applicability of Degator through its application in the management of decentralized neuroscience data aggregation and demonstrate its scalability via performance evaluation.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6174-6189"},"PeriodicalIF":4.7,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10659216","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Is There a DDoS?: System+Application Variable Monitoring to Ascertain the Attack Presence
IF 4.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-29 | DOI: 10.1109/TNSM.2024.3451613
Gunjan Kumar Saini;Gaurav Somani
The state of the art has numerous contributions which focus on combating DDoS attacks. We argue that mitigation methods are only useful if the victim service or the mitigation method can ascertain the presence of a DDoS attack. In many past solutions, the authors decide the presence of DDoS using quick and dirty checks. However, precise mechanisms are still needed so that accurate decisions about DDoS mitigation can be made. In this work, we propose a method for detecting the presence of DDoS attacks using system variables available at the server or victim server operating system. To achieve this, we propose a machine learning based detection model involving three steps. In the first step, we monitored 14 different system and application variables/characteristics with and without a variety of DDoS attacks. In the second step, we trained a machine learning model with the monitored data of all the selected variables. In the final step, our approach uses artificial neural network (ANN) and random forest (RF) based approaches to detect the presence of DDoS attacks. Our presence identification approach gives a detection accuracy of 88%-95% for massive attacks, 65%-77% for mixed traffic having a mixture of low-rate attack and benign requests, 58%-60% for flashcrowd, 76%-81% for mixed traffic having a mixture of massive attack and benign traffic, and 58%-64% for low-rate attacks, with a detection time of 4-5 seconds.
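A hedged sketch of the detection step follows, training the two classifier families the abstract names (a random forest and a neural network) on monitored variables; the synthetic four-variable samples (CPU %, memory %, packets/s, open sockets) merely stand in for the paper's 14 monitored system and application variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
# Synthetic "no attack" vs "attack" samples over four stand-in variables.
benign = rng.normal([30, 40, 1e4, 200], [5, 5, 2e3, 40], size=(n, 4))
attack = rng.normal([85, 70, 8e4, 900], [8, 10, 1e4, 150], size=(n, 4))
X = StandardScaler().fit_transform(np.vstack([benign, attack]))  # scale for the MLP
y = np.array([0] * n + [1] * n)            # 0 = no DDoS, 1 = DDoS present

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
for model in (RandomForestClassifier(n_estimators=100, random_state=0),
              MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)):
    model.fit(Xtr, ytr)
    print(type(model).__name__, "accuracy:", round(model.score(Xte, yte), 3))
```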
{"title":"Is There a DDoS?: System+Application Variable Monitoring to Ascertain the Attack Presence","authors":"Gunjan Kumar Saini;Gaurav Somani","doi":"10.1109/TNSM.2024.3451613","DOIUrl":"10.1109/TNSM.2024.3451613","url":null,"abstract":"The state of the art has numerous contributions which focus on combating the DDoS attacks. We argue that the mitigation methods are only useful if the victim service or the mitigation method can ascertain the presence of a DDoS attack. In many of the past solutions, the authors decide the presence of DDoS using quick and dirty checks. However, precise mechanisms are still needed so that the accurate decisions about DDoS mitigation can be made. In this work, we propose a method for detecting the presence of DDoS attacks using system variables available at the server or victim server operating system. To achieve this, we propose a machine learning based detection model in which there are three steps involved. In the first step, we monitored 14 different systems and application variables/ characteristics with and without a variety of DDoS attacks. In the second step, we trained machine learning model with monitored data of all the selected variables. In the final step, our approach uses the artificial neural network (ANN) and random forest (RF) based approaches to detect the presence of DDoS attacks. Our presence identification approach gives a detection accuracy of 88%-95% for massive attacks, 65%-77% for mixed traffic having a mixture of low-rate attack and benign requests, 58%-60% for flashcrowd, 76%-81% for mixed traffic having a mixture of massive attack and benign traffic and 58%-64% for low rate attacks with a detection time of 4-5 seconds.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6899-6908"},"PeriodicalIF":4.7,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
QoE Estimation Across Different Cloud Gaming Services Using Transfer Learning
IF 4.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-28 | DOI: 10.1109/TNSM.2024.3451300
Marcos Carvalho;Daniel Soares;Daniel Fernandes Macedo
Cloud Gaming (CG) has become one of the most important cloud-based services in recent years by providing games to different end-network devices, such as personal computers (wired network) and smartphones/tablets (mobile network). CG services are challenging for network operators, since they demand rigorous network Quality of Service (QoS). Nevertheless, ensuring proper Quality of Experience (QoE) keeps end-users engaged in CG services. However, several factors influence users' experience, such as context (i.e., game type/players) and the end-network type (wired/mobile). In this case, Machine Learning (ML) models have achieved the state of the art on end-users' QoE estimation. Despite that, traditional ML models demand large amounts of data and assume that training and test data share the same distribution, which can make ML models hard to generalize to scenarios other than those they were trained on. This work employs Transfer Learning (TL) techniques to estimate QoE across different cloud gaming services (wired/mobile) and contexts (game type/players). We improved our previous work by performing a subjective QoE assessment with real users playing new games on a mobile cloud gaming testbed. Results show that transfer learning can decrease the average MSE by at least 34.7% compared to the source (wired) model's performance on mobile cloud gaming, and by 81.5% compared with the model trained from scratch.
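One common way to realize such a transfer step, sketched here under stated assumptions rather than as the paper's exact procedure, is to copy the source (wired) model's weights, freeze its feature layers, and fine-tune only the head on the scarce mobile samples.

```python
import torch
import torch.nn as nn

class QoEModel(nn.Module):
    """Illustrative QoE regressor; the architecture and feature count are assumed."""
    def __init__(self, n_features=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                      nn.Linear(64, 32), nn.ReLU())
        self.head = nn.Linear(32, 1)           # predicted MOS score

    def forward(self, x):
        return self.head(self.features(x))

source = QoEModel()                            # assume: trained on the wired dataset
target = QoEModel()
target.load_state_dict(source.state_dict())    # transfer the source weights
for p in target.features.parameters():
    p.requires_grad = False                    # freeze the shared representation

opt = torch.optim.Adam(target.head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
X_mob, y_mob = torch.randn(128, 10), torch.rand(128, 1) * 4 + 1  # stand-in MOS in [1, 5]
for _ in range(200):                           # fine-tune the head only
    opt.zero_grad()
    loss = loss_fn(target(X_mob), y_mob)
    loss.backward()
    opt.step()
print("fine-tuned training MSE:", round(loss.item(), 4))
```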
{"title":"QoE Estimation Across Different Cloud Gaming Services Using Transfer Learning","authors":"Marcos Carvalho;Daniel Soares;Daniel Fernandes Macedo","doi":"10.1109/TNSM.2024.3451300","DOIUrl":"10.1109/TNSM.2024.3451300","url":null,"abstract":"Cloud Gaming (CG) has become one of the most important cloud-based services in recent years by providing games to different end-network devices, such as personal computers (wired network) and smartphones/tablets (mobile network). CG services stand challenging for network operators since this service demands rigorous network Quality of Services (QoS). Nevertheless, ensuring proper Quality of Experience (QoE) keeps the end-users engaged in the CG services. However, several factors influence users’ experience, such as context (i.e., game type/players) and the end-network type (wired/mobile). In this case, Machine Learning (ML) models have achieved the state-of-the-art on the end-users’ QoE estimation. Despite that, traditional ML models demand a larger amount of data and assume that the training and test have the same distribution, which can make the ML models hard to generalize to other scenarios from what was trained. This work employs Transfer Learning (TL) techniques to create QoE estimation over different cloud gaming services (wired/mobile) and contexts (game type/players). We improved our previous work by performing a subjective QoE assessment with real users playing new games on a mobile cloud gaming testbed. Results show that transfer learning can decrease the average MSE error by at least 34.7% compared to the source model (wired) performance on the mobile cloud gaming and to 81.5% compared with the model trained from scratch.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"5935-5946"},"PeriodicalIF":4.7,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Energy Efficient UAV-Assisted IoT Data Collection: A Graph-Based Deep Reinforcement Learning Approach
IF 4.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-28 | DOI: 10.1109/TNSM.2024.3450964
Qianqian Wu;Qiang Liu;Wenliang Zhu;Zefan Wu
With the advancements in technologies such as 5G, Unmanned Aerial Vehicles (UAVs) have exhibited their potential in various application scenarios, including wireless coverage, search operations, and disaster response. In this paper, we consider the utilization of a group of UAVs as aerial base stations (BS) to collect data from IoT sensor devices. The objective is to maximize the volume of collected data while simultaneously enhancing the geographical fairness among these points of interest, all within the constraints of limited energy resources. Therefore, we propose a deep reinforcement learning (DRL) method based on Graph Attention Networks (GAT), referred to as “GADRL”. GADRL utilizes graph convolutional neural networks to extract spatial correlations among multiple UAVs and makes decisions in a distributed manner under the guidance of DRL. Furthermore, we employ Long Short-Term Memory to establish memory units for storing and utilizing historical information. Numerical results demonstrate that GADRL consistently outperforms four baseline methods, validating its computational efficiency.
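A single-head graph-attention layer of the kind such a method could use to mix each UAV's state with its neighbors' before action selection is sketched below; the state dimension, adjacency, and single-head design are illustrative assumptions, not GADRL's exact network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttention(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer

    def forward(self, h, adj):
        z = self.W(h)                                     # (N, out_dim)
        N = z.size(0)
        pairs = torch.cat([z.unsqueeze(1).expand(N, N, -1),
                           z.unsqueeze(0).expand(N, N, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1))       # raw scores (N, N)
        e = e.masked_fill(adj == 0, float("-inf"))        # attend to neighbors only
        return torch.softmax(e, dim=-1) @ z               # aggregated UAV features

h = torch.randn(5, 8)      # 5 UAVs, 8 state features each (position, battery, ...)
adj = torch.eye(5) + torch.diag(torch.ones(4), 1) + torch.diag(torch.ones(4), -1)
print(GraphAttention(8, 16)(h, adj).shape)                # torch.Size([5, 16])
```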
{"title":"Energy Efficient UAV-Assisted IoT Data Collection: A Graph-Based Deep Reinforcement Learning Approach","authors":"Qianqian Wu;Qiang Liu;Wenliang Zhu;Zefan Wu","doi":"10.1109/TNSM.2024.3450964","DOIUrl":"10.1109/TNSM.2024.3450964","url":null,"abstract":"With the advancements in technologies such as 5G, Unmanned Aerial Vehicles (UAVs) have exhibited their potential in various application scenarios, including wireless coverage, search operations, and disaster response. In this paper, we consider the utilization of a group of UAVs as aerial base stations (BS) to collect data from IoT sensor devices. The objective is to maximize the volume of collected data while simultaneously enhancing the geographical fairness among these points of interest, all within the constraints of limited energy resources. Therefore, we propose a deep reinforcement learning (DRL) method based on Graph Attention Networks (GAT), referred to as “GADRL”. GADRL utilizes graph convolutional neural networks to extract spatial correlations among multiple UAVs and makes decisions in a distributed manner under the guidance of DRL. Furthermore, we employ Long Short-Term Memory to establish memory units for storing and utilizing historical information. Numerical results demonstrate that GADRL consistently outperforms four baseline methods, validating its computational efficiency.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6082-6094"},"PeriodicalIF":4.7,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Distributed Learning Framework for eMBB-URLLC Multiplexing in Open Radio Access Networks
IF 4.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-28 | DOI: 10.1109/TNSM.2024.3451295
Madyan Alsenwi;Eva Lagunas;Symeon Chatzinotas
Next-generation (NextG) cellular networks are expected to evolve towards virtualization and openness, incorporating reprogrammable components that facilitate intelligence and real-time analytics. This paper builds on these innovations to address the network slicing problem in multi-cell open radio access wireless networks, focusing on two key services: enhanced Mobile BroadBand (eMBB) and Ultra-Reliable Low Latency Communications (URLLC). A stochastic resource allocation problem is formulated with the goal of balancing the average eMBB data rate and its variance, while ensuring URLLC constraints. A distributed learning framework based on the Deep Reinforcement Learning (DRL) technique is developed following the Open Radio Access Networks (O-RAN) architectures to solve the formulated optimization problem. The proposed learning approach enables training a global machine learning model at a central cloud server and sharing it with edge servers for executions. Specifically, deep learning agents are distributed at network edge servers and embedded within the Near-Real-Time Radio access network Intelligent Controller (Near-RT RIC) to collect network information and perform online executions. A global deep learning model is trained by a central training engine embedded within the Non-Real-Time RIC (Non-RT RIC) at the central server using received data from edge servers. The performed simulation results validate the efficacy of the proposed algorithm in achieving URLLC constraints while maintaining the eMBB Quality of Service (QoS).
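As an illustration of the optimization target, a per-slot reward that balances the average eMBB rate against its variance while penalizing URLLC violations might look as follows; the weights kappa and beta and the violation count are assumptions, not the paper's exact formulation.

```python
import numpy as np

def slicing_reward(embb_rates, urllc_violations, kappa=0.5, beta=10.0):
    """embb_rates: per-cell eMBB rates this slot; urllc_violations: count of misses."""
    return np.mean(embb_rates) - kappa * np.var(embb_rates) - beta * urllc_violations

print(slicing_reward(np.array([12.0, 9.5, 11.2]), urllc_violations=0))   # balanced cells
print(slicing_reward(np.array([15.0, 3.0, 11.0]), urllc_violations=2))   # unfair + violations
```

In the O-RAN split the abstract describes, edge agents in the Near-RT RIC would collect such rewards during online execution, while the central trainer in the Non-RT RIC updates the shared policy from the aggregated experience.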
{"title":"Distributed Learning Framework for eMBB-URLLC Multiplexing in Open Radio Access Networks","authors":"Madyan Alsenwi;Eva Lagunas;Symeon Chatzinotas","doi":"10.1109/TNSM.2024.3451295","DOIUrl":"10.1109/TNSM.2024.3451295","url":null,"abstract":"Next-generation (NextG) cellular networks are expected to evolve towards virtualization and openness, incorporating reprogrammable components that facilitate intelligence and real-time analytics. This paper builds on these innovations to address the network slicing problem in multi-cell open radio access wireless networks, focusing on two key services: enhanced Mobile BroadBand (eMBB) and Ultra-Reliable Low Latency Communications (URLLC). A stochastic resource allocation problem is formulated with the goal of balancing the average eMBB data rate and its variance, while ensuring URLLC constraints. A distributed learning framework based on the Deep Reinforcement Learning (DRL) technique is developed following the Open Radio Access Networks (O-RAN) architectures to solve the formulated optimization problem. The proposed learning approach enables training a global machine learning model at a central cloud server and sharing it with edge servers for executions. Specifically, deep learning agents are distributed at network edge servers and embedded within the Near-Real-Time Radio access network Intelligent Controller (Near-RT RIC) to collect network information and perform online executions. A global deep learning model is trained by a central training engine embedded within the Non-Real-Time RIC (Non-RT RIC) at the central server using received data from edge servers. The performed simulation results validate the efficacy of the proposed algorithm in achieving URLLC constraints while maintaining the eMBB Quality of Service (QoS).","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 5","pages":"5718-5732"},"PeriodicalIF":4.7,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Dynamic Flow Scheduling for DNN Training Workloads in Data Centers
IF 4.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-27 | DOI: 10.1109/TNSM.2024.3450670
Xiaoyang Zhao;Chuan Wu;Xia Zhu
Distributed deep learning (DL) training constitutes a significant portion of workloads in modern data centers that are equipped with high computational capacities, such as GPU servers. However, frequent tensor exchanges among workers during distributed deep neural network (DNN) training can result in heavy traffic in the data center network, leading to congestion at server NICs and in the switching network. Unfortunately, none of the existing DL communication libraries supports active flow control to optimize tensor transmission performance, instead relying on passive adjustments to the congestion window or sending rate based on packet loss or delay. To address this issue, we propose a per-host flow scheduler that dynamically tunes the sending rates of outgoing tensor flows from each server, maximizing network bandwidth utilization and expediting job training progress. Our scheduler comprises two main components: a monitoring module that interacts with state-of-the-art communication libraries supporting parameter server and all-reduce paradigms to track the training progress of DNN jobs, and a congestion control protocol that receives in-network feedback from traversed switches and computes optimized flow sending rates. For data centers where switches are not programmable, we provide a software solution that emulates switch behavior and interacts with the scheduler on servers. Experiments with a real-world GPU testbed and trace-driven simulations demonstrate that our scheduler outperforms common rate control protocols and representative learning-based schemes in various settings.
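The per-host control loop can be pictured as an AIMD-style rate update driven by in-network feedback; the constants and the ECN-style congestion marks below are assumptions, not the paper's protocol.

```python
def update_rate(rate_mbps, congested, min_rate=10.0, max_rate=10_000.0,
                add_step=50.0, mult_dec=0.7):
    """Additive increase while switches report headroom, multiplicative decrease
    when the in-network feedback signals congestion (all constants assumed)."""
    if congested:
        return max(min_rate, rate_mbps * mult_dec)
    return min(max_rate, rate_mbps + add_step)

rate = 1000.0
for mark in [False, False, True, False, True, False]:   # ECN-style feedback trace
    rate = update_rate(rate, mark)
    print(f"tensor-flow rate -> {rate:.1f} Mbps")
```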
{"title":"Dynamic Flow Scheduling for DNN Training Workloads in Data Centers","authors":"Xiaoyang Zhao;Chuan Wu;Xia Zhu","doi":"10.1109/TNSM.2024.3450670","DOIUrl":"10.1109/TNSM.2024.3450670","url":null,"abstract":"Distributed deep learning (DL) training constitutes a significant portion of workloads in modern data centers that are equipped with high computational capacities, such as GPU servers. However, frequent tensor exchanges among workers during distributed deep neural network (DNN) training can result in heavy traffic in the data center network, leading to congestion at server NICs and in the switching network. Unfortunately, none of the existing DL communication libraries support active flow control to optimize tensor transmission performance, instead relying on passive adjustments to the congestion window or sending rate based on packet loss or delay. To address this issue, we propose a flow scheduler per host that dynamically tunes the sending rates of outgoing tensor flows from each server, maximizing network bandwidth utilization and expediting job training progress. Our scheduler comprises two main components: a monitoring module that interacts with state-of-the-art communication libraries supporting parameter server and all-reduce paradigms to track the training progress of DNN jobs, and a congestion control protocol that receives in-network feedback from traversing switches and computes optimized flow sending rates. For data centers where switches are not programmable, we provide a software solution that emulates switch behavior and interacts with the scheduler on servers. Experiments with real-world GPU testbed and trace-driven simulation demonstrate that our scheduler outperforms common rate control protocols and representative learning-based schemes in various settings.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6643-6657"},"PeriodicalIF":4.7,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0