
IEEE Transactions on Cloud Computing: Latest Publications

Delay-Sensitive Task Offloading Optimization by Geometric Programming
IF 5.3 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-28 | DOI: 10.1109/TCC.2024.3406384
Mohammad Fathi;Mohammad Saroughi;Azarhedi Zareie
Mobile cloud computing is an emerging technology that addresses the resource limitations of mobile terminals. These terminals need to satisfy the performance requirements of emerging resource-consuming applications. Among these applications, delay-sensitive applications, which require low execution times, are becoming increasingly popular. Satisfying the delay requirements of these applications is the main objective of task offloading in mobile cloud computing. In this paper, considering a network of wireless and wired infrastructures, a non-convex resource allocation problem is formulated to provide a fair delay for tasks offloaded by delay-sensitive applications. Both transmission and computation delays are included in the formulation of the offloading delay. To tackle the problem's complexity, mobile terminals are assigned to radio access networks and cloud servers using proposed greedy assignment solutions. The derived problem, which is a geometric programming problem, is then solved using convex programming. The performance of the proposed solution is evaluated against the number of mobile terminals for different values of radio network bandwidth, workloads, and CPU-cycle demands at the mobile terminals. Numerical results demonstrate the effectiveness of the proposed solution in decreasing the offloading delay compared with similar schemes.
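
As a rough, self-contained illustration of the delay model and greedy assignment described above (not the authors' formulation or code), the Python sketch below computes an offloading delay as transmission delay plus computation delay and greedily assigns each terminal to the radio network and cloud server pair with the smallest current delay; all rates, data sizes, and capacities are invented.

```python
# Illustrative sketch: offloading delay = transmission delay + computation delay,
# with a greedy assignment of terminals to (radio network, server) pairs.

def offload_delay(data_bits, rate_bps, cycles, capacity_hz):
    """Transmission delay plus computation delay for one offloaded task."""
    return data_bits / rate_bps + cycles / capacity_hz

def greedy_assign(terminals, networks, servers):
    """terminals: list of (data_bits, cycles); networks: {name: rate_bps};
    servers: {name: capacity_hz}. Greedily pick the lowest-delay pair per terminal."""
    assignment = []
    for data_bits, cycles in terminals:
        best = min(
            ((offload_delay(data_bits, rate, cycles, cap), net, srv)
             for net, rate in networks.items() for srv, cap in servers.items()),
            key=lambda t: t[0],
        )
        assignment.append({"network": best[1], "server": best[2], "delay_s": best[0]})
    return assignment

if __name__ == "__main__":
    terminals = [(2e6, 5e8), (1e6, 1e9)]        # (bits to send, CPU cycles) -- made up
    networks = {"RAN-1": 2e6, "RAN-2": 5e6}     # uplink rates in bit/s -- made up
    servers = {"cloud-A": 3e9, "cloud-B": 1e9}  # capacities in cycles/s -- made up
    for entry in greedy_assign(terminals, networks, servers):
        print(entry)
```
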
IEEE Transactions on Cloud Computing, vol. 12, no. 3, pp. 889-896. Citations: 0
Improving Data Locality of Tasks by Executor Allocation in Spark Computing Environment
IF 5.3 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-28 | DOI: 10.1109/TCC.2024.3406041
Zhongming Fu;Mengsi He;Yang Yi;Zhuo Tang
The concept of data locality is crucial for distributed systems (e.g., Spark and Hadoop) that process Big Data. Most existing research optimizes data locality from the perspective of task scheduling. However, as the execution containers of Spark's tasks, the executors launched on different nodes directly affect the data locality achieved by the tasks. This article improves the data locality of tasks through executor allocation in the Spark framework. First, because of the different communication modes at different stages, we separately model the communication cost of transferring input data to the executors. We then formalize an optimal executor allocation problem that minimizes the total communication cost of transferring all input data. This problem is proven to be NP-hard. Finally, we present a greedy dropping heuristic algorithm to solve the executor allocation problem. Our proposals are implemented in Spark 3.4.0, and their performance is evaluated through representative micro-benchmarks (i.e., WordCount, Join, Sort) and macro-benchmarks (i.e., PageRank and LDA). Extensive experiments show that the proposed executor allocation strategy decreases network traffic and data access time by improving data locality during task scheduling. Its performance benefits are particularly significant for iterative applications.
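
The sketch below is only an illustration of the data-locality idea in the abstract, not the paper's NP-hard formulation or its greedy dropping heuristic: executors are placed on the nodes hosting the largest share of input bytes, and the remaining bytes are the ones that must cross the network. Node names and block sizes are hypothetical.

```python
# Illustrative executor placement by data locality: put executors where the data is.
from collections import defaultdict

def allocate_executors(block_locations, num_executors):
    """block_locations: list of (node, size_bytes) for each input block.
    Returns the nodes holding the most input bytes."""
    bytes_per_node = defaultdict(int)
    for node, size in block_locations:
        bytes_per_node[node] += size
    ranked = sorted(bytes_per_node, key=bytes_per_node.get, reverse=True)
    return ranked[:num_executors]

def remote_bytes(block_locations, executor_nodes):
    """Bytes that must cross the network because no executor is data-local."""
    return sum(size for node, size in block_locations if node not in executor_nodes)

if __name__ == "__main__":
    blocks = [("node1", 512), ("node1", 256), ("node2", 128),
              ("node3", 640), ("node3", 64)]          # made-up block layout
    chosen = allocate_executors(blocks, num_executors=2)
    print(chosen, remote_bytes(blocks, set(chosen)))  # ['node1', 'node3'] 128
```
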
IEEE Transactions on Cloud Computing, vol. 12, no. 3, pp. 876-888. Citations: 0
Dynamic Task Offloading in Edge Computing Based on Dependency-Aware Reinforcement Learning
IF 6.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-27 | DOI: 10.1109/TCC.2024.3381646
Xiangchun Chen;Jiannong Cao;Yuvraj Sahni;Shan Jiang;Zhixuan Liang
Collaborative edge computing (CEC) is an emerging computing paradigm in which edge nodes collaborate to perform tasks from end devices. Task offloading decides when and at which edge node tasks are executed. Most existing studies assume task profiles and network conditions are known in advance, which can hardly adapt to dynamic real-world computation environments. Some learning-based methods use online task offloading without considering task dependency and network flow scheduling, leading to underutilized resources and flow congestion. We study Online Dependent Task Offloading (ODTO) in CEC, jointly optimizing network flow scheduling to improve quality of service by reducing task completion time and energy consumption. The challenge of ODTO lies in how to offload dependent tasks and schedule network flows in dynamic networks. We model ODTO as a Markov Decision Process (MDP) and propose an Asynchronous Deep Progressive Reinforcement Learning (ADPRL) approach that optimizes offloading and bandwidth decisions. We design a novel dependency-aware reward mechanism to address task dependency and dynamic network conditions. Extensive experiments on the Alibaba cluster trace dataset and a synthetic dataset indicate that our algorithm outperforms heuristic and learning-based methods in average task completion time and energy consumption.
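
As a toy illustration of two ingredients mentioned in the abstract, dependency-aware task readiness and a reward that trades completion time against energy, the snippet below is not the ADPRL agent itself; the task DAG and the energy weight are made up.

```python
# Illustrative pieces of a dependency-aware offloading MDP: which tasks are
# ready given DAG dependencies, and a reward balancing latency and energy.

def ready_tasks(deps, finished):
    """deps: {task: set(predecessor tasks)}. A task is ready to offload only
    when all of its predecessors have finished."""
    return [t for t, pre in deps.items() if t not in finished and pre <= finished]

def reward(completion_time_s, energy_j, energy_weight=0.5):
    """Smaller completion time and energy yield a larger (less negative) reward.
    The weight is an assumed trade-off parameter, not the paper's value."""
    return -(completion_time_s + energy_weight * energy_j)

if __name__ == "__main__":
    deps = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}  # made-up DAG
    print(ready_tasks(deps, finished={"A"}))          # ['B', 'C']
    print(reward(completion_time_s=1.2, energy_j=3.0))
```
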
IEEE Transactions on Cloud Computing, vol. 12, no. 2, pp. 594-608. Citations: 0
Deep Reinforcement Learning Based Dynamic Flowlet Switching for DCN
IF 6.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-27 | DOI: 10.1109/TCC.2024.3382132
Xinglong Diao;Huaxi Gu;Wenting Wei;Guoyong Jiang;Baochun Li
Flowlet switching has been proven to be an effective technology for fine-grained load balancing in data center networks. However, flowlet detection based on static flowlet timeout values lacks accuracy and effectiveness in complex network environments. In this article, we propose a new deep reinforcement learning approach, called DRLet, to dynamically detect flowlets. DRLet offers two advantages: first, it provides dynamic flowlet timeout values that split bursts into fine-grained flowlets; second, flowlet timeout values are automatically configured by the deep reinforcement learning agent, which requires only simple and measurable network states, rather than any prior knowledge, to achieve the pre-defined goal. With our approach, the flowlet timeout value dynamically matches the network load scenario, ensuring the accuracy and effectiveness of flowlet detection while suppressing packet reordering. Our results show that DRLet achieves superior performance compared to existing schemes based on static flowlet timeout values in both baseline and asymmetric topologies.
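
A minimal sketch of flowlet detection with an adjustable timeout (illustrative only, not DRLet's data plane or its RL agent): a new flowlet starts whenever the inter-packet gap of a flow exceeds the current timeout, so a smaller timeout yields finer-grained flowlets. The arrival times below are invented.

```python
# Illustrative flowlet detector: split a flow's packets into flowlets at gaps
# larger than the timeout. DRLet would tune timeout_s dynamically via RL.

def split_into_flowlets(arrival_times_s, timeout_s):
    """arrival_times_s: sorted packet arrival times of one flow (seconds)."""
    flowlets, current = [], [arrival_times_s[0]]
    for prev, cur in zip(arrival_times_s, arrival_times_s[1:]):
        if cur - prev > timeout_s:      # gap exceeds timeout -> new flowlet
            flowlets.append(current)
            current = []
        current.append(cur)
    flowlets.append(current)
    return flowlets

if __name__ == "__main__":
    times = [0.000, 0.001, 0.002, 0.050, 0.051, 0.200]   # made-up arrivals
    print(len(split_into_flowlets(times, timeout_s=0.010)))  # 3 flowlets
    print(len(split_into_flowlets(times, timeout_s=0.100)))  # 2 flowlets
```
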
IEEE Transactions on Cloud Computing, vol. 12, no. 2, pp. 580-593. Citations: 0
Makespan and Security-Aware Workflow Scheduling for Cloud Service Cost Minimization
IF 6.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-27 | DOI: 10.1109/TCC.2024.3382351
Liying Li;Chengliang Zhou;Peijin Cong;Yufan Shen;Junlong Zhou;Tongquan Wei
The market penetration of Infrastructure-as-a-Service (IaaS) in cloud computing is increasing, benefiting from its flexibility and scalability. One of the most important issues for IaaS cloud service providers is to minimize the monetary cost while meeting cloud user experience requirements such as makespan and security. Prior works on cloud service cost minimization ignore either security or makespan, both of which are very important for user experience. In this article, we propose a two-stage algorithm that minimizes the cloud service cost while satisfying the security and makespan requirements of cloud users. Specifically, in the first stage, we propose a novel security service selection scheme that ensures system security by judiciously selecting low-cost security services for tasks under time and security constraints. In the second stage, to further reduce the cloud service cost, we design a workflow scheduling method based on an improved firefly algorithm (IFA). The IFA-based method schedules cloud service workflows onto low-cost virtual machines while guaranteeing security and makespan requirements. It can quickly find the minimum-cost workflow scheduling solution using our designed updating scheme and mapping operator. Extensive simulations are conducted on real-world workflows to verify the efficacy of the proposed two-stage method. Simulation results show that the proposed two-stage method outperforms the baseline and two benchmarking methods in terms of cost minimization without violating security and time constraints. Compared to the benchmarking methods, the cloud service cost can be reduced by up to 57.6% using our proposed approach.
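
The snippet below sketches only the cost-versus-makespan trade-off that the scheduling stage searches over; it is an exhaustive stand-in for the improved firefly algorithm, the security-service selection stage is omitted for brevity, and the VM prices, speeds, task sizes, and deadline are hypothetical.

```python
# Illustrative fitness for deadline-constrained, cost-minimizing VM assignment.
import itertools

VMS = {"small": {"price_h": 0.05, "speed": 1.0},   # made-up VM catalog
       "large": {"price_h": 0.20, "speed": 4.0}}

def evaluate(assignment, task_hours, deadline_h):
    """assignment: one VM type per task (tasks assumed independent, each on its
    own VM). Returns (meets_deadline, monetary_cost)."""
    makespan = max(t / VMS[v]["speed"] for t, v in zip(task_hours, assignment))
    cost = sum(t / VMS[v]["speed"] * VMS[v]["price_h"]
               for t, v in zip(task_hours, assignment))
    return makespan <= deadline_h, cost

def cheapest_feasible(task_hours, deadline_h):
    best = None
    for assignment in itertools.product(VMS, repeat=len(task_hours)):
        feasible, cost = evaluate(assignment, task_hours, deadline_h)
        if feasible and (best is None or cost < best[1]):
            best = (assignment, cost)
    return best

if __name__ == "__main__":
    print(cheapest_feasible(task_hours=[4.0, 2.0, 8.0], deadline_h=3.0))
```
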
IEEE Transactions on Cloud Computing, vol. 12, no. 2, pp. 609-624. Citations: 0
Live Migration of Virtual Machines Based on Dirty Page Similarity
IF 6.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-20 | DOI: 10.1109/TCC.2024.3379494
Yucong Chen;Shuaixin Xu;Hubin Yang;Rui Zhou;Deke Guo;Qingguo Zhou
Pre-copy-based Virtual Machine (VM) live migration seamlessly migrates a running VM to the target physical server by pre-copying memory pages and applying updates through iterative rounds. This method, which has high reliability and robustness, can effectively achieve load balancing and reduce energy consumption, and it is widely used in industry to manage server cluster resources. However, it also involves problems such as the repeated transmission of dirty memory pages and the convergence failure of iterative transmission; hence, pre-copy live migration cannot efficiently allocate server cluster resources. To resolve these problems, a VM pre-copy live migration technology based on the similarity of dirty memory pages is proposed in this paper. The access priority of historical dirty memory pages was determined by calculating a similarity weight based on the Hamming distance. A priority-based delayed-transmission scheme for high-dirty and low-dirty pages was used to decrease the frequent transmission of high-dirty memory pages, increase the convergence speed of the live-migration iterative copy process, and reduce the overall migration time of VMs. A comparative analysis of experimental results across six dimensions showed that the proposed method achieved better migration efficiency than the conventional live migration strategy.
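
A toy sketch of the priority idea described above (not the paper's implementation): each page's dirty history across iterations is compared, via Hamming distance, against an "always dirty" pattern, and pages that are rarely dirtied are transmitted first while frequently dirtied pages are delayed. The bitmaps are invented.

```python
# Illustrative dirty-page prioritization using Hamming distance over history.

def hamming(a, b):
    """Number of positions at which two equal-length bit lists differ."""
    return sum(x != y for x, y in zip(a, b))

def transmit_order(history):
    """history: one dirty bitmap (list of 0/1 per page) per past iteration.
    Pages whose history is close to 'always dirty' (small Hamming distance)
    are delayed; rarely dirtied pages are transmitted first."""
    n_pages = len(history[0])
    always_dirty = [1] * len(history)
    dist = [hamming([bm[p] for bm in history], always_dirty)
            for p in range(n_pages)]
    return sorted(range(n_pages), key=lambda p: dist[p], reverse=True)

if __name__ == "__main__":
    history = [[1, 0, 1, 1],
               [1, 0, 0, 1],
               [1, 1, 0, 1]]        # 3 iterations, 4 pages -- made-up bitmaps
    print(transmit_order(history))  # rarely dirtied pages 1 and 2 come first
```
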
IEEE Transactions on Cloud Computing, vol. 12, no. 2, pp. 563-579. Citations: 0
Hyperion: Hardware-Based High-Performance and Secure System for Container Networks
IF 5.3 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-20 | DOI: 10.1109/TCC.2024.3403175
Myoungsung You;Minjae Seo;Jaehan Kim;Seungwon Shin;Jaehyun Nam
Containers have become the predominant virtualization technique for deploying microservices in cloud environments. However, container networking, critical for microservice functionality, often introduces significant overhead and resource consumption, potentially degrading the performance of microservices. This challenge arises from the complexity of the software-based network data plane, which is responsible for network virtualization and access control within container traffic. To tackle this challenge, we propose Hyperion, a novel hardware-based container networking system that prioritizes high performance and security. Leveraging smartNICs, commonly found in cloud environments, Hyperion implements a fully functional container network data plane encompassing network virtualization and access control. It can also dynamically optimize its data plane to respond agilely to frequent changes in container environments, ensuring up-to-date data plane operation. This hardware-based design enables Hyperion to significantly improve overall container networking performance without relying on host system resources. Notably, Hyperion integrates seamlessly with existing containerized applications without requiring modifications. Our evaluation shows that, compared to state-of-the-art solutions, Hyperion improves HTTP container communication latency and throughput by up to 2.25x and 4.3x, respectively. Furthermore, it reduces the CPU utilization associated with container networking by up to 4x.
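
The sketch below only illustrates the kind of per-packet access-control check that such a container-network data plane performs; it is plain Python rather than smartNIC offload code, and the pod names, ports, and default-deny policy are assumptions for illustration.

```python
# Illustrative container-traffic access control: match a packet against a
# small policy table, denying anything that has no explicit allow rule.

POLICIES = [
    # (src_pod, dst_pod, dst_port, action) -- hypothetical rules
    ("frontend", "backend", 8080, "allow"),
    ("frontend", "db", 5432, "deny"),
]

def check(src_pod, dst_pod, dst_port):
    for src, dst, port, action in POLICIES:
        if (src, dst, port) == (src_pod, dst_pod, dst_port):
            return action
    return "deny"  # assumed default-deny for unmatched container traffic

if __name__ == "__main__":
    print(check("frontend", "backend", 8080))  # allow
    print(check("frontend", "db", 5432))       # deny
```
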
IEEE Transactions on Cloud Computing, vol. 12, no. 3, pp. 844-858. Citations: 0
Game-Based Low Complexity and Near Optimal Task Offloading for Mobile Blockchain Systems
IF 6.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-18 | DOI: 10.1109/TCC.2024.3376394
Junfei Wang;Jing Li;Zhen Gao;Zhu Han;Chao Qiu;Xiaofei Wang
The Internet of Things (IoT) finds applications across diverse fields but grapples with privacy and security concerns. Blockchain offers a remedy by instilling trust among IoT devices. However, the development of blockchain in IoT encounters hurdles due to its resource-intensive computation, notably in PoW-based systems. Cloud and edge computing can facilitate the application of blockchain in this environment, and IoT users who want to mine in the blockchain need to pay computation resource rent to the Cloud Computing Service Provider (CCSP) for offloading the mining workload. In this scenario, these IoT miners can form groups to trade with the CCSP to maximize their utility. In this paper, a mixed model of the Stackelberg game and the coalition formation game is adopted to address the grouping and pricing issues between IoT miners and the CCSP. In particular, the Stackelberg game is utilized to handle the pricing problem, and the coalition formation game is employed to tackle the best group partition problem. Moreover, a coalition formation algorithm is proposed to obtain a near-optimal solution with very low complexity. Simulation results show that our proposed algorithm achieves performance very close to the exhaustive search method, outperforms other existing schemes, and requires only a small computation overhead.
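
As a toy Stackelberg-pricing sketch (not the paper's utility model or its coalition formation algorithm): the CCSP, as the leader, posts a unit price, each miner's best response maximizes an assumed concave utility, and the leader searches a price grid for the revenue-maximizing price. The valuation parameters, capacity, and price grid are invented.

```python
# Illustrative leader-follower pricing: followers best-respond to the posted
# price, and the leader picks the price that maximizes its revenue.

def best_response(value, price):
    """Maximize value*sqrt(x) - price*x over x >= 0, giving x* = (value/(2*price))**2.
    The sqrt-shaped utility is an assumption, not the paper's model."""
    return (value / (2 * price)) ** 2

def ccsp_revenue(price, miner_values, capacity):
    demand = sum(best_response(v, price) for v in miner_values)
    return price * min(demand, capacity)   # served demand capped by capacity

if __name__ == "__main__":
    miners = [4.0, 6.0, 8.0]                    # made-up per-miner valuations
    prices = [p / 10 for p in range(5, 51)]     # candidate unit prices 0.5..5.0
    best = max(prices, key=lambda p: ccsp_revenue(p, miners, capacity=50))
    print(best, ccsp_revenue(best, miners, capacity=50))
```
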
IEEE Transactions on Cloud Computing, vol. 12, no. 2, pp. 539-549. Citations: 0
Anole: Scheduling Flows for Fast Datacenter Networks With Packet Re-Prioritization
IF 6.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-18 | DOI: 10.1109/TCC.2024.3376716
Song Zhang;Lide Suo;Wenxin Li;Yuan Liu;Yulong Li;Keqiu Li
Many existing datacenter transports perform one-shot packet priority tagging at end-hosts and leave the priorities fixed during the packets' transmission. In this article, we experimentally show that: 1) such fixed packet priorities are not sufficient for minimizing FCT (flow completion time), and 2) adjusting packet transmission priority in the network requires effective coordination among switches. Building on these insights, we present Anole, a new datacenter transport that advocates packet re-prioritization in near-bottleneck switches to minimize FCT. To this end, Anole integrates three simple-yet-effective techniques. First, it employs an in-network telemetry (INT) based approach to dynamically detect the bottleneck for each flow. Second, it adopts an on-off rate control mechanism at each sender to pause heavily congested flows while sending lightly- and non-congested ones. Last, it leverages an altruistic scheduling policy at each switch to let the flows whose next hops are bottleneck switches give way to others. We implement an Anole prototype based on DPDK and show, through both testbed experiments and simulations, that Anole delivers significant performance advantages. For example, compared to EPN, Homa, and Aeolus, it shortens the average FCT of all (small) flows by up to 61.6% (89.1%).
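
The snippet below sketches only the on-off sender rule summarized above, pausing flows whose INT-reported bottleneck queue is heavily occupied; it is not the DPDK prototype, and the congestion threshold, switch names, and occupancy values are illustrative.

```python
# Illustrative on-off rate control driven by in-network telemetry (INT).

HEAVY_CONGESTION_FRACTION = 0.8   # assumed threshold: occupancy above this pauses a flow

def schedule(flows, int_queue_occupancy):
    """flows: {flow_id: bottleneck_switch}; int_queue_occupancy: {switch: 0..1}.
    Returns (flows allowed to send, flows paused)."""
    sending, paused = [], []
    for flow_id, bottleneck in flows.items():
        occupancy = int_queue_occupancy.get(bottleneck, 0.0)
        (paused if occupancy >= HEAVY_CONGESTION_FRACTION else sending).append(flow_id)
    return sending, paused

if __name__ == "__main__":
    flows = {"f1": "sw-core-1", "f2": "sw-core-2", "f3": "sw-core-1"}  # made up
    telemetry = {"sw-core-1": 0.95, "sw-core-2": 0.30}                 # made up
    print(schedule(flows, telemetry))   # (['f2'], ['f1', 'f3'])
```
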
IEEE Transactions on Cloud Computing, vol. 12, no. 2, pp. 550-562. Citations: 0
Service Recovery in NFV-Enabled Networks: Algorithm Design and Analysis
IF 6.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-17 | DOI: 10.1109/TCC.2024.3402185
Dung H. P. Nguyen;Chih-Chieh Lin;Tu N. Nguyen;Shao-I Chu;Bing-Hong Liu
Network function virtualization (NFV), a novel network architecture, promises substantial convenience in network design, deployment, and management. This paradigm, although flexible, suffers from many risks that can interrupt services, such as node and link failures. Thus, resiliency is one of the requirements in NFV-enabled network design for recovering network services once failures occur. Therefore, in addition to the primary chain of virtual network functions (VNFs) for a service, one typically allocates corresponding backup VNFs to satisfy the resiliency requirement. Nevertheless, this approach consumes network resources that could otherwise be used to deploy more services. Moreover, one can hardly recover all interrupted services due to the limitation of network backup resources. In this context, the importance of the services is one of the factors used to judge the recovery priority. In this article, we first assign each service a weight expressing its importance and then seek to recover interrupted services such that the total weight of the recovered services is maximized. Hence, we also call this issue the VNF restoration for recovering weighted services (VRRWS) problem. We next demonstrate that the VRRWS problem is NP-hard and propose an effective technique, termed the online recovery algorithm (ORA), to address the problem without requiring backup resources. Finally, we conduct extensive simulations to evaluate the performance of the proposed algorithm as well as the factors affecting the recovery. The experiments show that the available VNFs should be migrated to appropriate nodes during the recovery process to achieve better results.
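
As an illustrative sketch of the VRRWS objective (maximize the total weight of recovered services under limited residual capacity), the code below uses a simple weight-per-resource greedy pass; it is not the paper's ORA, which also migrates VNFs, and the service names, weights, and demands are made up.

```python
# Illustrative weighted service recovery: greedily recover the services with
# the best weight-to-resource ratio until residual capacity runs out.

def recover(services, capacity):
    """services: list of (name, weight, resource_demand)."""
    chosen, remaining = [], capacity
    ranked = sorted(services, key=lambda s: s[1] / s[2], reverse=True)
    for name, weight, demand in ranked:
        if demand <= remaining:
            chosen.append(name)
            remaining -= demand
    return chosen

if __name__ == "__main__":
    services = [("video", 10, 4), ("voip", 8, 2), ("backup", 3, 3)]  # made up
    print(recover(services, capacity=6))   # ['voip', 'video']
```
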
IEEE Transactions on Cloud Computing, vol. 12, no. 2, pp. 800-813. Citations: 0