
IEEE Transactions on Cloud Computing: Latest Publications

A Cost-Aware Operator Migration Approach for Distributed Stream Processing System
IF 5.3 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-02-04 | DOI: 10.1109/TCC.2025.3538512
Jiawei Tan;Zhuo Tang;Wentong Cai;Wen Jun Tan;Xiong Xiao;Jiapeng Zhang;Yi Gao;Kenli Li
Stream processing is integral to edge computing due to its low-latency attributes. Nevertheless, variability in user group sizes and disparate computing capabilities of edge devices necessitate frequent operator migrations within the stream. Moreover, intricate dependencies among stream operators often obscure the detection of potential bottleneck operators until an identified bottleneck is migrated in the stream. To address this, we propose a Cost-Aware Operator Migration (CAOM) scheme. The CAOM scheme incorporates a bottleneck operator detection mechanism that directly identifies all bottleneck operators based on task running metrics. This approach avoids multiple consecutive operator migrations in complex tasks, reducing the number of task interruptions caused by operator migration. Moreover, CAOM takes into account the temporal variance in operator migration costs. By factoring in the fluctuating data generation rate from data sources at different time intervals, CAOM selects the optimal start time for operator migration to minimize the amount of accumulated data during task interruptions. Finally, we implemented CAOM on Apache Flink and evaluated its performance using the WordCount and Nexmark applications. Our experiments show that CAOM effectively reduces the number of necessary operator migrations in tasks with complex topologies and decreases the latency overhead associated with operator migration compared to state-of-the-art schemes.
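The timing side of CAOM can be illustrated compactly: given a forecast of the source's data rate, pick the migration start time that minimizes the backlog accumulated while the operator is paused. The Python sketch below makes that idea concrete; the per-second rate forecast, window length, and deadline are illustrative assumptions, not the paper's actual cost model.

```python
# A minimal sketch, assuming a per-second rate forecast: choose the migration
# start time that minimizes data accumulated while the operator is paused.
# rate_forecast, duration, and deadline are illustrative, not CAOM's model.

def accumulated_data(rate_forecast, start, duration):
    """Sum of the forecast source rate (records/sec) over the pause window."""
    return sum(rate_forecast[start:start + duration])

def best_migration_start(rate_forecast, duration, deadline):
    """Scan candidate start times up to the deadline; keep the cheapest one."""
    candidates = range(0, deadline - duration + 1)
    return min(candidates,
               key=lambda t: accumulated_data(rate_forecast, t, duration))

# Example: a bursty rate curve sampled once per second.
rate = [100 + 80 * (i % 60 < 30) for i in range(300)]   # 180 rec/s half the time
start = best_migration_start(rate, duration=10, deadline=120)
print(f"migrate at t={start}s, backlog={accumulated_data(rate, start, 10)} records")
```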
{"title":"A Cost-Aware Operator Migration Approach for Distributed Stream Processing System","authors":"Jiawei Tan;Zhuo Tang;Wentong Cai;Wen Jun Tan;Xiong Xiao;Jiapeng Zhang;Yi Gao;Kenli Li","doi":"10.1109/TCC.2025.3538512","DOIUrl":"https://doi.org/10.1109/TCC.2025.3538512","url":null,"abstract":"Stream processing is integral to edge computing due to its low-latency attributes. Nevertheless, variability in user group sizes and disparate computing capabilities of edge devices necessitate frequent operator migrations within the stream. Moreover, intricate dependencies among stream operators often obscure the detection of potential bottleneck operators until an identified bottleneck is migrated in the stream. To address this, we propose a Cost-Aware Operator Migration (CAOM) scheme. The CAOM scheme incorporates a bottleneck operator detection mechanism that directly identifies all bottleneck operators based on task running metrics. This approach avoids multiple consecutive operator migrations in complex tasks, reducing the number of task interruptions caused by operator migration. Moreover, CAOM takes into account the temporal variance in operator migration costs. By factoring in the fluctuating data generation rate from data sources at different time intervals, CAOM selects the optimal start time for operator migration to minimize the amount of accumulated data during task interruptions. Finally, we implemented CAOM on Apache Flink and evaluated its performance using the WordCount and Nexmark applications. Our experiments show that CAOM effectively reduces the number of necessary operator migrations in tasks with complex topologies and decreases the latency overhead associated with operator migration compared to state-of-the-art schemes.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"441-454"},"PeriodicalIF":5.3,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143580855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Joint Computation Offloading and Resource Allocation in Mobile-Edge Cloud Computing: A Two-Layer Game Approach
IF 5.3 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-02-03 | DOI: 10.1109/TCC.2025.3538090
Zhenli He;Ying Guo;Xiaolong Zhai;Mingxiong Zhao;Wei Zhou;Keqin Li
Mobile-Edge Cloud Computing (MECC) plays a crucial role in balancing low-latency services at the edge with the computational capabilities of cloud data centers (DCs). However, many existing studies focus on single-provider settings or limit their analysis to interactions between mobile devices (MDs) and edge servers (ESs), often overlooking the competition that occurs among ESs from different providers. This article introduces an innovative two-layer game framework that captures independent self-interested competition among MDs and ESs, providing a more accurate reflection of multi-vendor environments. Additionally, the framework explores the influence of cloud-edge collaboration on ES competition, offering new insights into these dynamics. The proposed model extends previous research by developing algorithms that optimize task offloading and resource allocation strategies for both MDs and ESs, ensuring the convergence to Nash equilibrium in both layers. Simulation results demonstrate the potential of the framework to improve resource efficiency and system responsiveness in multi-provider MECC environments.
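As a rough illustration of the lower (device) layer of such a game, the sketch below runs sequential best-response dynamics: each mobile device repeatedly switches to the server that minimizes its own cost until no device wants to deviate, which is by definition a Nash equilibrium. The congestion-style cost function is an assumption for illustration, not the paper's formulation.

```python
# Best-response sketch of the device layer: each device picks the server that
# minimizes its own cost given the others' load, until no one deviates.
# The quadratic congestion cost is an assumed stand-in for the paper's model.

def device_cost(server_load, own_demand):
    return (server_load + own_demand) ** 2      # congestion-style cost, assumed

def best_response_dynamics(demands, num_servers, max_rounds=100):
    choice = [0] * len(demands)                 # all devices start on server 0
    for _ in range(max_rounds):
        load = [0.0] * num_servers
        for d, s in zip(demands, choice):
            load[s] += d
        changed = False
        for i, d in enumerate(demands):
            load[choice[i]] -= d                # remove own contribution
            best = min(range(num_servers),
                       key=lambda s: device_cost(load[s], d))
            if best != choice[i]:
                choice[i], changed = best, True
            load[choice[i]] += d
        if not changed:                         # no profitable deviation:
            return choice                       # a Nash equilibrium
    return choice

print(best_response_dynamics([3.0, 1.0, 2.0, 2.0], num_servers=2))
```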
{"title":"Joint Computation Offloading and Resource Allocation in Mobile-Edge Cloud Computing: A Two-Layer Game Approach","authors":"Zhenli He;Ying Guo;Xiaolong Zhai;Mingxiong Zhao;Wei Zhou;Keqin Li","doi":"10.1109/TCC.2025.3538090","DOIUrl":"https://doi.org/10.1109/TCC.2025.3538090","url":null,"abstract":"Mobile-Edge Cloud Computing (MECC) plays a crucial role in balancing low-latency services at the edge with the computational capabilities of cloud data centers (DCs). However, many existing studies focus on single-provider settings or limit their analysis to interactions between mobile devices (MDs) and edge servers (ESs), often overlooking the competition that occurs among ESs from different providers. This article introduces an innovative two-layer game framework that captures independent self-interested competition among MDs and ESs, providing a more accurate reflection of multi-vendor environments. Additionally, the framework explores the influence of cloud-edge collaboration on ES competition, offering new insights into these dynamics. The proposed model extends previous research by developing algorithms that optimize task offloading and resource allocation strategies for both MDs and ESs, ensuring the convergence to Nash equilibrium in both layers. Simulation results demonstrate the potential of the framework to improve resource efficiency and system responsiveness in multi-provider MECC environments.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"411-428"},"PeriodicalIF":5.3,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143580874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Developments on the “Machine Learning as a Service for High Energy Physics” Framework and Related Cloud Native Solution
IF 5.3 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-02-03 | DOI: 10.1109/TCC.2025.3535793
Luca Giommi;Daniele Spiga;Mattia Paladino;Valentin Kuznetsov;Daniele Bonacorsi
Machine Learning (ML) techniques have been successfully used in many areas of High Energy Physics (HEP) and will play a significant role in the success of the upcoming High-Luminosity Large Hadron Collider (HL-LHC) program at CERN. An unprecedented amount of data at the exascale will be collected by LHC experiments in the next decade, and this effort will require novel approaches to train and use ML models. The work presented in this paper is focused on the development of an ML as a Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTPS calls. These pipelines are executed using the MLaaS4HEP framework, which allows reading data, processing data, and training ML models directly using ROOT files of arbitrary size from local or distributed data sources. In particular, new features implemented in the framework are presented, along with updates to the architecture of an existing prototype of the MLaaS4HEP cloud service. This solution includes two OAuth2 proxy servers as an authentication/authorization layer, a MLaaS4HEP server, an XRootD proxy server for enabling access to remote ROOT data, and the TensorFlow as a Service (TFaaS) service in charge of the inference phase.
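To make the "ML pipelines via HTTPS calls" idea concrete, here is a hypothetical client-side sketch in Python. The endpoint path, payload fields, file URI, and token handling are all illustrative assumptions; the real MLaaS4HEP API should be taken from its documentation.

```python
# Hypothetical client sketch of submitting a training workflow over HTTPS.
# Endpoint, payload schema, and bearer token are assumptions, not the real API.
import requests

workflow = {
    "reader": {"files": ["root://eos.example.org//data/events.root"],  # assumed URI
               "branches": ["pt", "eta", "phi"]},
    "model": {"type": "keras_sequential", "epochs": 5},                # assumed spec
}

resp = requests.post(
    "https://mlaas4hep.example.org/api/train",           # hypothetical endpoint
    json=workflow,
    headers={"Authorization": "Bearer <oauth2-token>"},  # OAuth2 proxy layer
    timeout=60,
)
resp.raise_for_status()
print("submitted training job:", resp.json())
```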
{"title":"Developments on the “Machine Learning as a Service for High Energy Physics” Framework and Related Cloud Native Solution","authors":"Luca Giommi;Daniele Spiga;Mattia Paladino;Valentin Kuznetsov;Daniele Bonacorsi","doi":"10.1109/TCC.2025.3535793","DOIUrl":"https://doi.org/10.1109/TCC.2025.3535793","url":null,"abstract":"Machine Learning (ML) techniques have been successfully used in many areas of High Energy Physics (HEP) and will play a significant role in the success of upcoming High-Luminosity Large Hadron Collider (HL-LHC) program at CERN. An unprecedented amount of data at the exascale will be collected by LHC experiments in the next decade, and this effort will require novel approaches to train and use ML models. The work presented in this paper is focused on the developments of a ML as a Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTPs calls. These pipelines are executed by using MLaaS4HEP framework, which allows reading data, processing data, and training ML models directly using ROOT files of arbitrary size from local or distributed data sources. In particular, new features implemented on the framework will be presented as well as updates on the architecture of an existing prototype of the MLaaS4HEP cloud service will be provided. This solution includes two OAuth2 proxy servers as authentication/authorization layer, a MLaaS4HEP server, an XRootD proxy server for enabling access to remote ROOT data, and the TensorFlow as a Service (TFaaS) service in charge of the inference phase.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"429-440"},"PeriodicalIF":5.3,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143580872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Verifiable Encrypted Image Retrieval With Reversible Data Hiding in Cloud Environment
IF 5.3 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-29 | DOI: 10.1109/TCC.2025.3535937
Mingyue Li;Yuting Zhu;Ruizhong Du;Chunfu Jia
With growing numbers of users outsourcing images to cloud servers, privacy-preserving content-based image retrieval (CBIR) is widely studied. However, existing privacy-preserving CBIR schemes have limitations in terms of low search accuracy and efficiency due to the use of unreasonable index structures or retrieval methods. Meanwhile, existing result verification schemes do not consider the privacy of verification information. To address these problems, we propose a new secure verification encrypted image retrieval scheme. Specifically, we design an additional homomorphic bitmap index structure by using a pre-trained CNN model with modified fully connected layers to extract image feature vectors and organize them into a bitmap. It makes the extracted features more representative and robust compared to manually designed features, and only performs vector addition during the search process, improving search efficiency and accuracy. Moreover, we design a reversible data hiding (RDH) technique with color images, which embeds the verification information into the least significant bits of the encrypted image pixels to improve the security of the verification information. Finally, we analyze the security of our scheme against chosen-plaintext attacks (CPA) in the security analysis and demonstrate the effectiveness of our scheme on two real-world datasets (i.e., COCO and Flickr-25k) through experiments.
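The least-significant-bit embedding step lends itself to a short sketch. The NumPy code below writes verification bits into the LSBs of stand-in encrypted pixels and reads them back; a real reversible scheme additionally records recovery information so the original pixel values can be restored, which is omitted here.

```python
# Minimal LSB-embedding sketch of the RDH step: write verification bits into
# the least significant bits of (already encrypted) pixels, then extract them.
# Recovery bookkeeping for full reversibility is intentionally omitted.
import numpy as np

def embed_bits(encrypted_pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite the LSB of the first len(bits) pixels with the given bits."""
    flat = encrypted_pixels.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & ~np.uint8(1)) | bits
    return flat.reshape(encrypted_pixels.shape)

def extract_bits(stego_pixels: np.ndarray, n: int) -> np.ndarray:
    return stego_pixels.flatten()[:n] & np.uint8(1)

img = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)  # stand-in ciphertext
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)        # verification bits
stego = embed_bits(img, mark)
assert np.array_equal(extract_bits(stego, mark.size), mark)
```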
{"title":"Verifiable Encrypted Image Retrieval With Reversible Data Hiding in Cloud Environment","authors":"Mingyue Li;Yuting Zhu;Ruizhong Du;Chunfu Jia","doi":"10.1109/TCC.2025.3535937","DOIUrl":"https://doi.org/10.1109/TCC.2025.3535937","url":null,"abstract":"With growing numbers of users outsourcing images to cloud servers, privacy-preserving content-based image retrieval (CBIR) is widely studied. However, existing privacy-preserving CBIR schemes have limitations in terms of low search accuracy and efficiency due to the use of unreasonable index structures or retrieval methods. Meanwhile, existing result verification schemes do not consider the privacy of verification information. To address these problems, we propose a new secure verification encrypted image retrieval scheme. Specifically, we design an additional homomorphic bitmap index structure by using a pre-trained CNN model with modified fully connected layers to extract image feature vectors and organize them into a bitmap. It makes the extracted features more representative and robust compared to manually designed features, and only performs vector addition during the search process, improving search efficiency and accuracy. Moreover, we design a reversible data hiding (RDH) technique with color images, which embeds the verification information into the least significant bits of the encrypted image pixels to improve the security of the verification information. Finally, we analyze the security of our scheme against chosen-plaintext attacks (CPA) in the security analysis and demonstrate the effectiveness of our scheme on two real-world datasets (i.e., COCO and Flickr-25 k) through experiments.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"397-410"},"PeriodicalIF":5.3,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143580871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PiCoP: Service Mesh for Sharing Microservices in Multiple Environments Using Protocol-Independent Context Propagation
IF 5.3 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-20 | DOI: 10.1109/TCC.2025.3531954
Hiroya Onoe;Daisuke Kotani;Yasuo Okabe
Continuous integration and continuous delivery require many production-like environments in a cluster for testing, staging, debugging, and previewing. In applications built on microservice architecture, sharing common microservices in multiple environments is an effective way to reduce resource consumption. Previous methods extend application layer protocols like HTTP and gRPC to propagate contexts including environment identifiers and to route requests. However, microservices also use other protocols such as MySQL, Redis, Memcached, and AMQP, and extending each protocol requires lots of effort to implement the extensions. This paper proposes PiCoP, a framework to share microservices in multiple environments by propagating contexts and routing requests independently of application layer protocols. PiCoP provides a protocol that propagates contexts by appending them to the front of each TCP byte stream and constructs a service mesh that uses the protocol to route requests. We design the protocol to make it easy to instrument into a system. We demonstrate that PiCoP can reduce resource usage and that it applies to a real-world application, enabling the sharing of microservices in multiple environments using any application layer protocol.
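The core framing idea, prepending a context to the front of a TCP byte stream regardless of the application protocol riding on top, can be sketched in a few lines. The 4-byte big-endian length prefix and JSON context encoding below are assumptions of this illustration; PiCoP's actual wire format may differ.

```python
# Sketch of protocol-independent context propagation: a length-prefixed context
# header is prepended once, and the rest of the byte stream is left untouched,
# whatever protocol it carries. Wire-format details here are assumptions.
import json
import struct

def wrap_stream(context: dict, payload: bytes) -> bytes:
    """Encode context as JSON, prefix its length, then append the raw payload."""
    header = json.dumps(context).encode("utf-8")
    return struct.pack("!I", len(header)) + header + payload

def unwrap_stream(data: bytes) -> tuple[dict, bytes]:
    (hlen,) = struct.unpack("!I", data[:4])
    context = json.loads(data[4 : 4 + hlen])
    return context, data[4 + hlen :]           # remainder is the untouched stream

wire = wrap_stream({"env": "preview-42"}, b"\x16\x03\x01...")  # any protocol bytes
ctx, rest = unwrap_stream(wire)
print(ctx, rest)
```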
{"title":"PiCoP: Service Mesh for Sharing Microservices in Multiple Environments Using Protocol-Independent Context Propagation","authors":"Hiroya Onoe;Daisuke Kotani;Yasuo Okabe","doi":"10.1109/TCC.2025.3531954","DOIUrl":"https://doi.org/10.1109/TCC.2025.3531954","url":null,"abstract":"Continuous integration and continuous delivery require many production-like environments in a cluster for testing, staging, debugging, and previewing. In applications built on microservice architecture, sharing common microservices in multiple environments is an effective way to reduce resource consumption. Previous methods extend application layer protocols like HTTP and gRPC to propagate contexts including environment identifiers and to route requests. However, microservices also use other protocols such as MySQL, Redis, Memcached, and AMQP, and extending each protocol requires lots of effort to implement the extensions. This paper proposes PiCoP, a framework to share microservices in multiple environments by propagating contexts and routing requests independently of application layer protocols. PiCoP provides a protocol that propagates contexts by appending them to the front of each TCP byte stream and constructs a service mesh that uses the protocol to route requests. We design the protocol to make it easy to instrument into a system. We demonstrate that PiCoP can reduce resource usage and that it applies to a real-world application, enabling the sharing of microservices in multiple environments using any application layer protocol.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"383-396"},"PeriodicalIF":5.3,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143580854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Joint Adaptive Aggregation and Resource Allocation for Hierarchical Federated Learning Systems Based on Edge-Cloud Collaboration
IF 5.3 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-15 | DOI: 10.1109/TCC.2025.3530681
Yi Su;Wenhao Fan;Qingcheng Meng;Penghui Chen;Yuan'an Liu
Hierarchical federated learning shows excellent potential for communication-computation trade-offs and reliable data privacy protection by introducing edge-cloud collaboration. Considering non-independent and identically distributed data distribution among devices and edges, this article aims to minimize the final loss function under time and energy budget constraints by optimizing the aggregation frequency and resource allocation jointly. Although there is no closed-form expression relating the final loss function to optimization variables, we divide the hierarchical federated learning process into multiple cloud intervals and analyze the convergence bound for each cloud interval. Then, we transform the initial problem into one that can be adaptively optimized in each cloud interval. We propose an adaptive hierarchical federated learning process, termed as AHFLP, where we determine edge and cloud aggregation frequency for each cloud interval based on estimated parameters, and then the CPU frequency of devices and wireless channel bandwidth allocation can be optimized in each edge. Simulations are conducted under different models, datasets and data distributions, and the results demonstrate the superiority of our proposed AHFLP compared with existing schemes.
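A toy version of the hierarchical aggregation loop helps fix ideas: devices hold local models, edges aggregate them every edge round, and the cloud aggregates the edge models after k1 edge rounds. In the sketch below, the frequency and data sizes are fixed inputs and local training is omitted; choosing the frequencies adaptively per cloud interval is precisely what AHFLP contributes.

```python
# Toy sketch of one "cloud interval" of hierarchical federated averaging.
# Fixed k1 and dataset sizes are assumptions; AHFLP adapts them per interval.
import numpy as np

def weighted_avg(models, weights):
    return np.average(np.stack(models), axis=0, weights=np.asarray(weights, float))

def cloud_interval(device_models, sizes_per_edge, k1):
    """device_models[e][d] is device d's model under edge e; requires k1 >= 1."""
    for _ in range(k1):                                     # edge-level rounds
        edge_models = [weighted_avg(models, sizes)
                       for models, sizes in zip(device_models, sizes_per_edge)]
        # (local training would update device_models between rounds)
    edge_sizes = [sum(s) for s in sizes_per_edge]
    return weighted_avg(edge_models, edge_sizes)            # cloud aggregation

# Two edges, each with two devices holding a 3-parameter "model".
devices = [[np.ones(3), 2 * np.ones(3)], [3 * np.ones(3), 4 * np.ones(3)]]
print(cloud_interval(devices, sizes_per_edge=[[50, 150], [100, 100]], k1=2))
```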
{"title":"Joint Adaptive Aggregation and Resource Allocation for Hierarchical Federated Learning Systems Based on Edge-Cloud Collaboration","authors":"Yi Su;Wenhao Fan;Qingcheng Meng;Penghui Chen;Yuan'an Liu","doi":"10.1109/TCC.2025.3530681","DOIUrl":"https://doi.org/10.1109/TCC.2025.3530681","url":null,"abstract":"Hierarchical federated learning shows excellent potential for communication-computation trade-offs and reliable data privacy protection by introducing edge-cloud collaboration. Considering non-independent and identically distributed data distribution among devices and edges, this article aims to minimize the final loss function under time and energy budget constraints by optimizing the aggregation frequency and resource allocation jointly. Although there is no closed-form expression relating the final loss function to optimization variables, we divide the hierarchical federated learning process into multiple cloud intervals and analyze the convergence bound for each cloud interval. Then, we transform the initial problem into one that can be adaptively optimized in each cloud interval. We propose an adaptive hierarchical federated learning process, termed as AHFLP, where we determine edge and cloud aggregation frequency for each cloud interval based on estimated parameters, and then the CPU frequency of devices and wireless channel bandwidth allocation can be optimized in each edge. Simulations are conducted under different models, datasets and data distributions, and the results demonstrate the superiority of our proposed AHFLP compared with existing schemes.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"369-382"},"PeriodicalIF":5.3,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Energy-Aware Offloading of Containerized Tasks in Cloud Native V2X Networks
IF 5.3 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-14 | DOI: 10.1109/TCC.2025.3529245
Estela Carmona-Cejudo;Francesco Iadanza
In cloud-native environments, executing vehicle-to-everything (V2X) tasks in edge nodes close to users significantly reduces service end-to-end latency. Containerization further reduces resource and time consumption, and, subsequently, application latency. Since edge nodes are typically resource and energy-constrained, optimizing offloading decisions and managing edge energy consumption is crucial. However, the offloading of containerized tasks has not been thoroughly explored from a practical implementation perspective. This paper proposes an optimization framework for energy-aware offloading of V2X tasks implemented as Kubernetes pods. A weighted utility function is derived based on cumulative pod response time, and an edge-to-cloud offloading decision algorithm (ECODA) is proposed. The system's energy cost model is derived, and a closed-loop repeated reward-based mechanism for CPU adjustment is presented. An energy-aware (EA)-ECODA is proposed to solve the offloading optimization problem while adjusting CPU usage according to energy considerations. Simulations show that ECODA and EA-ECODA outperform first-in, first-served (FIFS) and EA-FIFS in terms of utility, average pod response time, and resource usage, with low computational complexity. Additionally, a real testbed evaluation of a vulnerable road user application demonstrates that ECODA outperforms Kubernetes vertical scaling in terms of service-level delay. Moreover, EA-ECODA significantly improves energy usage utility.
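As a schematic of an edge-to-cloud offloading decision of this kind, the sketch below keeps pods at the edge while CPU capacity lasts, overflows the rest to the cloud, and scores each choice with a weighted response-time utility. The latency numbers, weight, and greedy ordering are illustrative assumptions, not ECODA's actual algorithm.

```python
# Greedy edge-to-cloud offloading sketch with a weighted utility, in the spirit
# of the abstract. Latency constants, the weight, and the ordering are assumed.

EDGE_LATENCY, CLOUD_LATENCY = 5.0, 40.0          # ms, assumed

def offload_decisions(pods, edge_cpu_capacity, weight=0.7):
    placements, used, total_utility = [], 0.0, 0.0
    for name, cpu_demand in sorted(pods, key=lambda p: p[1]):  # small pods first
        if used + cpu_demand <= edge_cpu_capacity:
            used += cpu_demand
            latency, where = EDGE_LATENCY, "edge"
        else:
            latency, where = CLOUD_LATENCY, "cloud"
        placements.append((name, where))
        # reward low latency, penalize CPU pressure (both terms assumed)
        total_utility += weight / latency - (1 - weight) * cpu_demand / 100
    return placements, total_utility

pods = [("vru-detector", 30), ("map-tiles", 20), ("telemetry", 60)]
print(offload_decisions(pods, edge_cpu_capacity=64))
```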
{"title":"Energy-Aware Offloading of Containerized Tasks in Cloud Native V2X Networks","authors":"Estela Carmona-Cejudo;Francesco Iadanza","doi":"10.1109/TCC.2025.3529245","DOIUrl":"https://doi.org/10.1109/TCC.2025.3529245","url":null,"abstract":"In cloud-native environments, executing vehicle-to-everything (V2X) tasks in edge nodes close to users significantly reduces service end-to-end latency. Containerization further reduces resource and time consumption, and, subsequently, application latency. Since edge nodes are typically resource and energy-constrained, optimizing offloading decisions and managing edge energy consumption is crucial. However, the offloading of containerized tasks has not been thoroughly explored from a practical implementation perspective. This paper proposes an optimization framework for energy-aware offloading of V2X tasks implemented as Kubernetes pods. A weighted utility function is derived based on cumulative pod response time, and an edge-to-cloud offloading decision algorithm (ECODA) is proposed. The system's energy cost model is derived, and a closed-loop repeated reward-based mechanism for CPU adjustment is presented. An energy-aware (EA)-ECODA is proposed to solve the offloading optimization problem while adjusting CPU usage according to energy considerations. Simulations show that ECODA and EA-ECODA outperform first-in, first-served (FIFS) and EA-FIFS in terms of utility, average pod response time, and resource usage, with low computational complexity. Additionally, a real testbed evaluation of a vulnerable road user application demonstrates that ECODA outperforms Kubernetes vertical scaling in terms of service-level delay. Moreover, EA-ECODA significantly improves energy usage utility.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"336-350"},"PeriodicalIF":5.3,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hybrid Serverless Platform for Smart Deployment of Service Function Chains
IF 5.3 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-14 | DOI: 10.1109/TCC.2025.3528573
Sheshadri K R;J. Lakshmi
Cloud Data Centres deal with dynamic changes all the time. Networks, in particular, need to adapt their configurations to changing workloads. Given these expectations, Network Function Virtualization (NFV) using Software Defined Networks (SDNs) has realized the aspect of programmability in networks. NFVs allow network services to be programmed as software entities that can be deployed on commodity clusters in the Cloud. Being software, they inherently carry the ability to be customized to specific tenants’ requirements and thus support multi-tenant variations with ease. However, the ability to scale in line with changing demands with minimal loss of service, while improving resource usage efficiency, remains a challenge. Several recent works in the literature have proposed platforms to realize Virtual Network Functions (VNFs) on the Cloud using service offerings such as Infrastructure as a Service (IaaS) and serverless computing. These approaches are limited by deployment difficulties (configuration and sizing), adaptability to performance requirements (elastic scaling), and changing workload dynamics (scaling and customization). In the current work, we propose a Hybrid Serverless Platform (HSP) to address these identified lacunae. The HSP is implemented using a combination of persistent IaaS and FaaS components. The IaaS components handle the steady state load, whereas the FaaS components activate during the dynamic change associated with scaling to minimize service loss. The HSP controller makes provisioning decisions based on Quality of Service (QoS) rules and flow statistics using an auto recommender, relieving users of sizing decisions for function deployment. HSP controller design exploits data locality in SFC realization, reducing data-transfer times between VNFs. It also enables the usage of application characteristics to offer higher control over SFC deployment. A proof-of-concept realization of HSP is presented in the paper and is evaluated for a representative Service Function Chain (SFC) under a dynamic workload, which shows minimal loss in flowlet service, up to 35% resource savings as compared to a pure IaaS deployment, and up to 55% lower end-to-end times as compared to a baseline FaaS implementation.
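The steady-state/burst split at the heart of HSP can be caricatured as a router: requests go to the persistent IaaS capacity until it saturates, and the overflow is absorbed by FaaS invocations. The capacity model and invoke stubs below are assumptions for illustration, not the HSP controller itself.

```python
# Caricature of the hybrid routing idea: persistent IaaS capacity absorbs the
# steady state, FaaS absorbs the overflow during scaling. Numbers are assumed.

class HybridRouter:
    def __init__(self, iaas_capacity_rps):
        self.capacity = iaas_capacity_rps    # what the persistent VMs can absorb
        self.in_flight = 0

    def route(self, request_id):
        if self.in_flight < self.capacity:
            self.in_flight += 1
            return f"IaaS handles {request_id}"       # steady-state path
        return f"FaaS burst-invokes {request_id}"     # overflow during scaling

    def done(self):
        self.in_flight = max(0, self.in_flight - 1)

router = HybridRouter(iaas_capacity_rps=2)
for rid in range(4):
    print(router.route(rid))   # first two to IaaS, the rest to FaaS
```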
{"title":"Hybrid Serverless Platform for Smart Deployment of Service Function Chains","authors":"Sheshadri K R;J. Lakshmi","doi":"10.1109/TCC.2025.3528573","DOIUrl":"https://doi.org/10.1109/TCC.2025.3528573","url":null,"abstract":"Cloud Data Centres deal with dynamic changes all the time. Networks in particular, need to adapt their configurations to changing workloads. Given these expectations, Network Function Virtualization (NFV) using Software Defined Networks (SDNs) has realized the aspect of programmability in networks. NFVs allow network services to be programmed as software entities that can be deployed on commodity clusters in the Cloud. Being software, they inherently carry the ability to be customized to specific tenants’ requirements and thus support multi-tenant variations with ease. However, the ability to exploit scaling in alignment with changing demands with minimal loss of service, and improving resource usage efficiency still remains a challenge. Several recent works in literature have proposed platforms to realize Virtual Network functions (VNFs) on the Cloud using service offerings such as Infrastructure as a Service (IaaS) and serverless computing. These approaches are limited by deployment difficulties (configuration and sizing), adaptability to performance requirements (elastic scaling), and changing workload dynamics (scaling and customization). In the current work, we propose a Hybrid Serverless Platform (HSP) to address these identified lacunae. The HSP is implemented using a combination of persistent IaaS, and FaaS components. The IaaS components handle the steady state load, whereas the FaaS components activate during the dynamic change associated with scaling to minimize service loss. The HSP controller takes provisioning decisions based on Quality of Service (QoS) rules and flow statistics using an auto recommender, alleviating users of sizing decisions for function deployment. HSP controller design exploits data locality in SFC realization, reducing data-transfer times between VNFs. It also enables the usage of application characteristics to offer higher control over SFC deployment. A proof-of-concept realization of HSP is presented in the paper and is evaluated for a representative Service Function Chain (SFC) for a dynamic workload, which shows minimal loss in flowlet service, up to 35% resource savings as compared to a pure IaaS deployment and up to 55% lower end-to-end times as compared to a baseline FaaS implementation.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"351-368"},"PeriodicalIF":5.3,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CARL: Cost-Optimized Online Container Placement on VMs Using Adversarial Reinforcement Learning
IF 5.3 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-13 | DOI: 10.1109/TCC.2025.3528446
Prathamesh Saraf Vinayak;Saswat Subhajyoti Mallick;Lakshmi Jagarlamudi;Anirban Chakraborty;Yogesh Simmhan
Containerization has become popular for the deployment of applications on public clouds. Large enterprises may host 100s of applications on 1,000s of containers that are placed onto Virtual Machines (VMs). Such placement decisions happen continuously as applications are updated by DevOps pipelines that deploy the containers. Managing the placement of container resource requests onto the available capacities of VMs needs to be cost-efficient. This is well-studied, and usually modelled as a multi-dimensional Vector Bin-packing Problem (VBP). Many heuristics, and recently machine learning approaches, have been developed to solve this NP-hard problem for real-time decisions. We propose CARL, a novel approach to solve VBP through Adversarial Reinforcement Learning (RL) for cost minimization. It mimics the placement behavior of an offline semi-optimal VBP solver (teacher), while automatically learning a reward function for reducing the VM costs which outperforms the teacher. It requires limited historical container workload traces to train, and is resilient to changes in the workload distribution during inferencing. We extensively evaluate CARL on workloads derived from realistic traces from Google and Alibaba for the placement of 5k–10k container requests onto 2k–8k VMs, and compare it with classic heuristics and state-of-the-art RL methods. (1) CARL is fast, e.g., making placement decisions at ≈1,900 requests/sec onto 8,900 candidate VMs. (2) It is efficient, achieving ≈16% lower VM costs than classic and contemporary RL methods. (3) It is robust to changes in the workload, offering competitive results even when the resource needs or inter-arrival time of the container requests skew from the training workload.
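A first-fit-decreasing packer is representative of the kind of offline, semi-optimal VBP solver that CARL imitates as its teacher. The sketch below places (vCPU, memory) requests into the first VM with room, opening new VMs as needed; the sort key and two-dimensional capacity model are assumptions of this illustration.

```python
# First-fit-decreasing for two-dimensional vector bin packing: the flavour of
# teacher placement policy described in the abstract. Details here are assumed.
import numpy as np

def first_fit_decreasing(requests, vm_capacity):
    """Return (placement[i] = VM index for request i, number of VMs opened)."""
    vms = []                                    # remaining capacity per open VM
    order = sorted(range(len(requests)), key=lambda i: -sum(requests[i]))
    placement = [None] * len(requests)
    for i in order:
        req = np.asarray(requests[i], dtype=float)
        for j, free in enumerate(vms):
            if np.all(free >= req):             # fits in an already-open VM
                vms[j] = free - req
                placement[i] = j
                break
        else:                                   # nothing fits: open a new VM
            vms.append(np.asarray(vm_capacity, dtype=float) - req)
            placement[i] = len(vms) - 1
    return placement, len(vms)

reqs = [(2, 4), (1, 1), (4, 8), (2, 2)]         # (vCPU, GiB) container requests
print(first_fit_decreasing(reqs, vm_capacity=(4, 8)))
```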
{"title":"CARL: Cost-Optimized Online Container Placement on VMs Using Adversarial Reinforcement Learning","authors":"Prathamesh Saraf Vinayak;Saswat Subhajyoti Mallick;Lakshmi Jagarlamudi;Anirban Chakraborty;Yogesh Simmhan","doi":"10.1109/TCC.2025.3528446","DOIUrl":"https://doi.org/10.1109/TCC.2025.3528446","url":null,"abstract":"Containerization has become popular for the deployment of applications on public clouds. Large enterprises may host 100 s of applications on 1000 s containers that are placed onto Virtual Machines (VMs). Such placement decisions happen continuously as applications are updated by DevOps pipelines that deploy the containers. Managing the placement of container resource requests onto the available capacities of VMs needs to be cost-efficient. This is well-studied, and usually modelled as a multi-dimensional Vector Bin-packing Problem (VBP). Many heuristics, and recently machine learning approaches, have been developed to solve this NP-hard problem for real-time decisions. We propose CARL, a novel approach to solve VBP through Adversarial Reinforcement Learning (RL) for cost minimization. It mimics the placement behavior of an offline semi-optimal VBP solver (teacher), while automatically learning a reward function for reducing the VM costs which out-performs the teacher. It requires limited historical container workload traces to train, and is resilient to changes in the workload distribution during inferencing. We extensively evaluate CARL on workloads derived from realistic traces from Google and Alibaba for the placement of 5 k–10 k container requests onto 2 k–8 k VMs, and compare it with classic heuristics and state-of-the-art RL methods. (1) CARL is <i>fast</i>, e.g., making placement decisions at <inline-formula><tex-math>$approx 1900$</tex-math></inline-formula> requests/sec onto 8,900 candidate VMs. (2) It is <i>efficient</i>, achieving <inline-formula><tex-math>$approx 16%$</tex-math></inline-formula> lower VM costs than classic and contemporary RL methods. (3) It is <i>robust</i> to changes in the workload, offering competitive results even when the resource needs or inter-arrival time of the container requests skew from the training workload.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"321-335"},"PeriodicalIF":5.3,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ByteTuning: Watermark Tuning for RoCEv2
IF 5.3 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-03 | DOI: 10.1109/TCC.2025.3525496
Lizhuang Tan;Zhuo Jiang;Kefei Liu;Haoran Wei;Pengfei Huo;Huiling Shi;Wei Zhang;Wei Su
RDMA over Converged Ethernet v2 (RoCEv2) is one of the most popular high-speed datacenter networking solutions. "Watermark" is the general term for the various trigger and release thresholds of RoCEv2 flow control protocols, and its reasonable configuration is an important factor affecting RoCEv2 performance. In this paper, we propose ByteTuning, a centralized watermark tuning system for RoCEv2. First, three real cases of network performance degradation caused by non-optimal or improper watermark configuration are reported, and the network performance results of different watermark configurations in three typical scenarios are traversed, indicating the necessity of watermark tuning. Then, based on the RDMA Fluid model, the influence of the watermark on RoCEv2 performance is modeled and evaluated. Next, the design of ByteTuning is introduced, which includes three mechanisms: 1) using a simulated annealing algorithm to make the real-time watermark converge to a near-optimal configuration, 2) using network telemetry to optimize the feedback overhead, and 3) compressing the search space to improve tuning efficiency. Finally, we validate the performance of ByteTuning in multiple real datacenter networking environments, and the results show that ByteTuning outperforms existing solutions.
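The simulated-annealing mechanism named in the abstract can be sketched over a single watermark threshold: perturb the threshold, keep improvements, and occasionally accept regressions early on so the search escapes local optima. The measure_throughput stub stands in for the telemetry feedback; the objective shape, bounds, and cooling schedule are assumptions of this sketch.

```python
# Simulated-annealing sketch over one watermark threshold. measure_throughput
# is a stand-in for telemetry; objective, bounds, and schedule are assumed.
import math
import random

def measure_throughput(watermark):            # stub for telemetry feedback
    return -abs(watermark - 170) + 100        # assumed peak near 170 KB

def anneal(lo=0, hi=512, steps=200, temp0=50.0):
    current = random.randint(lo, hi)
    best, best_score = current, measure_throughput(current)
    for step in range(steps):
        temp = temp0 * (1 - step / steps) + 1e-9        # cooling schedule
        candidate = min(hi, max(lo, current + random.randint(-32, 32)))
        delta = measure_throughput(candidate) - measure_throughput(current)
        # accept improvements always, regressions with probability exp(delta/T)
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
            if measure_throughput(current) > best_score:
                best, best_score = current, measure_throughput(current)
    return best

print("near-optimal watermark:", anneal(), "KB")
```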
{"title":"ByteTuning: Watermark Tuning for RoCEv2","authors":"Lizhuang Tan;Zhuo Jiang;Kefei Liu;Haoran Wei;Pengfei Huo;Huiling Shi;Wei Zhang;Wei Su","doi":"10.1109/TCC.2025.3525496","DOIUrl":"https://doi.org/10.1109/TCC.2025.3525496","url":null,"abstract":"RDMA over Converged Ethernet v2 (RoCEv2) is one of the most popular high-speed datacenter networking solutions. Watermark is the general term for various trigger and release thresholds of RoCEv2 flow control protocols, and its reasonable configuration is an important factor affecting RoCEv2 performance. In this paper, we propose ByteTuning, a centralized watermark tuning system for RoCEv2. First, three real cases of network performance degradation caused by non-optimal or improper watermark configuration are reported, and the network performance results of different watermark configurations in three typical scenarios are traversed, indicating the necessity of watermark tuning. Then, based on the RDMA Fluid model, the influence of watermark on the RoCEv2 performance is modeled and evaluated. Next, the design of the ByteTuning is introduced, which includes three mechanisms. They are 1) using simulated annealing algorithm to make the real-time watermark converge to the near-optimal configuration, 2) using network telemetry to optimize the feedback overhead, 3) compressing the search space to improve the tuning efficiency. Finally, We validate the performance of ByteTuning in multiple real datacenter networking environments, and the results show that ByteTuning outperforms existing solutions.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"303-320"},"PeriodicalIF":5.3,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0