
Latest Publications in IEEE Transactions on Cloud Computing

psvCNN: A Zero-Knowledge CNN Prediction Integrity Verification Strategy
IF 6.5, CAS Tier 2 (Computer Science), Q1 Computer Science. Pub Date: 2024-01-05. DOI: 10.1109/TCC.2024.3350233
Yongkai Fan;Binyuan Xu;Linlin Zhang;Gang Tan;Shui Yu;Kuan-Ching Li;Albert Zomaya
Model prediction based on machine learning is provided as a service in cloud environments, but verifying that the model prediction service has been faithfully executed remains a critical challenge. Although zero-knowledge proof techniques can in principle solve the integrity verification problem, when applied to the prediction integrity of large privacy-preserving Convolutional Neural Networks (CNNs), the heavy proof burden makes them impractical. In this research, we present psvCNN (a parallel splitting zero-knowledge technique for integrity verification). The psvCNN scheme improves the utilization of computational resources in CNN prediction integrity proving through an independent splitting design. Using a convolutional kernel-based model splitting design and an underlying zero-knowledge succinct non-interactive argument of knowledge, psvCNN builds parallelizable zero-knowledge proof circuits for CNN prediction. Furthermore, psvCNN introduces an updated Freivalds algorithm for a faster integrity verification process. Experiments show that psvCNN is practical and efficient in terms of proving time and storage: it generates a prediction integrity proof with a proof size of 1.2 MB in 7.65 s for the structurally complex CNN model VGG16, and its proving time is 3765 times faster than the latest zk-SNARK-based non-interactive method vCNN and 12 times faster than the latest sumcheck-based interactive technique zkCNN.
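The abstract does not detail the updated Freivalds algorithm, but the classic Freivalds check it builds on verifies a claimed matrix product C = AB in O(n^2) time per round instead of recomputing AB in O(n^3), which is the source of this kind of speedup. A minimal illustrative sketch (not the paper's variant):

```python
import random

def freivalds_check(A, B, C, rounds=20):
    """Check whether A @ B == C with high probability in O(n^2) time
    per round (classic Freivalds' test, shown for square matrices)."""
    n = len(C)
    for _ in range(rounds):
        # Random 0/1 vector r: compare A(Br) against Cr without forming AB.
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # a mismatch is conclusive: C != A @ B
    return True  # a wrong C survives each round with probability <= 1/2
```

Each extra round halves the chance that an incorrect product is accepted, so a few dozen rounds suffice in practice.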
Citations: 0
VSA-SD: A Service Discovery Method Based on Vector Symbol Architecture for Low-Cost IoT System Development
IF 6.5, CAS Tier 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-12-19. DOI: 10.1109/TCC.2023.3344512
Haiming Chen;Lei Wang;Wei Qin;Xinyan Zhou;Li Cui
In recent years, with the widening application of the Internet of Things (IoT), more and more perception services (e.g., air-quality indicator services, road-traffic congestion monitoring services) with different arguments (e.g., data type, source location, creator) will be deployed by dedicated IT-infrastructure service providers so that customized IoT systems can be constructed at low cost by subscription. It is therefore an indispensable step to check, via a discovery method, whether perception services with the required arguments are already available to the IoT system under construction, so as to reduce redundant service deployment. However, designing discovery mechanisms for perception services on resource-constrained IoT devices that are efficient (i.e., high accuracy and low response delay at low overhead), highly robust, and trustworthy is a challenging problem. To solve this problem, we propose a distributed service discovery method, named VSA-SD, based on the Vector Symbolic Architecture (VSA). The method describes services in a distributed manner using hyperdimensional vectors and measures the degree of service matching by computing Hamming distances, thereby achieving service discovery. We implemented VSA-SD in NBUFlow, an IoT task construction and offloading test platform, and evaluated its performance through comprehensive experiments. Results show that VSA-SD outperforms centralized, hybrid, and other distributed service discovery mechanisms in terms of accuracy, response delay, overhead, robustness, trustability, interoperability, and mobility.
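The abstract does not specify VSA-SD's exact encoding, but a common binary-VSA style (bind attribute and value hypervectors with XOR, bundle the pairs by majority vote, match by Hamming distance) illustrates the idea. The dimensionality, attribute names, and encoding below are assumptions for illustration only:

```python
import random

DIM = 10000  # hypervector dimensionality (illustrative choice)

def hv(symbol):
    """Deterministic pseudo-random binary hypervector for a symbol."""
    rng = random.Random(symbol)
    return [rng.randint(0, 1) for _ in range(DIM)]

def bind(a, b):
    """XOR binding of an attribute vector with a value vector."""
    return [x ^ y for x, y in zip(a, b)]

def bundle(vecs):
    """Bitwise majority vote over several vectors (ties round up to 1)."""
    half = len(vecs) / 2
    return [1 if sum(bits) >= half else 0 for bits in zip(*vecs)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def describe(service):
    """Encode an {attribute: value} service description as one hypervector."""
    return bundle([bind(hv(k), hv(v)) for k, v in sorted(service.items())])
```

A query encoded the same way lands close (in Hamming distance) to matching service descriptors and far from unrelated ones, so discovery reduces to a nearest-vector search.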
Citations: 0
Multi-Mode Instance-Intensive Workflow Task Batch Scheduling in Containerized Hybrid Cloud
IF 6.5, CAS Tier 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-12-19. DOI: 10.1109/TCC.2023.3344194
An Liu;Ming Gao;Jiafu Tang
The migration of containerized microservices from virtual machines (VMs) to cloud data centers has become the most advanced deployment technique for large software applications in the cloud. This study investigates the scheduling of instance-intensive workflow (IWF) tasks to be executed in containers on a hybrid cloud when computational resources are limited. Scheduling these IWF tasks becomes complicated when the deployment time of containers, inter-task communication time, and task dependencies are considered simultaneously, particularly when a task can choose among multi-mode executions thanks to the container's flexible computational resource allocation. We propose a batch scheduling strategy (BSS) for the IWF task scheduling problem. The BSS prioritizes, with a certain probability, the execution of IWF tasks with high repetition rates, and records the virtual machines and modes selected for task execution, which reduces data transfer time and the randomness of computation. Based on this, we use an improved hybrid algorithm combined with the BSS to solve the multi-mode IWF task scheduling problem. The experimental results demonstrate that employing the BSS can reduce the scheduling time by 6% when the number of workflows increases to 80. Additionally, we tested the effectiveness of all operators in the algorithm, and the results show that each step of the algorithm yields good performance. Compared to similar algorithms in related studies, the overall algorithm achieves a maximum reduction of approximately 18% in the target value.
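A toy sketch of the batching idea as described in the abstract: with some probability, order tasks so high-repetition task types run first, and reuse the recorded (VM, mode) choice for repeated task types. The task format, probability value, and VM/mode pool are all hypothetical:

```python
import random
from collections import Counter

def batch_schedule(tasks, p_priority=0.8, seed=1):
    """Illustrative sketch of the BSS idea: probabilistic prioritization
    by repetition count plus caching of per-type (VM, mode) choices.
    Task format and the VM/mode pool sizes are hypothetical."""
    rng = random.Random(seed)
    counts = Counter(t["type"] for t in tasks)
    if rng.random() < p_priority:
        # Stable sort: high-repetition types first, original order within a type.
        ordered = sorted(tasks, key=lambda t: -counts[t["type"]])
    else:
        ordered = list(tasks)
    cache = {}  # task type -> (vm, mode) recorded at first execution
    schedule = []
    for t in ordered:
        if t["type"] not in cache:
            cache[t["type"]] = (rng.randrange(4), rng.randrange(2))
        schedule.append((t["id"],) + cache[t["type"]])
    return schedule
```

Reusing the cached choice is what removes per-task placement randomness and lets repeated tasks share already-transferred data.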
Citations: 0
Pay-Per-Proof: Decentralized Outsourced Multi-User PoR for Cloud Storage Payment Using Blockchain
IF 6.5, CAS Tier 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-12-18. DOI: 10.1109/TCC.2023.3343710
Hui Cui;Zhiguo Wan;Tianyu Zhaolu;Huaqun Wang;Atsuko Miyaji
Cloud computing has been widely applied to data storage, but it is not armed with an efficient integrity-check mechanism that lets users learn whether their large volumes of data have been kept intact by the cloud. The concept of proofs of retrievability (PoR) was introduced to address this issue by enabling users to check the integrity of the data they store in the cloud. However, PoR requires users to regularly send queries to the cloud, and its integrity-check method cannot be extended to share the verification responsibility in the multi-user setting where different users store the same data in the cloud. With such concerns in mind, we put forth a notion called outsourced multi-user proofs of retrievability ($\mathtt{OMTPoR}$), which allows users whose identical data is stored by the cloud to share the information needed for the integrity check, with a third party required to regularly check data integrity on behalf of users using the shared information. We give a concrete construction of $\mathtt{OMTPoR}$ based on the homomorphic property of an existing scheme and analyze its security. To enforce honest integrity checks, we build the concrete $\mathtt{OMTPoR}$ construction over the blockchain using smart contracts to guarantee the honesty of participants, yielding a decentralized outsourced multi-user PoR solution that uses blockchain miners as the third parties. Furthermore, our solution enables the cloud server to obtain payment for the storage service once the PoR is verified by the miners. We fully implement the $\mathtt{OMTPoR}$ scheme over the blockchain to evaluate its performance, which demonstrates obvious superiority over traditional PoR schemes without the detection of data duplication.
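The paper's homomorphic construction is not given in the abstract, but the generic PoR flavor it extends can be sketched as a challenge-response protocol: the verifier spot-checks randomly chosen data blocks against tags prepared before upload. A simple keyed hash stands in here for the homomorphic tags; this is a simplified illustration, not the OMTPoR scheme:

```python
import hashlib
import random

BLOCK_SIZE = 64  # illustrative block size in bytes

def split_blocks(data):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def make_tags(data, key):
    """Per-block tags the data owner keeps (or shares) before uploading."""
    return [hashlib.sha256(key + b).hexdigest() for b in split_blocks(data)]

def challenge(n_blocks, k, seed):
    """Verifier picks k random block indices to spot-check."""
    return random.Random(seed).sample(range(n_blocks), k)

def prove(stored_data, indices):
    """Cloud returns the challenged blocks as its proof."""
    blocks = split_blocks(stored_data)
    return [blocks[i] for i in indices]

def verify(tags, indices, proof, key):
    return all(hashlib.sha256(key + b).hexdigest() == tags[i]
               for i, b in zip(indices, proof))
```

Random sampling keeps each audit cheap while catching large-scale data loss with high probability; the blockchain-based design above moves the verifier role from the users to smart contracts and miners.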
Citations: 0
Efficient Approximation Algorithms for Scheduling Coflows With Total Weighted Completion Time in Identical Parallel Networks
IF 6.5, CAS Tier 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-12-08. DOI: 10.1109/TCC.2023.3340729
Chi-Yeh Chen
This article addresses the scheduling problem of coflows in identical parallel networks, a well-known $\mathcal{NP}$-hard problem. We consider both flow-level and coflow-level scheduling problems. In the flow-level scheduling problem, the flows within a coflow can be transmitted through different network cores, while in the coflow-level scheduling problem, the flows within a coflow must be transmitted through the same network core. The key difference between the two problems lies in their scheduling granularity. Previous approaches relied on linear programming to determine the scheduling order; in this article, we improve solving efficiency by utilizing the primal-dual method. For the flow-level scheduling problem, we propose an approximation algorithm that achieves approximation ratios of $6-\frac{2}{m}$ and $5-\frac{2}{m}$ for arbitrary and zero release times, respectively, where $m$ represents the number of network cores. Additionally, for the coflow-level scheduling problem, we introduce an approximation algorithm that achieves approximation ratios of $4m+1$ and $4m$ for arbitrary and zero release times, respectively. The algorithm presented in this article has practical applications in data centers, such as those operated by Google or Facebook. The simulated results demonstrate the superior performance of our algorithms compared to previous approaches, emphasizing their practical utility.
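To make the objective concrete: the total weighted completion time of a schedule is $\sum_j w_j C_j$, where $C_j$ is the finish time of job $j$ and $w_j$ its weight. The snippet below computes this objective for plain list scheduling of independent jobs on $m$ identical cores in Smith's-rule order (smallest $p_j/w_j$ first); it only illustrates the objective, not the paper's primal-dual coflow algorithms:

```python
import heapq

def total_weighted_completion_time(jobs, m):
    """Greedy list scheduling of (processing_time, weight) jobs on m
    identical cores in Smith's-rule order; returns sum of w_j * C_j."""
    order = sorted(jobs, key=lambda j: j[0] / j[1])  # smallest p/w first
    cores = [0.0] * m              # min-heap of next-free times per core
    heapq.heapify(cores)
    total = 0.0
    for p, w in order:
        start = heapq.heappop(cores)   # earliest-free core
        finish = start + p
        total += w * finish
        heapq.heappush(cores, finish)
    return total
```

On a single machine, Smith's rule is optimal for this objective; coflow scheduling is much harder because each coflow finishes only when all of its constituent flows do.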
Citations: 0
Optimizing Cloud Data Lake Queries With a Balanced Coverage Plan
IF 6.5, CAS Tier 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-12-05. DOI: 10.1109/TCC.2023.3339208
Grisha Weintraub;Ehud Gudes;Shlomi Dolev;Jeffrey D. Ullman
Cloud data lakes have emerged as an inexpensive solution for storing very large amounts of data. The main idea is the separation of the compute and storage layers: cheap cloud storage holds the data, while compute engines run analytics on it in "on-demand" mode. However, to perform any computation on the data in this architecture, the data must be moved from the storage layer to the compute layer over the network for each calculation, which hurts performance and requires huge network bandwidth. In this paper, we study different approaches to improving query performance in a data lake architecture. We define an optimization problem that can provably speed up data lake queries, prove that the problem is NP-hard, and suggest heuristic approaches. We then demonstrate through experiments that our approach is feasible and efficient (up to a 30× improvement in query execution time on the TPC-H benchmark).
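The paper's balanced-coverage optimization is NP-hard; as background for what heuristic approaches to such coverage problems typically look like, here is the classic greedy set-cover heuristic, which repeatedly picks the option covering the most still-uncovered items. This is a generic stand-in, not the paper's algorithm:

```python
def greedy_cover(universe, candidates):
    """Greedy set cover: candidates maps a name to the set of items it
    covers. Returns the chosen names and any items left uncoverable."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Candidate with the largest marginal gain over uncovered items.
        name = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        gain = candidates[name] & uncovered
        if not gain:
            break  # no candidate covers any remaining item
        chosen.append(name)
        uncovered -= gain
    return chosen, uncovered
```

Greedy set cover carries a ln(n) approximation guarantee, which is why such heuristics are a natural first resort for NP-hard coverage-style planning.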
Citations: 0
Integrated Computation Offloading, UAV Trajectory Control, Edge-Cloud and Radio Resource Allocation in SAGIN
IF 6.5, CAS Tier 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-12-05. DOI: 10.1109/TCC.2023.3339394
Minh Dat Nguyen;Long Bao Le;André Girard
In this article, we study the computation offloading problem in hybrid edge-cloud based space-air-ground integrated networks (SAGIN), where partial computation offloading, unmanned aerial vehicle (UAV) trajectory control, user scheduling, edge-cloud computation, radio resource allocation, and admission control are jointly optimized. Specifically, the considered SAGIN employs multiple UAV-mounted edge servers with controllable UAV trajectories and a cloud server that ground users (GUs) can reach via multi-hop low-earth-orbit (LEO) satellite communications. The design aims to minimize the weighted energy consumption of the GUs and UAVs while satisfying the maximum delay constraints of the underlying computation tasks. To tackle the resulting non-convex mixed-integer non-linear optimization problem, we use an alternating optimization approach in which we iteratively solve four sub-problems, namely user scheduling, partial offloading control and bit allocation over time slots, computation resource and bandwidth allocation, and multi-UAV trajectory control, until convergence. Moreover, feasibility verification and admission control strategies are proposed to handle overloaded network scenarios. Furthermore, the successive convex approximation (SCA) method is employed to convexify and solve the non-convex computation resource and bandwidth allocation and UAV trajectory control sub-problems. Via extensive numerical studies, we illustrate the effectiveness of our proposed design compared to baselines.
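The alternating-optimization pattern described above (fix all but one block of variables, solve that sub-problem, move to the next block, repeat until convergence) can be shown on a tiny convex toy objective; the four coupled sub-problems in the paper are of course far richer:

```python
def alternating_minimize(iters=100):
    """Alternating optimization on the convex toy objective
    f(x, y) = (x - 1)**2 + (y - 2)**2 + (x - y)**2:
    fix y and minimize over x in closed form, then fix x and minimize
    over y, and repeat. The global optimum is (x, y) = (4/3, 5/3)."""
    x = y = 0.0
    for _ in range(iters):
        x = (1 + y) / 2   # argmin over x with y fixed (from df/dx = 0)
        y = (2 + x) / 2   # argmin over y with x fixed (from df/dy = 0)
    return x, y
```

Each block update cannot increase the objective, so on convex problems the iterates converge; for the non-convex problems in the paper, convergence is to a stationary point rather than a guaranteed global optimum.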
引用次数: 0
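The alternating optimization described in this abstract (fix all but one block of variables, solve that sub-problem in isolation, and repeat until the objective stops improving) can be illustrated with a toy two-block quadratic. The objective and closed-form block updates below are invented for illustration only; they are not the paper's actual sub-problem solvers.

```python
def alternating_minimize(x=0.0, y=0.0, tol=1e-9, max_iters=1000):
    """Toy two-block alternating minimization of
    f(x, y) = x^2 + y^2 + x*y - 3x - 4y.

    Each step fixes one block and solves the other in closed form,
    mirroring the iterate-until-convergence structure of the four
    sub-problems (scheduling, offloading, resources, trajectories)."""
    f = lambda x, y: x * x + y * y + x * y - 3 * x - 4 * y
    prev = f(x, y)
    for _ in range(max_iters):
        x = (3 - y) / 2          # argmin over x with y fixed
        y = (4 - x) / 2          # argmin over y with x fixed
        cur = f(x, y)
        if prev - cur < tol:     # objective stopped improving
            return x, y
        prev = cur
    return x, y
```

For this convex quadratic the loop converges to the joint optimum (x = 2/3, y = 5/3); in the non-convex SAGIN problem, alternating optimization only guarantees convergence to a stationary point, which is why the authors combine it with SCA for the non-convex sub-problems.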
A Publicly Verifiable Outsourcing Matrix Computation Scheme Based on Smart Contracts
IF 6.5 CAS Tier 2 (Computer Science) Q1 Computer Science Pub Date : 2023-11-30 DOI: 10.1109/TCC.2023.3337848
Hao Wang;Chunpeng Ge;Lu Zhou;Zhe Liu;Dongwan Lan;Xiaozhen Lu;Danni Jiang
Matrix computation is a crucial mathematical tool in scientific fields such as Artificial Intelligence and cryptographic computation. However, it is difficult for resource-limited devices to execute large-scale matrix computations independently. Outsourcing matrix computation (OMC) is a promising solution that engages a cloud server to process complicated matrix computations for resource-limited devices. However, existing OMC schemes lack public verifiability, so resource-limited devices cannot verify the correctness of the computing results. In this paper, for the first time, we propose a smart contract-based OMC scheme that publicly verifies outsourced matrix computation results. In our scheme, a smart contract running over the blockchain serves as a decentralized trusted third party to ensure the correctness of the matrix computation results. To overcome the Verifier's Dilemma in the blockchain, we present a blockchain-compatible matrix verification method that decreases the time complexity from $O(n^{3})$ to $O(n^{2})$ by utilizing a blinding method with check-digit and padding matrices. Verification thus becomes a comparison of whether two results are identical, rather than a naive re-computation. Finally, we perform experiments on Ethereum and ARM Cortex-M4 and give an in-depth analysis and performance evaluation, demonstrating our scheme's practicability and effectiveness.
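The O(n³) to O(n²) reduction rests on a Freivalds-style check: instead of recomputing A·B, the verifier multiplies both sides by a random vector and compares two matrix-vector products. The sketch below shows the classic, unblinded, off-chain version of this idea; the paper's check-digit and padding-matrix blinding and the smart-contract integration are not reproduced here.

```python
import numpy as np

def freivalds_verify(A, B, C, rounds=20, rng=None):
    """Probabilistically check that A @ B == C in O(n^2) per round.

    A wrong C passes a single round with probability at most 1/2, so
    `rounds` independent trials bound the false-accept probability by
    2**-rounds. A mismatch in any round is conclusive proof of error."""
    rng = np.random.default_rng(rng)
    n = C.shape[1]
    for _ in range(rounds):
        r = rng.integers(0, 2, size=(n, 1))   # random 0/1 vector
        # Two O(n^2) matrix-vector products replace the O(n^3) product.
        if not np.array_equal(A @ (B @ r), C @ r):
            return False   # definitely wrong
    return True            # correct with probability >= 1 - 2**-rounds
```

Passing a seed via `rng` makes the check reproducible; in a deployed verifier the randomness must instead be unpredictable to the prover, which is one reason the paper routes verification through a blockchain smart contract.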
{"title":"A Publicly Verifiable Outsourcing Matrix Computation Scheme Based on Smart Contracts","authors":"Hao Wang;Chunpeng Ge;Lu Zhou;Zhe Liu;Dongwan Lan;Xiaozhen Lu;Danni Jiang","doi":"10.1109/TCC.2023.3337848","DOIUrl":"https://doi.org/10.1109/TCC.2023.3337848","url":null,"abstract":"Matrix computation is a crucial mathematical tool in scientific fields such as Artificial Intelligence and Cryptographic computation. However, it is difficult for resource-limited devices to execute large-scale matrix computations independently. Outsourcing matrix computation (OMC) is a promising solution that engages a cloud server to process complicated matrix computations for resource-limited devices. However, existing OMC schemes lack public verifiability, and thus resource-limited devices cannot verdict the correctness of the computing results. In this paper, for the first time, we propose a smart contract-based OMC scheme that publicly verifies the outsourcing matrix computation results. In our scheme, a smart contract running over the blockchain serves as a decentralized trusted third party to ensure the correctness of the matrix computation results. To overcome the Verifier's Dilemma in the blockchain, we present a blockchain-compatible matrix verification method that decreases the time complexity from \u0000<inline-formula><tex-math>$O(n^{3})$</tex-math></inline-formula>\u0000 to \u0000<inline-formula><tex-math>$O(n^{2})$</tex-math></inline-formula>\u0000 by utilizing a blinding method with the check digit and padding matrices. We make the verification become the form of comparing whether two results are identical rather than naive re-computing. 
Finally, we perform experiments on Ethereum and ARM Cortex-M4 and give in-depth analysis and performance evaluation, demonstrating our scheme's practicability and effectiveness.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":null,"pages":null},"PeriodicalIF":6.5,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140063570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Stochastic Approach for Scheduling AI Training Jobs in GPU-Based Systems
IF 6.5 CAS Tier 2 (Computer Science) Q1 Computer Science Pub Date : 2023-11-24 DOI: 10.1109/TCC.2023.3336540
Federica Filippini;Jonatha Anselmi;Danilo Ardagna;Bruno Gaujal
In this work, we optimize the scheduling of Deep Learning (DL) training jobs from the perspective of a Cloud Service Provider running a data center, which must efficiently select resources for the execution of each job to minimize the average energy consumption while satisfying time constraints. To model the problem, we first develop a Mixed-Integer Non-Linear Programming formulation. Unfortunately, computing an optimal solution is prohibitively expensive, and to overcome this difficulty, we design a heuristic STochastic Scheduler (STS). Exploiting the probability distribution of early termination, STS determines how to adapt the resource assignment during the execution of the jobs to minimize the expected energy cost while meeting the job due dates. The results of an extensive experimental evaluation show that STS guarantees significantly better results than other methods in the literature, effectively avoiding due-date violations and yielding an average total cost reduction of between 32% and 80%. We also prove the applicability of our method in real-world scenarios, as obtaining optimal schedules for systems of up to 100 nodes and 400 concurrent jobs requires less than 5 seconds. Finally, we evaluate the effectiveness of GPU sharing, i.e., running multiple jobs on a single GPU. The obtained results demonstrate that, depending on the workload and GPU memory, this further reduces the energy cost by 17–29% on average.
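The key quantity STS optimizes is the expected energy of a candidate resource assignment under the early-termination distribution. A minimal sketch of that expectation follows, with invented numbers and a made-up two-configuration choice; the paper's actual model, constraints, and heuristic are far richer. Here `p_stop[t]` is assumed to be the probability that the job terminates in slot t.

```python
def expected_energy(power_w, p_stop, slot_s=3600.0):
    """Expected energy in joules when the job ends in slot t (0-based)
    with probability p_stop[t], drawing power_w watts while running."""
    return sum(p * power_w * (t + 1) * slot_s for t, p in enumerate(p_stop))

def pick_assignment(candidates, deadline_slots):
    """candidates: name -> (power_w, p_stop). Among assignments whose
    termination mass lies entirely within the deadline, pick the one
    with the lowest expected energy."""
    feasible = {
        name: (p_w, dist) for name, (p_w, dist) in candidates.items()
        if len(dist) <= deadline_slots
    }
    return min(feasible, key=lambda n: expected_energy(*feasible[n]))

# Hypothetical configurations: slower/low-power vs faster/high-power.
candidates = {
    "1xGPU": (250.0, [0.1, 0.3, 0.6]),
    "2xGPU": (500.0, [0.5, 0.5]),
}
```

With a 3-slot deadline the slower single-GPU assignment wins on expected energy, while a 2-slot deadline forces the faster, more power-hungry one. This is the trade-off STS navigates per job.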
{"title":"A Stochastic Approach for Scheduling AI Training Jobs in GPU-Based Systems","authors":"Federica Filippini;Jonatha Anselmi;Danilo Ardagna;Bruno Gaujal","doi":"10.1109/TCC.2023.3336540","DOIUrl":"https://doi.org/10.1109/TCC.2023.3336540","url":null,"abstract":"In this work, we optimize the scheduling of Deep Learning (DL) training jobs from the perspective of a Cloud Service Provider running a data center, which efficiently selects resources for the execution of each job to minimize the average energy consumption while satisfying time constraints. To model the problem, we first develop a Mixed-Integer Non-Linear Programming formulation. Unfortunately, the computation of an optimal solution is prohibitively expensive, and to overcome this difficulty, we design a heuristic STochastic Scheduler (STS). Exploiting the probability distribution of early termination, STS determines how to adapt the resource assignment during the execution of the jobs to minimize the expected energy cost while meeting the job due dates. The results of an extensive experimental evaluation show that STS guarantees significantly better results than other methods in the literature, effectively avoiding due date violations and yielding a percentage total cost reduction between 32% and 80% on average. We also prove the applicability of our method in real-world scenarios, as obtaining optimal schedules for systems of up to 100 nodes and 400 concurrent jobs requires less than 5 seconds. Finally, we evaluated the effectiveness of GPU sharing, i.e., running multiple jobs in a single GPU. 
The obtained results demonstrate that depending on the workload and GPU memory, this further reduces the energy cost by 17–29% on average.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":null,"pages":null},"PeriodicalIF":6.5,"publicationDate":"2023-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140063627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing the Micro-Architectural Performance of the Current and Emerging Edge Infrastructure
IF 6.5 CAS Tier 2 (Computer Science) Q1 Computer Science Pub Date : 2023-11-17 DOI: 10.1109/TCC.2023.3333813
Jianda Wang;Zhen Wang;Weili Wu;Yang Hu
Network Function Virtualization (NFV) is an essential technology for delivering the flexibility required by next-generation mobile systems. In this article, we carry out a thorough micro-architectural performance investigation of the NFV-enabled edge virtual Radio Access Network (vRAN) and the emerging 5G New Radio (NR) platform to unveil the main micro-architectural bottlenecks of the next-generation network's vRAN system. Based on our experimental results, we find that a high core-bound fraction hinders the processing speed of the vRAN and 5G NR platforms. Several solutions alleviating the vRAN's core bound are proposed to accelerate the vRAN system's processing speed. Besides, we observe that the current co-location strategy cannot maximize the COTS servers' CPU utilization while also eliminating the system hang-ups caused by CPU resource contention. We fill this gap by proposing an optimized co-location strategy based on our observed vRAN co-location characterization. Finally, we find that on modern hyper-threading-enabled COTS servers, the current core-pinning policy of 5G NR causes L3 cache contention, which leads to severe system hang-ups. A novel thread-management mechanism is proposed to eliminate these hang-ups on hyper-threading-enabled COTS servers.
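One mitigation in the spirit of the thread-management fix above is to avoid co-scheduling latency-critical vRAN threads on sibling hyper-threads of the same physical core. The sketch below is a hypothetical illustration of that selection step only: the topology map and function name are invented, and on Linux the real logical-CPU-to-core map would come from `/sys/devices/system/cpu/cpu*/topology`, with the pinning itself done via CPU affinity calls.

```python
def one_thread_per_core(topology):
    """topology: logical CPU id -> physical core id.

    Return the logical CPUs to pin to, keeping at most one hyper-thread
    sibling per physical core, so siblings never contend for the same
    core's execution resources and pressure on the shared L3 is reduced."""
    chosen = {}
    for cpu in sorted(topology):
        chosen.setdefault(topology[cpu], cpu)  # first sibling per core wins
    return sorted(chosen.values())

# Example: 4 physical cores with SMT-2; logical CPUs i and i+4 are siblings.
topo = {i: i % 4 for i in range(8)}
```

On this example topology the function selects logical CPUs 0 through 3, one per physical core; the remaining siblings stay free for best-effort work.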
{"title":"Optimizing the Micro-Architectural Performance of the Current and Emerging Edge Infrastructure","authors":"Jianda Wang;Zhen Wang;Weili Wu;Yang Hu","doi":"10.1109/TCC.2023.3333813","DOIUrl":"https://doi.org/10.1109/TCC.2023.3333813","url":null,"abstract":"The Network Function Virtualization (NFV) is the essential technology proposed to tackle the next-generation mobile system’s various flexibility features. In this article, we implement a thorough micro-architectural performance investigation on the NFV-enabled edge virtual Radio Access Network (vRAN) and the emerging 5G new-radio (nr) platform to unveil the main micro-architectural bottlenecks of the next-generation network’s vRAN system. Based on our experimental results, we find that the high core bound hinders the processing speed of the vRAN and 5G nr platforms. Several solutions alleviating the vRAN’s core bound are proposed to accelerate the vRAN system’s processing speed. Besides, we observe that the current co-location strategy cannot maximize the COTS servers’ CPU utilization and meanwhile eliminate the system hang-up caused by CPU resource contention. We fill this gap by proposing an optimized co-location strategy based on our observed vRAN co-location characterization. Finally, we detect that on the modern hyper-threading-enabled COTS servers, the current pin core policy of 5G nr will cause L3 cache contention, which will lead to severe system hang-up. 
A novel threads management mechanism is proposed to eliminate this system hang-up on the hyper-threading-enabled COTS servers.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":null,"pages":null},"PeriodicalIF":6.5,"publicationDate":"2023-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140063640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Transactions on Cloud Computing