
Latest publications from Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... SoCC (Conference)

WedgeDB
Abhishek A. Singh, Faisal Nawab
Wide-area Edge Database (WedgeDB) spans globally and stores data closer to users. We term this the Global Edge Data management problem. In such an environment, aspects such as data storage, retrieval, transaction processing, and protection from malicious actors need to be addressed by any data management system that aims to be a viable solution. Although blockchain technology (both permissioned and permissionless) has provided a way to address these concerns, transaction processing in these environments is still challenging. WedgeDB is an attempt to address these security problems in edge-cloud data systems [2]. The main goals of WedgeDB are to support distributed transaction processing coupled with secure transaction execution. Data stored in WedgeDB is partitioned into clusters, with each cluster handling a unique set of keys; WedgeDB is a collection of such clusters. This is shown in Figure 1, where each cluster Cx stores a unique set of keys. This partitioning scheme allows us to build a distributed transaction model which runs transactions on a subset of clusters in the network. Transactions in WedgeDB are serializable. Clients perform read operations as part of a transaction via read requests, which can be sent to any of the WedgeDB nodes; the read operations are added to the transaction's history. Write operations are cached by the client until commit is called, which sends the transaction object containing the read history and write operations to a WedgeDB node to be committed. Transactions are processed in batches called epochs. Each cluster maintains a leader which receives transactions and groups them into epochs. A cluster in WedgeDB contains 3f + 1 nodes (where f is the number of tolerable faulty nodes), and PBFT [1] is used to attain consensus among the nodes when executing transactions. Keys modified during an epoch are added to a Merkle tree, which is used to verify changes to keys handled by the cluster. During transaction execution, proof of execution is generated in the form of signed data blocks by the nodes in the cluster; at least f + 1 signed messages must be gathered before an epoch can be committed. These data blocks, along with the root of the Merkle tree, are stored in an SMR log where each entry corresponds to an epoch. Transactions that contain keys from different clusters are executed via two-phase commit: during the prepare phase, a remote cluster executes PBFT within its local cluster and checks for dependency violations before moving ahead with the commit phase. A committed epoch may not contain committed transactions, and therefore an additional parameter is used to indicate the last committed epoch. This parameter, combined with the dependency vector, helps detect serializability violations and abort the offending transactions. With WedgeDB, transactions that affect only a few clusters do not require global consensus to commit. Transactions that read keys from a number of c
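The client-side transaction model described here (reads recorded into a history, writes buffered until commit, then the whole transaction object shipped to a node) can be sketched as follows. This is a minimal illustration only; the class and method names, and the in-memory stand-in for a WedgeDB node, are assumptions and not the paper's actual API.

```python
# Minimal sketch of the client-side transaction model described above:
# reads are recorded in a history, writes are cached until commit, and the
# whole transaction object is then shipped to a node. All names here
# (Transaction, FakeNode, WedgeClient) are hypothetical stand-ins.

class Transaction:
    def __init__(self):
        self.read_history = []  # (key, value) pairs observed during the txn
        self.write_set = {}     # buffered writes, sent only at commit time


class FakeNode:
    """In-memory stand-in for a WedgeDB node; a real node batches txns into
    epochs and runs PBFT across the 3f + 1 replicas before committing."""

    def __init__(self):
        self.store = {}

    def read(self, key):
        return self.store.get(key)

    def commit(self, txn):
        # A real cluster would gather >= f + 1 signed blocks, update the
        # Merkle tree, and append the epoch to the SMR log.
        self.store.update(txn.write_set)
        return True


class WedgeClient:
    def __init__(self, node):
        self.node = node  # reads may be sent to any node in the cluster

    def read(self, txn, key):
        value = self.node.read(key)
        txn.read_history.append((key, value))  # recorded for later validation
        return value

    def write(self, txn, key, value):
        txn.write_set[key] = value  # cached client-side until commit

    def commit(self, txn):
        return self.node.commit(txn)  # ships read history + writes together


if __name__ == "__main__":
    client = WedgeClient(FakeNode())
    txn = Transaction()
    client.read(txn, "x")
    client.write(txn, "x", 42)
    print("committed:", client.commit(txn))
```

In the real system, the receiving node's leader would place the transaction into the current epoch and only report commit after PBFT agreement and the required f + 1 signed proofs.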
{"title":"WedgeDB","authors":"Abhishek A. Singh, Faisal Nawab","doi":"10.1145/3357223.3365444","DOIUrl":"https://doi.org/10.1145/3357223.3365444","url":null,"abstract":"Wide-area Edge Database (WedgeDB) span globally and store data closer to the users. We term this the Global Edge Data management problem. In such an environment, aspects such as data storage, retrieval, transaction processing, and protection from malicious actors need to be addressed by any data management system that aims to consider itself a viable solution. Although blockchain technology (both permissioned and permissionless) has provided a way to address these concerns, transaction processing in these environments is still challenging. WedgeDB is an attempt to address these security problems in edge-cloud data systems [2]. The main goals of WedgeDB are to support distributed transaction processing coupled with secure transaction execution. Data stored in WedgeDB is partitioned into clusters with each cluster handling a unique set of keys. WedgeDB is a collection of such clusters. This is shown in figure 1. In figure 1, each cluster Cx stores a unique set of keys. This partitioning scheme allows us to build a distributed transaction model which runs transactions on a subset of clusters in the network. Transactions in WedgeDB are serializable. In WedgeDB, clients perform read operations as part of a transaction via read requests which can be sent to any of the WedgeDB nodes. The read operations are added to the transaction's history. Write operations are cached by the client until a commit is called which sends the transaction object containing the read history and write operations to a WedgeDB node to be committed. Transactions are processed in batches called Epochs. Each cluster maintains a leader which receives transactions and groups them into epochs. A cluster in WedgeDB contains 3f + 1 nodes (where f is the number of tolerable faulty nodes) and PBFT[1] is used to attain consensus among the nodes when executing transactions. Keys modified during an epoch are added to a Merkle tree which is used to verify changes to keys handled by the cluster. During transaction execution, proof of transaction execution is generated in the form of signed data blocks by the nodes in the cluster. At least f + 1 signed messages must be gathered before an epoch can be committed. These data blocks along with the root of the Merkle tree are stored in an SMR log where each entry in the SMR log corresponds to an epoch. Transactions that contain keys from different clusters are executed via two-phase commit. During the prepare phase a remote cluster executes PBFT within its local cluster and checks for dependency violations before moving ahead with the commit phase. Committed epoch may not have committed transactions and therefore an additional parameter is used to indicate the last committed epoch. This parameter combined with the dependency vector help in finding out serializability violations and abort transactions. With WedgeDB, transactions that affect only a few clusters do not require global consensus to commit. Transactions that read keys from a number of c","PeriodicalId":91949,"journal":{"name":"Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... 
SoCC (Conference)","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82473814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Startups That Stand Out from the Cloud
Sarah Guo
The transition of traditional IT to cloud architectures and technologies is a trillion-dollar commercial opportunity, and we're still in the early innings of that shift. Venture-backed startups are competing alongside the big three platform providers to bring cloud technologies to market. What types of companies are VC-investable, what advantages do tiny teams have, and what does leading venture capital firm and cloud investor Greylock Partners look for? This talk will orient an academic audience in the considerations of early technology company-building, outline areas of investing interest, and discuss some common pitfalls for startups emerging from academia.
{"title":"Startups That Stand Out from the Cloud","authors":"Sarah Guo","doi":"10.1145/3357223.3365868","DOIUrl":"https://doi.org/10.1145/3357223.3365868","url":null,"abstract":"The transition of traditional IT to cloud architectures and technologies is a trillion-dollar commercial opportunity, and we're still in the early innings of that shift. Venture-backed startups are competing alongside the big three platform providers to bring cloud technologies to market. What types of companies are VC-investable, what advantages do tiny teams have, and what does leading venture capital firm and cloud investor Greylock Partners look for? This talk will orient an academic audience in the considerations of early technology company-building, outline areas of investing interest, and discuss some common pitfalls for startups emerging from academia.","PeriodicalId":91949,"journal":{"name":"Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... SoCC (Conference)","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85347381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Isopod: An Expressive DSL for Kubernetes Configuration
Charles Xu, Dmitry Ilyevskiy
Kubernetes is an open-source cluster orchestration system for containerized workloads to reduce idiosyncrasy across cloud vendors [2]. Using Kubernetes, Cruise has built a multi-tenant platform with thousands of cores and tens of terabytes of memory. Such a scale is possible in part thanks to the declarative abstraction of Kubernetes, where desired states are described in YAML manifests [5]. However, YAML as a data serialization format is unfit for workload specification. Structured data in YAML are untyped and prone to wrong indents and missing fields. Due to poor meta-programming support, composing YAML with control logic---loops and branches---suffers from YAML fragmentation and indentation tracking (example at bit.ly/yml-hell). Moreover, YAML manifests are often generated by filling a shared template with cluster-specific parameters---the image tag and the replica count might differ in development and production environments. Existing templating tools---Helm [11], Kustomize [9], Kapitan [7] and the like---assume these parameters are statically known and use CLIs to query dynamic ones, such as secrets stored in HashiCorp Vault [10]. Such a scheme is hard to test, since side effects escape through CLIs, and it depends heavily on the execution environment, since CLI versions vary across machines or might not exist at all. Not least, YAML manifests describe the eventual state but not how existing workloads will be affected. Blindly applying a manifest---for example, from a stale version of code---can be disastrous and cause unexpected outages. Isopod presents an alternative configuration paradigm by treating Kubernetes objects as first-class citizens. Without intermediate YAML artifacts, Isopod renders Kubernetes objects directly in Protocol Buffers [8], so they are strongly typed and consumed directly by the Kubernetes API. With Isopod, configurations are scripted in Starlark [3], a Python dialect by Google also used by the Bazel [1] and Buck [4] build systems. To replace CLI dependencies, Isopod extends Starlark with runtime built-ins to access services and utilities such as Vault, the Kubernetes apiserver, a Base64 encoder, and a UUID generator. Isopod uses a separate runtime for unit tests to mock all built-ins, providing test coverage that was not possible before. Isopod is also hermetic and secure. The common reliance on the kubeconfig file for cluster authentication leaks secrets to disk, a security risk if working from a shared host, such as a cluster node or CICD worker. Instead, Isopod builds OAuth2 tokens [6] for the target cluster using the Identity & Access Management (IAM) service of the cloud vendor. Application secrets are stored in Vault and queried at runtime. Hence, no secrets escape to disk. In fact, Isopod prohibits disk IO except for loading Starlark modules from other scripts. No external libraries can be loaded unless explicitly implemented as an Isopod built-in. Distributed as a single binary, Isopod is self-contained with all its dependencies. Finally, Isopod is extensible: Protobuf packages for Kubernetes API groups added in the future can be loaded in the same way, and because built-ins are modular and pluggable, users can easily implement and register new built-ins in the Isopod runtime to support any Kubernetes vendor. Isopod provides many other features, such as object lifecycle management and parallel deployment to multiple clusters, which are not possible when using kubeconfig. In dry-run mode, Isopod shows the expected actions of the current code change as a YAML diff against the live objects in the cluster, to avoid unintended configuration changes. Since adopting Isopod, Cruise's PaaS team has migrated 14 applications and added another 16 without outages or regressions, totaling about 10,000 lines of Starlark. The migration reduced code size by 60% and sped up rollouts by 80%, thanks to code reuse, cluster parallelism, and the removal of YAML intermediaries. All unit tests complete within 10 seconds. Isopod is open source at github.com/cruise-automation/isopod.
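To make the contrast with YAML templating concrete, the sketch below builds a typed deployment object in plain Python with cluster-specific parameters and a runtime secret lookup, in the spirit of an Isopod Starlark config. The dataclasses, the `fake_vault` helper, and the image/path strings are illustrative stand-ins only, not Isopod's actual built-ins or protobuf bindings.

```python
# Conceptual sketch of the configuration style described above: Kubernetes
# objects built as typed values (instead of YAML text), with control logic
# in the language itself and secrets resolved at evaluation time. The types
# and fake_vault are stand-ins for Isopod's real Starlark built-ins and
# protobuf bindings; image names and secret paths are made up.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class Container:
    name: str
    image: str
    env: Dict[str, str] = field(default_factory=dict)


@dataclass
class Deployment:
    name: str
    replicas: int
    container: Container


def fake_vault(path: str) -> str:
    """Stand-in for a runtime secret lookup (Isopod exposes Vault as a built-in)."""
    return {"secret/data/api-token": "s3cr3t"}[path]


def render(env: str) -> Deployment:
    # Branches and loops live in code rather than templated YAML, so the
    # result is typed and unit-testable with mocked built-ins.
    replicas = 5 if env == "production" else 1
    tag = "v1.4.2" if env == "production" else "latest"
    return Deployment(
        name="api",
        replicas=replicas,
        container=Container(
            name="api",
            image=f"registry.example.com/api:{tag}",
            env={"API_TOKEN": fake_vault("secret/data/api-token")},
        ),
    )


if __name__ == "__main__":
    print(render("production"))
```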
{"title":"Isopod: An Expressive DSL for Kubernetes Configuration","authors":"Charles Xu, Dmitry Ilyevskiy","doi":"10.1145/3357223.3365759","DOIUrl":"https://doi.org/10.1145/3357223.3365759","url":null,"abstract":"Kubernetes is an open-source cluster orchestration system for containerized workloads to reduce idiosyncrasy across cloud vendors [2]. Using Kubernetes, Cruise has built a multi-tenant platform with thousands of cores and tens of terabytes of memory. Such a scale is possible in part thanks to the declarative abstraction of Kubernetes, where desired states are described in YAML manifests [5]. However, YAML as a data serialization format is unfit for workload specification. Structured data in YAML are untyped and prone to wrong indents and missing fields. Due to poor meta-programming support, composing YAML with control logic---loops and branches---suffers from YAML fragmentation and indentation tracking (example at bit.ly/yml-hell). Moreover, YAML manifests are often generated by filling a shared template with cluster-specific parameters---the image tag and the replica count might differ in development and production environments. Existing templating tools---Helm [11], Kustomize [9], Kapitan [7] and the likes---assume these parameters are statically known and use CLIs to query dynamic ones, such as secrets stored in HashiCorp Vault [10]. Such scheme is hard to test, since side effects escape through CLIs, and highly depends on the execution environment, since CLI versions vary across machines or might not exist. Not least, YAML manifests describe the eventual state but not how existing workloads will be affected. Blindly applying the manifest---for example, from a stale version of code---can be disastrous and cause unexpected outages. Isopod presents an alternative configuration paradigm by treating Kubernetes objects as first-class citizens. Without intermediate YAML artifacts, Isopod renders Kubernetes objects directly in Protocol Buffers [8], so they are strongly typed and consumed directly by the Kubernetes API. With Isopod, configurations are scripted in Starlark [3], a Python dialect by Google also used by Bazel [1] and Buck [4] build systems. To replace CLI dependencies, Isopod extends Starlark with runtime built-ins to access services and utilities such as Vault, Kubernetes apiserver, Base64 encoder, and UUID generator, etc. Isopod uses a separate runtime for unit tests to mock all built-ins, providing test coverage that was not possible before. Isopod is also hermetic and secure. The common reliance on the kubeconfig file for cluster authentication leaks secrets to disk, a security risk if working from a shared host, such as a cluster node or CICD worker. Instead, Isopod builds Oauth2 tokens [6] to the target cluster using the Identity & Access Management (IAM) service of the cloud vendor. Application secrets are stored in Vault and queried at runtime. Hence, no secrets escape to the disk. In fact, Isopod prohibits disk IO except for loading Starlark modules from other scripts. No external libraries can be loaded unless explicitly implemented as an Isopod built-in. Distributed as a single binary, Isopod is self-contained with all dependen","PeriodicalId":91949,"journal":{"name":"Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... 
SoCC (Conference)","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76127529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
TagSniff
Bertty Contreras-Rojas, Jorge-Arnulfo Quiané-Ruiz, Zoi Kaoudi, Saravanan Thirumuruganathan
Although big data processing has become dramatically easier over the last decade, there has not been matching progress in big data debugging. It is estimated that users spend more than 50% of their time debugging their big data applications, wasting machine resources and taking longer to reach valuable insights. One cannot simply transplant traditional debugging techniques to big data. In this paper, we propose the TagSniff model, which can dramatically simplify data debugging for dataflows (the de-facto programming model for big data). It is based on two primitives -- tag and sniff -- that are flexible and expressive enough to model all common big data debugging scenarios. We then present Snoopy -- a general-purpose monitoring and debugging system based on the TagSniff model. It supports both online and post-hoc debugging modes. Our experimental evaluation shows that Snoopy incurs a very low overhead on the main dataflow, 6% on average, and is highly responsive to system events and user instructions.
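The abstract names two primitives, tag and sniff; the sketch below shows one plausible way to read them, as operators threaded through a dataflow that mark tuples of interest and divert marked tuples to a side channel. The signatures and the `DebugTuple` wrapper are hypothetical, not the API defined in the paper.

```python
# Illustrative sketch of tag/sniff-style debugging primitives. The
# signatures and DebugTuple wrapper are hypothetical; the TagSniff paper
# defines its own API, which may differ.
from dataclasses import dataclass, field
from typing import Any, Callable, List, Set


@dataclass
class DebugTuple:
    value: Any
    tags: Set[str] = field(default_factory=set)


def tag(predicate: Callable[[Any], bool], label: str):
    """Mark tuples of interest (e.g., suspicious records) as they flow by."""
    def op(t: DebugTuple) -> DebugTuple:
        if predicate(t.value):
            t.tags.add(label)
        return t
    return op


def sniff(label: str, sink: List[DebugTuple]):
    """Divert tagged tuples to a side channel (here a list) for inspection."""
    def op(t: DebugTuple) -> DebugTuple:
        if label in t.tags:
            sink.append(t)
        return t
    return op


if __name__ == "__main__":
    suspects: List[DebugTuple] = []
    pipeline = [tag(lambda v: v < 0, "negative"), sniff("negative", suspects)]
    for t in (DebugTuple(v) for v in [3, -1, 7, -5]):
        for op in pipeline:
            t = op(t)
    print([t.value for t in suspects])  # -> [-1, -5]
```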
{"title":"TagSniff","authors":"Bertty Contreras-Rojas, Jorge-Arnulfo Quiané-Ruiz, Zoi Kaoudi, Saravanan Thirumuruganathan","doi":"10.1145/3357223.3362738","DOIUrl":"https://doi.org/10.1145/3357223.3362738","url":null,"abstract":"Although big data processing has become dramatically easier over the last decade, there has not been matching progress over big data debugging. It is estimated that users spend more than 50% of their time debugging their big data applications, wasting machine resources and taking longer to reach valuable insights. One cannot simply transplant traditional debugging techniques to big data. In this paper, we propose the TagSniff model, which can dramatically simplify data debugging for dataflows (the de-facto programming model for big data). It is based on two primitives -- tag and sniff -- that are flexible and expressive enough to model all common big data debugging scenarios. We then present Snoopy -- a general purpose monitoring and debugging system based on the TagSniff model. It supports both online and post-hoc debugging modes. Our experimental evaluation shows that Snoopy incurs a very low overhead on the main dataflow, 6% on average, as well as it is highly responsive to system events and users instructions.","PeriodicalId":91949,"journal":{"name":"Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... SoCC (Conference)","volume":"57 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76958713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Repeatable Oblivious Shuffling of Large Outsourced Data Blocks
Zhilin Zhang, Ke Wang, Weipeng Lin, A. Fu, R. C. Wong
As data outsourcing becomes popular, oblivious algorithms have attracted extensive attention. Their control flow and data access pattern appear to be independent of the input data they compute on. Oblivious algorithms, therefore, are especially suitable for secure processing in outsourced environments. In this work, we focus on oblivious shuffling algorithms that aim to shuffle encrypted data blocks outsourced to a cloud server without disclosing the actual permutation of blocks to the server. Existing oblivious shuffling algorithms suffer from heavy communication and client computation costs when shuffling large blocks, because all outsourced blocks must be downloaded to the client for shuffling or for peeling off extra encryption layers. To help fill this void, we introduce the notion of "repeatable oblivious shuffling", which avoids moving blocks to the client and thus restricts the communication and client computation costs to be independent of the block size. For the first time, we present a concrete construction of repeatable oblivious shuffling using additively homomorphic encryption. A comprehensive evaluation of our construction shows its effective usability in practice for shuffling large blocks.
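A core building block behind constructions of this kind is ciphertext re-randomization under an additively homomorphic scheme: adding an encryption of zero to a ciphertext produces a fresh-looking ciphertext of the same plaintext, so a permuted batch cannot be linked to the original batch by comparing ciphertext bytes. The sketch below shows only that building block with a toy, insecure Paillier instance; it is not the paper's protocol.

```python
# Re-randomize-and-permute building block with a toy Paillier instance.
# Key sizes are deliberately tiny and insecure; this illustrates only the
# additive-homomorphism property, not the paper's shuffling protocol.
import random

P, Q = 10007, 10009              # small primes, toy parameters only
N = P * Q
N2 = N * N
LAM = (P - 1) * (Q - 1)          # lcm(P-1, Q-1) is the textbook choice; this also works
G = N + 1                        # standard generator choice


def encrypt(m: int) -> int:
    r = random.randrange(2, N)
    return (pow(G, m, N2) * pow(r, N, N2)) % N2


def decrypt(c: int) -> int:
    u = pow(c, LAM, N2)
    l = (u - 1) // N             # the "L" function of Paillier decryption
    return (l * pow(LAM, -1, N)) % N


def add(c1: int, c2: int) -> int:
    return (c1 * c2) % N2        # ciphertext product = plaintext sum


def rerandomize(c: int) -> int:
    return add(c, encrypt(0))    # same plaintext, fresh-looking ciphertext


if __name__ == "__main__":
    blocks = [encrypt(m) for m in (11, 22, 33)]
    perm = [2, 0, 1]
    shuffled = [rerandomize(blocks[i]) for i in perm]
    print([decrypt(c) for c in shuffled])   # -> [33, 11, 22]
```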
{"title":"Repeatable Oblivious Shuffling of Large Outsourced Data Blocks","authors":"Zhilin Zhang, Ke Wang, Weipeng Lin, A. Fu, R. C. Wong","doi":"10.1145/3357223.3362732","DOIUrl":"https://doi.org/10.1145/3357223.3362732","url":null,"abstract":"As data outsourcing becomes popular, oblivious algorithms have raised extensive attentions. Their control flow and data access pattern appear to be independent of the input data they compute on. Oblivious algorithms, therefore, are especially suitable for secure processing in outsourced environments. In this work, we focus on oblivious shuffling algorithms that aim to shuffle encrypted data blocks outsourced to a cloud server without disclosing the actual permutation of blocks to the server. Existing oblivious shuffling algorithms suffer from issues of heavy communication cost and client computation cost for shuffling large-sized blocks because all outsourced blocks must be downloaded to the client for shuffling or peeling off extra encryption layers. To help eliminate this void, we introduce the \"repeatable oblivious shuffling\" notation that avoids moving blocks to the client and thus restricts the communication and client computation costs to be independent of the block size. For the first time, we present a concrete construction of repeatable oblivious shuffling using additively homomorphic encryption. The comprehensive evaluation of our construction shows its effective usability in practice for shuffling large-sized blocks.","PeriodicalId":91949,"journal":{"name":"Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... SoCC (Conference)","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79682176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Practical Cloud Workloads for Serverless FaaS
Jeongchul Kim, Kyungyong Lee
Serverless computing is gaining popularity with the Function-as-a-Service (FaaS) execution model. Without incurring the overheads involved in provisioning cloud instances, and with high availability and scalability, serverless computing allows developers to focus on implementing core application logic using other well-developed cloud services. By abstracting the complex resource management task, serverless computing opens new opportunities for cloud service adoption even to non-cloud experts [2]. With this popularity, many research results have been published using the FaaS execution model, including investigations of serverless computing opportunities [1], new serverless applications, function run-time optimization, and public service comparisons. Without a common benchmark suite, previous work evaluated proposed systems using fairly simple FaaS applications, such as micro-benchmarks that exercise specific resources exclusively, e.g., CPU, disk I/O, and network. However, such simple workloads do not represent realistic FaaS applications, and the evaluations might not compare proposed systems appropriately. To overcome the lack of a comprehensive benchmark suite for serverless computing and the FaaS execution model, the authors create FunctionBench, which provides various FaaS workloads that are ready to be executed on public cloud function execution services: AWS Lambda, Google Cloud Functions, and Azure Functions. Since first serving these FaaS workloads, we keep working to expand the supported applications and add scenarios in big-data processing, back-end web applications, and security. To represent big-data applications, we add a MapReduce WordCount workload, which counts the number of occurrences of each word in a given partitioned input dataset from Wikipedia. To cover web back-end applications, we add Chameleon: the application renders a template using the Chameleon module from the Python PIP library to create an HTML table of N rows and M columns, which are provided as input arguments. Another web-related application is a JSON serialize-deserialize module: it performs JSON deserialization on a JSON-encoded string dataset (Awesome JSON Dataset) downloaded from a public object storage service, and then serializes the JSON object again. To represent security-related applications, we add the Pyaes benchmark, which performs private-key-based encryption and decryption; it is a pure-Python implementation of the AES block cipher in CTR mode. We also add a gzip-compression benchmark to represent realistic disk-IO-heavy applications. The degree (High, Medium, Low) of resource usage of the newly proposed applications is summarized in Table 1; please refer to [3] for the comprehensive application list. The proposed FunctionBench provides a variety of FaaS applications in multiple categories, and we are sure that it will enable fair evaluation of new research work in related areas under realistic application scenarios.
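As a flavor of the workload categories listed above, the sketch below writes the JSON serialize/deserialize benchmark in the style of a Python FaaS handler: fetch a JSON-encoded dataset, decode it, and re-encode it, reporting the time spent. This is not FunctionBench's actual code; the event fields and the example URL are placeholders.

```python
# Illustrative FaaS-style handler for a JSON serialize/deserialize workload.
# Not FunctionBench's code; the event schema and dataset URL are placeholders,
# and the (event, context) signature mimics the AWS Lambda Python convention.
import json
import time
import urllib.request


def handler(event, context=None):
    # The dataset would normally live in a public object store; here the
    # location is simply taken from the invocation event.
    url = event["dataset_url"]
    raw = urllib.request.urlopen(url).read().decode("utf-8")

    start = time.time()
    obj = json.loads(raw)          # deserialize the JSON-encoded dataset
    encoded = json.dumps(obj)      # ...and serialize it again
    elapsed = time.time() - start

    return {"bytes": len(encoded), "latency_seconds": elapsed}


if __name__ == "__main__":
    # Example invocation against any small public JSON document.
    print(handler({"dataset_url": "https://httpbin.org/json"}))
```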
{"title":"Practical Cloud Workloads for Serverless FaaS","authors":"Jeongchul Kim, Kyungyong Lee","doi":"10.1145/3357223.3365439","DOIUrl":"https://doi.org/10.1145/3357223.3365439","url":null,"abstract":"Serverless computing is gaining popularity with the Function-asa-Service (FaaS) execution model. Without incurring overheads involved in provisioning cloud instances and with high availability and scalability, serverless computing allows developers to focus on implementation of core application logic using other well-developed cloud services. By abstracting the complex resource management task, serverless computing opens new opportunities for the cloud service adoption even to non-cloud experts [2]. With the popularity, many research results have been published using the FaaS execution model. They include investigation of serverless computing opportunities [1], proposing new serverless applications, function run-time optimization, and public service comparison. Without a common test benchmark suite, authors in the previous work had evaluated proposed systems using fairly simple FaaS applications, such as micro-benchmarks that emphasize specific resources exclusively, e.g., CPU, disk I/O, and network. However, such simple workloads do not represent realistic FaaS system applications, and the evaluations might not compare proposed systems appropriately. To overcome the limitation of lacking a comprehensive benchmark suite for the serverless computing and FaaS execution model, the authors create FunctionBench that provides various FaaS workloads that are ready to be executed on public cloud function execution services - AWS Lambda, Google Cloud Functions, and Azure functions1. Since the inception of serving the FaaS workloads, we keep working to expand the supported applications and add scenarios in big-data processing, back-end web applications, and security. To represent big-data applications, we add a MapReduce WordCount workload, which counts the number of occurrences of each word in a given partitioned input dataset from Wikipedia. To cover web back-end applications, we add Chameleon. The application renders a template using the Chameleon module in Python PIP library to create an HTML table of N rows and M columns that are provided as input arguments. Another web-related application is JSON serialize-deserialize module. The application performs JSON deserialization using a JSON-encoded string dataset (Awesome JSON Dataset) downloaded from a public object storage service, and it serializes the JSON object again. To represent security-related applications, we add Pyaes benchmark that performs private key-based encryption and decryption. It is a pure-Python implementation of the AES block-cipher algorithm in CTR mode. We also add gzip-compression benchmark to represent realistic disk IO-heavy applications. The degree (High, Medium, Low) of resource usage characteristics of newly proposed applications are summarized in Table 1. Please refer to [3] to read the description of comprehensive applications list. The proposed FunctionBench provides a variety of FaaS applications in multiple categories, and we are sure that it will enable fair evaluation o","PeriodicalId":91949,"journal":{"name":"Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... 
SoCC (Conference)","volume":"255 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79508562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
Agni
Kunal Lillaney, Vasily Tarasov, David A. Pease, Randal C. Burns
Object storage is a low-cost, scalable component of cloud ecosystems. However, interface incompatibilities and performance limitations inhibit its adoption for emerging cloud-based workloads. Users are compelled to either run their applications over expensive block storage-based file systems or use inefficient file connectors over object stores. Dual access, the ability to read and write the same data through file system interfaces and object storage APIs, promises to improve performance and eliminate storage sprawl. We design and implement Agni, an efficient, distributed, dual-access object storage file system (OSFS) that uses standard object storage APIs and cloud microservices. Our system overcomes the performance shortcomings of existing approaches by implementing a multi-tier write-aggregating data structure and by integrating with existing cloud-native services. Moreover, Agni provides distributed access and a coherent namespace. Our experiments demonstrate that for representative workloads Agni improves performance by 20%--60% compared with existing approaches.
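The general idea behind aggregating writes before they hit an object store can be sketched as follows: small writes are absorbed into an in-memory log and flushed as one larger object once a threshold is crossed, since per-object PUTs are comparatively expensive. This is a generic illustration under assumed names (`ObjectStore`, `WriteAggregator`), not Agni's actual multi-tier data structure.

```python
# Generic write-aggregation sketch over an object-store-style PUT interface.
# Illustrative only; Agni's multi-tier design and indexing are more involved.
from typing import Dict, List, Tuple


class ObjectStore:
    """In-memory stand-in for an object storage backend (e.g., an S3-style PUT)."""

    def __init__(self):
        self.objects: Dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self.objects[key] = data


class WriteAggregator:
    def __init__(self, store: ObjectStore, flush_bytes: int = 4096):
        self.store = store
        self.flush_bytes = flush_bytes
        self.log: List[Tuple[str, int, bytes]] = []  # (path, offset, data)
        self.buffered = 0
        self.segment_id = 0

    def write(self, path: str, offset: int, data: bytes) -> None:
        self.log.append((path, offset, data))
        self.buffered += len(data)
        if self.buffered >= self.flush_bytes:
            self.flush()

    def flush(self) -> None:
        # Pack many small writes into one object; a real system would also
        # keep an index so reads can locate data inside flushed segments.
        payload = b"".join(d for _, _, d in self.log)
        self.store.put(f"segment-{self.segment_id:08d}", payload)
        self.segment_id += 1
        self.log.clear()
        self.buffered = 0


if __name__ == "__main__":
    store = ObjectStore()
    agg = WriteAggregator(store, flush_bytes=8)
    for i in range(4):
        agg.write("/file.txt", i * 4, b"abcd")
    print(sorted(store.objects))   # two flushed segments instead of four PUTs
```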
{"title":"Agni","authors":"Kunal Lillaney, Vasily Tarasov, David A. Pease, Randal C. Burns","doi":"10.1145/3357223.3362703","DOIUrl":"https://doi.org/10.1145/3357223.3362703","url":null,"abstract":"Object storage is a low-cost, scalable component of cloud ecosystems. However, interface incompatibilities and performance limitations inhibit its adoption for emerging cloud-based workloads. Users are compelled to either run their applications over expensive block storage-based file systems or use inefficient file connectors over object stores. Dual access, the ability to read and write the same data through file systems interfaces and object storage APIs, has promise to improve performance and eliminate storage sprawl. We design and implement Agni1, an efficient, distributed, dual-access object storage file system (OSFS), that uses standard object storage APIs and cloud microservices. Our system overcomes the performance shortcomings of existing approaches by implementing a multi-tier write aggregating data structure and by integrating with existing cloud-native services. Moreover, Agni provides distributed access and a coherent namespace. Our experiments demonstrate that for representative workloads Agni improves performance by 20%--60% when compared with existing approaches.","PeriodicalId":91949,"journal":{"name":"Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... SoCC (Conference)","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78173490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Composing SDN Controller Enhancements with Mozart
Zhenyu Zhou, Theophilus A. Benson
Over the last few years, we have experienced a massive transformation of the Software Defined Networking ecosystem with the development of SDN enhancements, e.g., Statesman, ESPRES, Pane, and Pyretic, to provide better composability, better utilization of TCAM, consistent network updates, or congestion-free updates. The end result of this organic evolution is a disconnect between SDN applications and the data plane, a disconnect which can impact an SDN application's performance and efficacy. In this paper, we present the first systematic study of the interactions between SDN enhancements and SDN applications -- we show that an SDN application's performance can be significantly impacted by these enhancements: for example, we observed that the efficiency of a traffic engineering SDN application was reduced by 24.8%. Motivated by these insights, we present Mozart, a redesigned SDN controller centered on mitigating and reducing the impact of these enhancements. Using two prototypes interoperating with seven SDN applications and two SDN enhancements, we demonstrate that our abstractions require minimal changes and can restore an SDN application's performance. We analyzed Mozart's scalability and overhead using large-scale simulations of modern cloud networks and observed them to be negligible.
{"title":"Composing SDN Controller Enhancements with Mozart","authors":"Zhenyu Zhou, Theophilus A. Benson","doi":"10.1145/3357223.3362712","DOIUrl":"https://doi.org/10.1145/3357223.3362712","url":null,"abstract":"Over the last few years, we have experienced a massive transformation of the Software Defined Networking ecosystem with the development of SDNEnhancements, e.g., Statesman, ESPRES, Pane, and Pyretic, to provide better composability, better utilization of TCAM, consistent network updates, or congestion free updates. The end-result of this organic evolution is a disconnect between the SDN applications and the data-plane. A disconnect which can impact an SDN application's performance and efficacy. In this paper, we present the first systematic study of the interactions between SDNEnhancements and SDN applications -- we show that an SDN application's performance can be significantly impacted by these SDNEnhancements: for example, we observed that the efficiency of a traffic engineering SDN application was reduced by 24.8%. Motivated by these insights, we present, Mozart, a redesigned SDN controller centered around mitigating and reducing the impact of these SDNEnhancements. Using two prototypes interoperating with seven SDN applications and two SDNEnhancements, we demonstrate that our abstractions require minimal changes and can restore an SDN application's performance. We analyzed Mozart's scalability and overhead using large scale simulations of modern cloud networks and observed them to be negligible.","PeriodicalId":91949,"journal":{"name":"Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... SoCC (Conference)","volume":"540 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87917602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Seamless Offloading of Web App Computations From Mobile Device to Edge Clouds via HTML5 Web Worker Migration
H. Jeong, C. Shin, K. Shin, Hyeon-Jae Lee, Soo-Mook Moon
Future mobile applications, such as mobile cloud gaming or augmented reality, require not only high computation power but also strict latency constraints. To provide computing resources with ultra-low latency, a new form of cloud infrastructure called the edge cloud has been proposed, which distributes computing servers at the edges of the network. A primary concern of the edge cloud is that the physical server running a service can change as the client moves, so the service has to be quickly migrated between servers for seamless computation offloading. This paper tackles the issue in the context of web applications, whose computation-intensive code is written in JavaScript and WebAssembly. The basic building block of our system is a mobile web worker, which extends the HTML5 web worker to support migration across the client, edge, and cloud servers. Our system migrates a mobile web worker from the mobile device to an edge server to minimize execution latency. The migrated worker can move again to other servers for better performance or service recovery. To implement the runtime migration of the worker, we use a novel serialization algorithm that captures the web worker state in which WebAssembly functions and JavaScript objects are intermingled. Experimental results showed that our system could successfully migrate a non-trivial web worker running a WebAssembly version of OpenCV within a few seconds, and achieved up to 8.4x speedup compared to offloading pure JavaScript.
{"title":"Seamless Offloading of Web App Computations From Mobile Device to Edge Clouds via HTML5 Web Worker Migration","authors":"H. Jeong, C. Shin, K. Shin, Hyeon-Jae Lee, Soo-Mook Moon","doi":"10.1145/3357223.3362735","DOIUrl":"https://doi.org/10.1145/3357223.3362735","url":null,"abstract":"Future mobile applications, such as mobile cloud gaming or augmented reality, require not only high computation power but strict latency constraints. To provide computing resources with ultra-low latency, a new form of cloud infrastructure called edge cloud has been proposed, which distributes computing servers at the edges of the network. A primary concern of edge cloud is that a physical server running a service can change as the client moves, so the service has to be quickly migrated between servers for seamless computation offloading. This paper tackles the issue in the context of web applications, whose computation-intensive codes are written in JavaScript and webassembly. The basic building block of our system is a mobile web worker, which extends HTML5 web worker to support migration across the client, edge, and cloud servers. Our system migrates a mobile web worker from mobile device to an edge server to minimize execution latency. The immigrated worker can move again to other servers for better performance or service recovery. To implement the runtime migration of the worker, we use a novel serialization algorithm that captures the web worker state where webassembly functions and JavaScript objects are intermingled. Experimental result showed that our system could successfully migrate a non-trivial web worker running webassembly-version OpenCV within a few seconds, and achieved up to 8.4x speedup compared to offloading of pure JavaScript.","PeriodicalId":91949,"journal":{"name":"Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... SoCC (Conference)","volume":"64 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75214399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Libra and the Art of Task Sizing in Big-Data Analytic Systems
Ruikang Li, Peizhen Guo, Bo Hu, Wenjun Hu
Despite extensive investigation of job scheduling in data-intensive computation frameworks, less consideration has been given to optimizing job partitioning for resource utilization and efficient processing. Instead, partitioning and job sizing are a form of dark art, typically left to developer intuition and trial-and-error style experimentation. In this work, we propose that just as job scheduling and resource allocation are out-sourced to a trusted mechanism external to the workload, so too should be the responsibility for partitioning data as a determinant for task size. Job partitioning essentially involves determining the partition sizes to match the resource allocation at the finest granularity. This is a complex, multi-dimensional problem that is highly application specific: resource allocation, computational runtime, shuffle and reduce communication requirements, and task startup overheads all have strong influence on the most effective task size for efficient processing. Depending on the partition size, the job completion time can differ by as much as 10 times! Fortunately, we observe a general trend underlying the tradeoff between full resource utilization and system overhead across different settings. The optimal job partition size balances these two conflicting forces. Given this trend, we design Libra to automate job partitioning as a framework extension. We integrate Libra with Spark and evaluate its performance on EC2. Compared to state-of-the-art techniques, Libra can reduce the individual job execution time by 25% to 70%.
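The tradeoff described here can be made concrete with a toy cost model: with too few, large partitions the job underuses the available slots, while with too many, small partitions it pays repeated per-task startup overhead. The formula and constants below are illustrative only, not Libra's model.

```python
# Toy model of the task-sizing tradeoff: large tasks underuse the cluster,
# tiny tasks pay repeated startup/scheduling overhead. The formula and
# constants are made up for illustration; Libra's actual model differs.
import math


def completion_time(total_mb: float, partition_mb: float, slots: int,
                    startup_s: float = 1.0,
                    rate_mb_per_s: float = 100.0) -> float:
    tasks = math.ceil(total_mb / partition_mb)
    per_task = startup_s + partition_mb / rate_mb_per_s
    waves = math.ceil(tasks / slots)      # tasks run in waves over the slots
    return waves * per_task


if __name__ == "__main__":
    total, slots = 10_000.0, 32           # a 10 GB job on 32 slots
    candidates = [16, 64, 128, 256, 512, 1024]
    for p in candidates:
        print(f"{p:5d} MB partitions -> {completion_time(total, p, slots):7.1f} s")
    best = min(candidates, key=lambda p: completion_time(total, p, slots))
    print("best partition size under this toy model:", best, "MB")
```

Even this crude model reproduces the non-monotone behavior the abstract describes: completion time falls and then rises again as the partition size grows, with the sweet spot in between.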
{"title":"Libra and the Art of Task Sizing in Big-Data Analytic Systems","authors":"Ruikang Li, Peizhen Guo, Bo Hu, Wenjun Hu","doi":"10.1145/3357223.3362720","DOIUrl":"https://doi.org/10.1145/3357223.3362720","url":null,"abstract":"Despite extensive investigation of job scheduling in data-intensive computation frameworks, less consideration has been given to optimizing job partitioning for resource utilization and efficient processing. Instead, partitioning and job sizing are a form of dark art, typically left to developer intuition and trial-and-error style experimentation. In this work, we propose that just as job scheduling and resource allocation are out-sourced to a trusted mechanism external to the workload, so too should be the responsibility for partitioning data as a determinant for task size. Job partitioning essentially involves determining the partition sizes to match the resource allocation at the finest granularity. This is a complex, multi-dimensional problem that is highly application specific: resource allocation, computational runtime, shuffle and reduce communication requirements, and task startup overheads all have strong influence on the most effective task size for efficient processing. Depending on the partition size, the job completion time can differ by as much as 10 times! Fortunately, we observe a general trend underlying the tradeoff between full resource utilization and system overhead across different settings. The optimal job partition size balances these two conflicting forces. Given this trend, we design Libra to automate job partitioning as a framework extension. We integrate Libra with Spark and evaluate its performance on EC2. Compared to state-of-the-art techniques, Libra can reduce the individual job execution time by 25% to 70%.","PeriodicalId":91949,"journal":{"name":"Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... SoCC (Conference)","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75929075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3