
Proceedings of the Sixth ACM Symposium on Cloud Computing: Latest Publications

Evaluating the impact of fine-scale burstiness on cloud elasticity
Pub Date: 2015-08-27 DOI: 10.1145/2806777.2806846
S. Islam, S. Venugopal, Anna Liu
Elasticity is the defining feature of cloud computing. Performance analysts and adaptive system designers rely on representative benchmarks for evaluating elasticity for cloud applications under realistic reproducible workloads. A key feature of web workloads is burstiness or high variability at fine timescales. In this paper, we explore the innate interaction between fine-scale burstiness and elasticity and quantify the impact from the cloud consumer's perspective. We propose a novel methodology to model workloads with fine-scale burstiness so that they can resemble the empirical stylized facts of the arrival process. Through an experimental case study, we extract insights about the implications of fine-scale burstiness for elasticity penalty and adaptive resource scaling. Our findings demonstrate the detrimental effect of fine-scale burstiness on the elasticity of cloud applications.
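The fine-timescale burstiness the abstract refers to can be illustrated with a toy arrival-process generator (a hypothetical sketch, not the authors' methodology): a two-state modulated Poisson process whose burst state inflates the arrival rate, producing high variability at sub-second scales.

```python
import random

def bursty_arrivals(duration_s, base_rate, burst_rate, p_enter=0.05, p_exit=0.3):
    """Toy two-state modulated Poisson process (illustrative, not the
    paper's model): in the 'burst' state the arrival rate jumps from
    base_rate to burst_rate, yielding high variability at fine timescales."""
    t, state, arrivals = 0.0, "calm", []
    while t < duration_s:
        rate = base_rate if state == "calm" else burst_rate
        t += random.expovariate(rate)  # exponential inter-arrival gap
        arrivals.append(t)
        # occasionally flip between the calm and bursty regimes
        if state == "calm" and random.random() < p_enter:
            state = "burst"
        elif state == "burst" and random.random() < p_exit:
            state = "calm"
    return arrivals

random.seed(1)
a = bursty_arrivals(60, base_rate=10, burst_rate=200)
```

Binning the trace into one-second windows and comparing the variance of the counts against their mean (the index of dispersion) is one way to check that it is overdispersed relative to a plain Poisson stream.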
Citations: 19
FastLane: making short flows shorter with agile drop notification
Pub Date: 2015-08-27 DOI: 10.1145/2806777.2806852
David Zats, A. Iyer, G. Ananthanarayanan, R. Agarwal, R. Katz, I. Stoica, Amin Vahdat
The drive towards richer and more interactive web content places increasingly stringent requirements on datacenter network performance. Applications running atop these networks typically partition an incoming query into multiple subqueries, and generate the final result by aggregating the responses for these subqueries. As a result, a large fraction --- as high as 80% --- of the network flows in such workloads are short and latency-sensitive. The speed with which existing networks respond to packet drops limits their ability to meet high-percentile flow completion time SLOs. Indirect notifications indicating packet drops (e.g., duplicates in an end-to-end acknowledgement sequence) are an important limitation to the agility of response to packet drops. This paper proposes FastLane, an in-network drop notification mechanism. FastLane enhances switches to send high-priority drop notifications to sources, thus informing sources as quickly as possible. Consequently, sources can retransmit packets sooner and throttle transmission rates earlier, thus reducing high-percentile flow completion times. We demonstrate, through simulation and implementation, that FastLane reduces 99.9th percentile completion times of short flows by up to 81%. These benefits come at minimal cost --- safeguards ensure that FastLane consumes no more than 1% of bandwidth and 2.5% of buffers.
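The latency argument can be made concrete with back-of-the-envelope numbers (illustrative constants, not measurements from the paper): recovering a dropped packet via a retransmission timeout costs orders of magnitude more than learning of the drop directly from the switch.

```python
# Illustrative datacenter constants (assumed, not from the paper).
RTT_US = 100       # round-trip time in microseconds
RTO_US = 10_000    # a conservative minimum retransmission timeout

def completion_us(base_us, recovery_us):
    """Toy model: one dropped packet stalls a short flow until the sender
    learns of the loss, adding the recovery latency to completion time."""
    return base_us + recovery_us

timeout_based = completion_us(300, RTO_US)       # sender waits for the RTO to fire
fastlane_like = completion_us(300, RTT_US // 2)  # switch notifies the source directly
# 10300 us vs 350 us: the loss-recovery path dominates tail completion time
```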
Citations: 25
Centiman: elastic, high performance optimistic concurrency control by watermarking
Pub Date: 2015-08-27 DOI: 10.1145/2806777.2806837
B. Ding, Lucja Kot, A. Demers, J. Gehrke
We present Centiman, a system for high performance, elastic transaction processing in the cloud. Centiman provides serializability on top of a key-value store with a lightweight protocol based on optimistic concurrency control (OCC). Centiman is designed for the cloud setting, with an architecture that is loosely coupled and avoids synchronization wherever possible. Centiman supports sharded transaction validation; validators can be added or removed on-the-fly in an elastic manner. Processors and validators scale independently of each other and recover from failure transparently to each other. Centiman's loosely coupled design creates some challenges: it can cause spurious aborts and it makes it difficult to implement common performance optimizations for read-only transactions. To deal with these issues, Centiman uses a watermark abstraction to asynchronously propagate information about transaction commits through the system. In an extensive evaluation we show that Centiman provides fast elastic scaling, low-overhead serializability for read-heavy workloads, and scales to millions of operations per second.
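The OCC-plus-watermark idea can be sketched minimally (names and structure are hypothetical, not Centiman's actual API): a writer transaction validates its read set against the latest committed versions, while the watermark lets read-only transactions skip validation.

```python
def validate(read_set, committed_versions):
    """Toy OCC validation (illustrative, not Centiman's actual protocol):
    a transaction commits only if no key it read has since been
    overwritten, i.e. each committed version still matches the one read."""
    return all(committed_versions.get(k, 0) <= v for k, v in read_set.items())

def read_only_fast_path(start_ts, watermark):
    """Watermark optimization: a read-only transaction that started at or
    below the watermark observed a fully propagated prefix of commits,
    so it can skip validation entirely."""
    return start_ts <= watermark
```

In this sketch, a stale watermark only delays the fast path; it never admits an unserializable read, which matches the spirit of propagating commit information asynchronously.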
Citations: 28
Interactive data analytics: the new frontier
Pub Date: 2015-08-27 DOI: 10.1145/2806777.2809956
S. Madden
Data analytics often involves data exploration, where a data set is repeatedly analyzed to understand root causes, find patterns, or extract insights. Such analysis is frequently bottlenecked by the underlying data processing system, as analysts wait for their queries to complete against a complex multilayered software stack. In this talk, I'll describe some exploratory analytics applications we've built in the MIT database group over the past few years, and will then describe some of the challenges and opportunities that arise when building more efficient data exploration systems that will allow these applications to become truly interactive, even when processing billions of data points.
Citations: 3
Forecasting the cost of processing multi-join queries via hashing for main-memory databases
Pub Date: 2015-07-11 DOI: 10.1145/2806777.2806944
Feilong Liu, Spyros Blanas
Database management systems (DBMSs) carefully optimize complex multi-join queries to avoid expensive disk I/O. As servers today feature tens or hundreds of gigabytes of RAM, a significant fraction of many analytic databases becomes memory-resident. Even after careful tuning for an in-memory environment, a linear disk I/O model such as the one implemented in PostgreSQL may make query response time predictions that are up to 2× slower than the optimal multi-join query plan over memory-resident data. This paper introduces a memory I/O cost model to identify good evaluation strategies for complex query plans with multiple hash-based equi-joins over memory-resident data. The proposed cost model is carefully validated for accuracy using three different systems, including an Amazon EC2 instance, to control for hardware-specific differences. Prior work in parallel query evaluation has advocated right-deep and bushy trees for multi-join queries due to their greater parallelization and pipelining potential. A surprising finding is that the conventional wisdom from shared-nothing disk-based systems does not directly apply to the modern shared-everything memory hierarchy. As corroborated by our model, the performance gap between the optimal left-deep and right-deep query plan can grow to about 10× as the number of joins in the query increases.
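The left-deep versus right-deep gap can be illustrated with a toy memory-I/O cost model (the cache size, row width, and out-of-cache penalty factor are assumptions; this is not the paper's model): a right-deep plan keeps every build-side hash table live at once, so their combined footprint can overflow the cache, while a left-deep plan has one build table live at a time.

```python
CACHE_BYTES = 32 * 2**20  # assumed last-level cache size
ROW_BYTES = 64            # assumed hash-table footprint per row

def plan_cost(table_rows, right_deep):
    """Toy memory-I/O cost in row accesses (illustrative only).
    Right-deep: all build-side hash tables are live simultaneously, so
    their combined footprint decides the probe penalty. Left-deep: one
    build table is live at a time. Intermediate-result growth is ignored."""
    live_rows = sum(table_rows[1:])
    probe_rows = table_rows[0]
    cost = 0
    for build_rows in table_rows[1:]:
        footprint = (live_rows if right_deep else build_rows) * ROW_BYTES
        penalty = 4 if footprint > CACHE_BYTES else 1  # out-of-cache probes cost more
        cost += build_rows + probe_rows * penalty
    return cost

tables = [10_000_000, 500_000, 500_000, 500_000]  # fact table, then three build sides
left = plan_cost(tables, right_deep=False)
right = plan_cost(tables, right_deep=True)
```

Under these assumed constants the right-deep plan costs several times the left-deep one, and the gap widens as more joins push the combined build footprint further past the cache, echoing the trend the abstract reports.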
Citations: 23
On data skewness, stragglers, and MapReduce progress indicators
Pub Date: 2015-03-31 DOI: 10.1145/2806777.2806843
Emilio Coppa, Irene Finocchi
We tackle the problem of predicting the performance of MapReduce applications by designing accurate progress indicators, which keep programmers informed of the percentage of computation completed during the execution of a job. This is especially important in pay-as-you-go cloud environments, where slow jobs can be aborted in order to avoid excessive costs. Performance predictions can also serve as a building block for several profile-guided optimizations. By assuming that the running time depends linearly on the input size, state-of-the-art techniques can be seriously harmed by data skewness, load unbalancing, and straggling tasks. We thus design a novel profile-guided progress indicator, called NearestFit, that operates without the linear hypothesis assumption in a fully online way (i.e., without resorting to profile data collected from previous executions). NearestFit exploits a careful combination of nearest neighbor regression and statistical curve fitting techniques. Fine-grained profiles required by our theoretical progress model are approximated through space- and time-efficient data streaming algorithms. We implemented NearestFit on top of Hadoop 2.6.0.
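The progress-estimation flavor can be sketched with a tiny nearest-neighbor regressor (in the spirit of, but far simpler than, NearestFit; the function and its inputs are hypothetical): extrapolate total runtime from the profile samples closest to the current progress fraction.

```python
def predict_remaining(profile, progress, k=3):
    """Toy nearest-neighbor progress estimator (illustrative only).
    `profile` holds (fraction_done, elapsed_s) samples from the running
    job; estimate total runtime from the k samples whose progress
    fraction is closest to the current one, then extrapolate."""
    nearest = sorted(profile, key=lambda p: abs(p[0] - progress))[:k]
    est_total = sum(elapsed / frac for frac, elapsed in nearest) / len(nearest)
    return est_total * (1.0 - progress)

profile = [(0.1, 10.0), (0.2, 20.0), (0.3, 30.0)]  # a perfectly linear job
remaining = predict_remaining(profile, 0.3)        # ~70 s left of a ~100 s job
```

On a skewed job the neighbors nearest the current progress are the most recent samples, so in this sketch the estimate tracks recent per-unit cost rather than a single global linear fit.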
Citations: 33