
2019 IEEE/ACM Workflows in Support of Large-Scale Science (WORKS): Latest Publications

Comparing GPU Power and Frequency Capping: A Case Study with the MuMMI Workflow
Pub Date : 2019-11-01 DOI: 10.1109/WORKS49585.2019.00009
Tapasya Patki, Zachary Frye, H. Bhatia, F. Natale, J. Glosli, Helgi I. Ingólfsson, B. Rountree
Accomplishing the goal of exascale computing under a potential power limit requires HPC clusters to maximize both parallel efficiency and power efficiency. As modern HPC systems embark on a trend toward extreme heterogeneity leveraging multiple GPUs per node, power management becomes even more challenging, especially when catering to scientific workflows with co-scheduled components. The impact of managing GPU power on workflow performance and run-to-run reproducibility has not been adequately studied. In this paper, we present first-of-its-kind research studying the impact of the two power management knobs available on NVIDIA Volta GPUs: frequency capping and power capping. We analyzed performance and power metrics of GPUs on a top-10 supercomputer by tuning these knobs for more than 5,300 runs in a scientific workflow. Our data show that GPU power capping in a scientific workflow is an effective way of improving power efficiency while preserving performance, whereas GPU frequency capping is a demonstrably unpredictable way of reducing power consumption. Additionally, we identified that frequency capping results in higher variation and anomalous behavior on GPUs, which is counterintuitive given what has been observed in research conducted on CPUs.
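The abstract's headline claims, that power capping preserves power efficiency while frequency capping increases run-to-run variation, can be checked on one's own measurements with a simple coefficient-of-variation comparison. A minimal sketch; the runtime and power numbers below are illustrative placeholders, not the paper's data:

```python
import statistics

def coefficient_of_variation(samples):
    """Run-to-run variability: standard deviation relative to the mean."""
    return statistics.stdev(samples) / statistics.mean(samples)

def perf_per_watt(runtime_s, avg_power_w):
    """Work per joule for a fixed-size run (higher is better)."""
    return 1.0 / (runtime_s * avg_power_w)

# Hypothetical runtimes (seconds) for repeated runs of the same job.
power_capped = [101.2, 100.8, 101.5, 100.9]
freq_capped = [98.0, 112.4, 104.9, 120.3]

cv_power = coefficient_of_variation(power_capped)
cv_freq = coefficient_of_variation(freq_capped)
assert cv_freq > cv_power  # frequency capping shows higher variation here
```

On real systems, the two knobs the paper studies correspond to `nvidia-smi --power-limit` and `nvidia-smi --lock-gpu-clocks`; the comparison above only quantifies their observed effect after the fact.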
Citations: 12
A Top-Down Performance Analysis Methodology for Workflows: Tracking Performance Issues from Overview to Individual Operations
Pub Date : 2019-11-01 DOI: 10.1109/WORKS49585.2019.00008
Ronny Tschüter, C. Herold, William Williams, Maximilian Knespel, Matthias Weber
Scientific workflows are well established in parallel computing. A workflow represents a conceptual description of work items and their dependencies. Researchers can use workflows to abstract away implementation details or resources and focus on the high-level behavior of their work items. Due to these abstractions and the complexity of scientific workflows, finding performance bottlenecks along with their root causes can quickly become an involved task. This work presents a top-down methodology for performance analysis of workflows to support users in this challenging task. Our work provides summarized performance metrics covering different workflow perspectives, from a general overview down to individual jobs and their job steps. These summaries allow users to identify inefficiencies and determine the responsible job steps. In addition, we record detailed performance data about job steps, enabling a fine-grained analysis of the associated execution to pinpoint performance issues exactly. The introduced methodology provides a powerful tool for comprehensive performance analysis of complex workflows.
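The top-down idea, summarize first and then drill into the responsible job steps, can be sketched with hypothetical per-step timing records. The record layout and job names below are illustrative, not the tool's actual schema:

```python
from collections import defaultdict

# Hypothetical (job, step, seconds) records from a traced workflow run.
records = [
    ("preprocess", "read_input", 12.0),
    ("preprocess", "filter", 3.0),
    ("simulate", "solver", 240.0),
    ("simulate", "checkpoint", 35.0),
    ("analyze", "reduce", 20.0),
]

def summarize_by_job(records):
    """Overview level: total time per job."""
    totals = defaultdict(float)
    for job, _step, secs in records:
        totals[job] += secs
    return dict(totals)

def drill_down(records, job):
    """Detail level: per-step times inside one job, slowest first."""
    steps = [(step, secs) for j, step, secs in records if j == job]
    return sorted(steps, key=lambda s: s[1], reverse=True)

overview = summarize_by_job(records)
worst_job = max(overview, key=overview.get)
hotspots = drill_down(records, worst_job)
```

The overview points at "simulate" as the dominant job, and the drill-down identifies its "solver" step as the hotspot, which is exactly the overview-to-operations workflow the methodology describes.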
Citations: 3
A Codesign Framework for Online Data Analysis and Reduction
Pub Date : 2019-11-01 DOI: 10.1109/WORKS49585.2019.00007
Kshitij Mehta, Ian T Foster, S. Klasky, B. Allen, M. Wolf, Jeremy S. Logan, E. Suchyta, J. Choi, Keichi Takahashi, I. Yakushin, T. Munson
In this paper we discuss our design of a toolset for automating performance studies of composed HPC applications that perform online data reduction and analysis. We describe Cheetah, a new framework for performing parametric studies on coupled applications. Cheetah facilitates understanding the impact of various factors such as process placement, synchronicity of algorithms, and storage vs. compute requirements for online analysis of large data. Ultimately, we aim to create a catalog of performance results that can help scientists understand tradeoffs when designing next-generation simulations that make use of online processing techniques. We illustrate the design choices of Cheetah by using a reaction-diffusion simulation (Gray-Scott) paired with an analysis application to demonstrate initial results of fine-grained process placement on Summit, a pre-exascale supercomputer at Oak Ridge National Laboratory.
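The core of a parametric study like Cheetah's is enumerating experiment configurations over the factors under study. A minimal sketch of such a sweep; the factor names and values are loosely modeled on those the abstract mentions and are not Cheetah's actual API:

```python
from itertools import product

def enumerate_runs(param_space):
    """Cartesian product of parameter values -> one dict per experiment run."""
    names = sorted(param_space)
    return [dict(zip(names, values))
            for values in product(*(param_space[name] for name in names))]

# Hypothetical campaign over factors such as process placement and coupling mode.
campaign = {
    "placement": ["colocated", "separate_nodes"],
    "coupling": ["synchronous", "asynchronous"],
    "reduction_level": [0, 2, 4],
}

runs = enumerate_runs(campaign)
assert len(runs) == 2 * 2 * 3  # every combination becomes one run
```

A real framework would additionally generate job scripts and collect per-run metrics, but the sweep structure is the part that makes such campaigns reproducible.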
Citations: 10
On a Parallel Spark Workflow for Frequent Itemset Mining Based on Array Prefix-Tree
Pub Date : 2019-11-01 DOI: 10.1109/WORKS49585.2019.00011
Xinzheng Niu, Mideng Qian, C. Wu, Aiqin Hou
Frequent Itemset Mining (FIM) is a fundamental procedure in various data mining techniques such as association rule mining. Among many existing algorithms, FP-Growth is considered a milestone achievement that discovers frequent itemsets without generating candidates. However, due to the high complexity of its mining process and the high cost of its memory usage, FP-Growth still suffers from a performance bottleneck when dealing with large datasets. In this paper, we design a new Array Prefix-Tree structure and, based on that, propose an Array Prefix-Tree Growth (APT-Growth) algorithm, which explicitly obviates the need to recursively construct conditional FP-Trees as required by FP-Growth. To support big data analytics, we further design and implement a parallel version of APT-Growth, referred to as PAPT-Growth, as a Spark workflow. We conduct FIM workflow experiments on both real-life and synthetic datasets for performance evaluation, and extensive results show that PAPT-Growth outperforms other representative parallel FIM algorithms in terms of execution time, which sheds light on its potential applications to big data mining.
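For intuition about what FIM computes, here is a deliberately naive support-counting sketch. This is the candidate-enumeration approach that FP-Growth-style algorithms (including APT-Growth) are designed to avoid, shown only to make "frequent itemset" and "minimum support" concrete; the transactions are made up:

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, max_size=3):
    """Brute-force FIM: count every itemset up to max_size, keep those
    appearing in at least min_support transactions. Exponential in
    transaction width; tree-based algorithms avoid this enumeration."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for size in range(1, max_size + 1):
            for combo in combinations(items, size):
                counts[combo] += 1
    return {iset: c for iset, c in counts.items() if c >= min_support}

transactions = [
    ["bread", "milk"],
    ["bread", "butter", "milk"],
    ["bread", "butter"],
    ["milk"],
]
result = frequent_itemsets(transactions, min_support=2)
assert result[("bread", "milk")] == 2  # co-occurs in two transactions
```

The cost of this enumeration on wide transactions is precisely the motivation for prefix-tree representations such as the paper's Array Prefix-Tree.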
Citations: 5
Provenance Data in the Machine Learning Lifecycle in Computational Science and Engineering
Pub Date : 2019-10-09 DOI: 10.1109/WORKS49585.2019.00006
Renan Souza, L. Azevedo, Vítor Lourenço, E. Soares, R. Thiago, R. Brandão, D. Civitarese, E. V. Brazil, M. Moreno, P. Valduriez, M. Mattoso, Renato Cerqueira, M. Netto
Machine Learning (ML) has become essential in several industries. In Computational Science and Engineering (CSE), the complexity of the ML lifecycle comes from the large variety of data, scientists' expertise, tools, and workflows. If data are not tracked properly during the lifecycle, it becomes unfeasible to recreate an ML model from scratch or to explain to stakeholders how it was created. The main limitation of provenance tracking solutions is that they cannot cope with provenance capture and integration of domain and ML data processed in the multiple workflows in the lifecycle while keeping the provenance capture overhead low. To handle this problem, in this paper we contribute a detailed characterization of provenance data in the ML lifecycle in CSE; a new provenance data representation, called PROV-ML, built on top of W3C PROV and ML Schema; and extensions to a system that tracks provenance from multiple workflows to address the characteristics of ML and CSE and to allow for provenance queries with a standard vocabulary. We show a practical use in a real case in the O&G industry, along with its evaluation using 239,616 CUDA cores in parallel.
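PROV-ML itself builds on the W3C PROV model of entities, activities, and derivations. As a rough intuition for what lifecycle provenance capture buys you, here is a hypothetical, much-simplified tracker (not the paper's system, and not the PROV vocabulary): each record says which activity consumed which inputs and produced which outputs, and lineage queries walk those links backwards:

```python
import time

class ProvenanceLog:
    """Toy lineage tracker: records which activity produced which entity,
    from which inputs, across ML lifecycle stages."""

    def __init__(self):
        self.records = []

    def record(self, activity, inputs, outputs, stage):
        self.records.append({
            "activity": activity, "inputs": list(inputs),
            "outputs": list(outputs), "stage": stage,
            "timestamp": time.time(),
        })

    def lineage(self, entity):
        """All entities the given one transitively derives from."""
        ancestors, frontier = set(), {entity}
        while frontier:
            step = set()
            for rec in self.records:
                if frontier & set(rec["outputs"]):
                    step |= set(rec["inputs"])
            frontier = step - ancestors
            ancestors |= frontier
        return ancestors

log = ProvenanceLog()
log.record("curate", ["raw_seismic.h5"], ["training_set.h5"], stage="data")
log.record("train", ["training_set.h5", "hparams.json"], ["model_v1.pt"],
           stage="learning")
assert "raw_seismic.h5" in log.lineage("model_v1.pt")
```

This is exactly the kind of question ("which raw data and hyperparameters is this model derived from?") that becomes unanswerable when lifecycle data are not tracked, which is the gap the paper addresses.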
Citations: 25
A Performance Comparison of Dask and Apache Spark for Data-Intensive Neuroimaging Pipelines
Pub Date : 2019-07-30 DOI: 10.1109/WORKS49585.2019.00010
Mathieu Dugré, Valérie Hayot-Sasson, T. Glatard
In the past few years, neuroimaging has entered the Big Data era due to the joint increase in image resolution, data sharing, and study sizes. However, no particular Big Data engines have emerged in this field, and several alternatives remain available. We compare two popular Big Data engines with Python APIs, Apache Spark and Dask, for their runtime performance in processing neuroimaging pipelines. Our evaluation uses two synthetic pipelines processing the 81GB BigBrain image, and a real pipeline processing anatomical data from more than 1,000 subjects. We benchmark these pipelines using various combinations of task durations, data sizes, and numbers of workers, deployed on an 8-node (8 cores each) compute cluster in Compute Canada's Arbutus cloud. We evaluate PySpark's RDD API against Dask's Bag, Delayed, and Futures APIs. Results show that despite slight differences between Spark and Dask, both engines perform comparably. However, Dask pipelines risk being limited by Python's GIL depending on task type and cluster configuration. In all cases, the major limiting factor was data transfer. While either engine is suitable for neuroimaging pipelines, more effort needs to be placed in reducing data transfer time.
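The GIL limitation the abstract mentions is easy to reproduce outside Dask: a pure-Python CPU-bound task holds the interpreter lock, so a thread pool cannot run copies of it in parallel. A minimal standard-library sketch of that situation (not the paper's benchmark code):

```python
from concurrent.futures import ThreadPoolExecutor

def cpu_bound(n):
    """A pure-Python loop: holds the GIL, so threads cannot overlap it."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_with_threads(tasks, workers=4):
    """Fan tasks out to a thread pool and gather the results in order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(cpu_bound, tasks))

# Correctness is unaffected, but for tasks like this the wall-clock time
# with 4 threads stays close to the sequential time: the GIL serializes
# the work. Dask's threaded scheduler hits the same wall; process-based
# workers, or native-code tasks that release the GIL (e.g. NumPy kernels),
# avoid it.
results = run_with_threads([10_000] * 4)
assert results == [cpu_bound(10_000)] * 4
```

This is why the cluster configuration (threads vs. processes per worker) matters for the Dask pipelines in the study, while pipelines dominated by data transfer see little benefit from either choice.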
Citations: 12