SWEET '12: Latest Publications

DAGwoman: enabling DAGMan-like workflows on non-Condor platforms
Pub Date: 2012-05-20 · DOI: 10.1145/2443416.2443419
Thomas Tschager, H. Schmidt
Scientific analyses have grown more and more complex, so scientific workflows have gained much interest and importance as a way to automate and handle complex analyses. Tools abound to ease the generation, handling and enactment of scientific workflows on distributed compute resources. Among the different workflow engines, DAGMan is widely available and supported by a number of tools. Unfortunately, without a Condor installation, users cannot use DAGMan. We present DAGwoman, a new workflow engine that runs in user space and executes DAGMan-formatted workflows. Using one artificial and two bioinformatics workflows, DAGwoman is compared to GridWay's GWDAG engine and to DAGMan based on Condor-G. Showing good results with respect to workflow-engine delay and feature richness, DAGwoman offers a complementary tool for running DAGMan workflows efficiently when Condor is not available.
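The DAGMan text format that DAGwoman targets is simple enough to illustrate: `JOB` lines declare tasks and `PARENT ... CHILD` lines add dependency edges. The sketch below (our illustration, not DAGwoman's actual code; the submit-file names are invented) parses such a description and computes a valid dependency-respecting execution order:

```python
from graphlib import TopologicalSorter  # Python 3.9+

DAG_TEXT = """\
JOB align align.submit
JOB merge merge.submit
JOB report report.submit
PARENT align CHILD merge
PARENT merge CHILD report
"""

def parse_dag(text):
    """Parse a DAGMan-style description into {job: set of parent jobs}."""
    deps = {}
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "JOB":
            deps.setdefault(parts[1], set())
        elif parts[0] == "PARENT":
            child_idx = parts.index("CHILD")
            parents = parts[1:child_idx]
            for child in parts[child_idx + 1:]:
                deps.setdefault(child, set()).update(parents)
    return deps

def run_order(deps):
    """Return one valid execution order. A real user-space engine would
    hand each ready job to a batch system instead of just listing it."""
    return list(TopologicalSorter(deps).static_order())

print(run_order(parse_dag(DAG_TEXT)))  # align before merge before report
```

A real engine additionally tracks job completion and releases children only when all parents have succeeded; the topological order above is the core invariant it must preserve.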
Citations: 5
Turbine: a distributed-memory dataflow engine for extreme-scale many-task applications
Pub Date: 2012-05-20 · DOI: 10.1145/2443416.2443421
J. Wozniak, Timothy G. Armstrong, K. Maheshwari, E. Lusk, D. Katz, M. Wilde, Ian T. Foster
Efficiently utilizing the rapidly increasing concurrency of multi-petaflop computing systems is a significant programming challenge. One approach is to structure applications with an upper layer of many loosely-coupled coarse-grained tasks, each comprising a tightly-coupled parallel function or program. "Many-task" programming models such as functional parallel dataflow may be used at the upper layer to generate massive numbers of tasks, each of which generates significant tightly-coupled parallelism at the lower level via multithreading, message passing, and/or partitioned global address spaces. At large scales, however, the management of task distribution, data dependencies, and inter-task data movement is a significant performance challenge. In this work, we describe Turbine, a new highly scalable, distributed many-task dataflow engine. Turbine executes a generalized many-task intermediate representation with automated self-distribution and is scalable to multi-petaflop infrastructures. We present the architecture of Turbine and its performance on highly concurrent systems.
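The two-layer model the abstract describes — many coarse-grained tasks at the top, with dataflow edges between them — can be approximated in a few lines with futures: each task fires as soon as the values it depends on are ready. This is a single-node illustration of the programming model only, not Turbine's distributed implementation (function names are invented):

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(i):
    # stand-in for one tightly-coupled parallel function or program
    return i * i

def reduce_results(values):
    # a downstream task that consumes the fan-out's results
    return sum(values)

with ThreadPoolExecutor(max_workers=4) as pool:
    # fan out many independent coarse-grained tasks ...
    futures = [pool.submit(simulate, i) for i in range(8)]
    # ... then a dependent task runs once its inputs (a dataflow edge) resolve
    total = pool.submit(reduce_results, [f.result() for f in futures]).result()

print(total)  # sum of squares 0..7 = 140
```

At Turbine's target scale, the hard part is doing this dependency resolution itself in a distributed fashion rather than through one central coordinator, which is what the paper's "automated self-distribution" refers to.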
Citations: 32
Makeflow: a portable abstraction for data intensive computing on clusters, clouds, and grids
Pub Date: 2012-05-20 · DOI: 10.1145/2443416.2443417
M. Albrecht, P. Donnelly, Peter Bui, D. Thain
In recent years, there has been a renewed interest in languages and systems for large scale distributed computing. Unfortunately, most systems available to the end user use a custom description language tightly coupled to a specific runtime implementation, making it difficult to transfer applications between systems. To address this problem we introduce Makeflow, a simple system for expressing and running a data-intensive workflow across multiple execution engines without requiring changes to the application or workflow description. Makeflow allows any user familiar with basic Unix Make syntax to generate a workflow and run it on one of many supported execution systems. Furthermore, in order to assess the performance characteristics of the various execution engines available to users and assist them in selecting one for use we introduce Workbench, a suite of benchmarks designed for analyzing common workflow patterns. We evaluate Workbench on two physical architectures -- the first a storage cluster with local disks and a slower network and the second a high performance computing cluster with a central parallel filesystem and fast network -- using a variety of execution engines. We conclude by demonstrating three applications that use Makeflow to execute data intensive applications consisting of thousands of jobs.
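Makeflow's input uses plain Unix Make syntax: a `target: sources` line followed by a tab-indented command, where the dependencies express data movement rather than compilation. The toy sketch below (ours, not Makeflow's code; the file names are invented) parses one such rule and decides whether its command needs to run, the same staleness check Make performs:

```python
import os
import shlex
import subprocess

# A Makeflow rule in Make syntax: target, its input files, and a command.
RULES = """\
out.txt: in.txt
\tsort in.txt -o out.txt
"""

def parse_rules(text):
    """Parse Make-style rules into (target, [sources], command) tuples."""
    rules = []
    lines = text.splitlines()
    for i, line in enumerate(lines):
        if ":" in line and not line.startswith("\t"):
            target, _, sources = line.partition(":")
            command = lines[i + 1].lstrip("\t")
            rules.append((target.strip(), sources.split(), command))
    return rules

def out_of_date(target, sources):
    """A target must be rebuilt if it is missing or older than any source."""
    if not os.path.exists(target):
        return True
    return any(os.path.getmtime(s) > os.path.getmtime(target) for s in sources)

def run(rules):
    """Execute each stale rule locally; Makeflow would instead dispatch
    the command to one of its supported execution engines."""
    for target, sources, command in rules:
        if out_of_date(target, sources):
            subprocess.run(shlex.split(command), check=True)

print(parse_rules(RULES))
```

Because the rule file names only data dependencies and commands, the same description can be replayed on a local machine, a batch cluster, or a cloud backend — which is the portability argument the abstract makes.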
Citations: 145
Evaluating parameter sweep workflows in high performance computing
Pub Date: 2012-05-20 · DOI: 10.1145/2443416.2443418
F. Chirigati, V. S. Sousa, Eduardo S. Ogasawara, Daniel de Oliveira, Jonas Dias, F. Porto, P. Valduriez, M. Mattoso
Scientific experiments based on computer simulations can be defined, executed and monitored using Scientific Workflow Management Systems (SWfMS). Several SWfMS are available, each with a different goal and a different engine. Because their analyses are exploratory, scientists need to run parameter sweep (PS) workflows, which are invoked repeatedly with different input data. These workflows generate a large number of tasks that are submitted to High Performance Computing (HPC) environments. Different execution models for a workflow may differ significantly in performance on HPC. However, selecting the best execution model for a given workflow is difficult, since many characteristics of the workflow may affect its parallel execution. We present a study of the performance impact of different execution models when running PS workflows on HPC. Our study contributes a characterization of PS workflow patterns (the basis for many existing scientific workflows) and of their behavior under different execution models in HPC. We evaluated four execution models for running workflows in parallel, measuring the performance behavior of small, large and complex workflows under each. The results can be used as a guideline for selecting the best model for a given scientific workflow execution in HPC. Our evaluation may also serve as a basis for workflow designers to analyze the expected behavior of an HPC workflow engine based on the characteristics of PS workflows.
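A parameter sweep is the cross product of its parameter ranges, with the same workflow invoked once per point — which is why PS workflows flood HPC schedulers with tasks. A minimal illustration (the parameter names and values are invented):

```python
from itertools import product

# Hypothetical parameter space for one simulation workflow.
sweep = {
    "temperature": [280, 300, 320],
    "pressure": [1.0, 2.5],
    "seed": [1, 2, 3, 4],
}

def sweep_tasks(space):
    """Yield one task (a parameter binding) per point of the cross product."""
    keys = list(space)
    for values in product(*space.values()):
        yield dict(zip(keys, values))

tasks = list(sweep_tasks(sweep))
print(len(tasks))  # 3 * 2 * 4 = 24 invocations of the same workflow
```

The paper's question is then how to map such a task set onto an HPC machine — for example one scheduler job per point versus batching many points per job — since the choice of execution model changes the observed performance.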
Citations: 15
Oozie: towards a scalable workflow management system for Hadoop
Pub Date: 2012-05-20 · DOI: 10.1145/2443416.2443420
Mohammad Islam, Angelo K. Huang, Mohamed Battisha, Michelle Chiang, Santhosh Srinivasan, Craig Peters, A. Neumann, A. Abdelnur
Hadoop is a massively scalable parallel computation platform capable of running hundreds of jobs concurrently, and many thousands of jobs per day. Managing all these computations demands a workflow and scheduling system. In this paper, we identify four indispensable qualities that a Hadoop workflow management system must fulfill: Scalability, Security, Multi-tenancy, and Operability. We find that conventional workflow management tools lack at least one of these qualities, and therefore present Apache Oozie, a workflow management system specialized for Hadoop. We discuss the architecture of Oozie, share our production experience at Yahoo over the last few years, and evaluate Oozie's scalability and performance.
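Oozie workflows are defined as XML documents in which each node names its outgoing transitions (`ok` on success, `error` on failure). The sketch below is not Oozie code and heavily simplifies the schema — the namespace and the action bodies (e.g. a MapReduce or Pig configuration) are omitted — but it shows how the control flow of such a document can be walked:

```python
import xml.etree.ElementTree as ET

# Skeletal Oozie-style workflow: a start node, two actions, kill and end nodes.
WORKFLOW = """\
<workflow-app name="demo">
  <start to="extract"/>
  <action name="extract"><ok to="load"/><error to="fail"/></action>
  <action name="load"><ok to="end"/><error to="fail"/></action>
  <kill name="fail"/>
  <end name="end"/>
</workflow-app>
"""

def happy_path(xml_text):
    """Follow the ok-transitions from <start/> until the <end/> node."""
    root = ET.fromstring(xml_text)
    actions = {a.get("name"): a for a in root.findall("action")}
    end = root.find("end").get("name")
    node, path = root.find("start").get("to"), []
    while node != end:
        path.append(node)
        node = actions[node].find("ok").get("to")
    return path

print(happy_path(WORKFLOW))  # ['extract', 'load']
```

Because every transition is explicit in the document, the server can persist a workflow's current node and resume it after a crash — one reason a declarative definition suits the operability and scalability goals the paper lists.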
Citations: 107