Comparing GPU Power and Frequency Capping: A Case Study with the MuMMI Workflow
Tapasya Patki, Zachary Frye, H. Bhatia, F. Natale, J. Glosli, Helgi I. Ingólfsson, B. Rountree
Pub Date: 2019-11-01 | DOI: 10.1109/WORKS49585.2019.00009
Accomplishing the goal of exascale computing under a potential power limit requires HPC clusters to maximize both parallel efficiency and power efficiency. As modern HPC systems embark on a trend toward extreme heterogeneity leveraging multiple GPUs per node, power management becomes even more challenging, especially when catering to scientific workflows with co-scheduled components. The impact of managing GPU power on workflow performance and run-to-run reproducibility has not been adequately studied. In this paper, we present first-of-its-kind research studying the impact of the two power management knobs available on NVIDIA Volta GPUs: frequency capping and power capping. We analyzed performance and power metrics of GPUs on a top-10 supercomputer by tuning these knobs across more than 5,300 runs of a scientific workflow. Our data show that GPU power capping in a scientific workflow is an effective way of improving power efficiency while preserving performance, whereas GPU frequency capping is a demonstrably unpredictable way of reducing power consumption. Additionally, we found that frequency capping results in higher variation and anomalous behavior on GPUs, which is counterintuitive given what has been observed in research on CPUs.
{"title":"Comparing GPU Power and Frequency Capping: A Case Study with the MuMMI Workflow","authors":"Tapasya Patki, Zachary Frye, H. Bhatia, F. Natale, J. Glosli, Helgi I. Ingólfsson, B. Rountree","doi":"10.1109/WORKS49585.2019.00009","DOIUrl":"https://doi.org/10.1109/WORKS49585.2019.00009","url":null,"abstract":"Accomplishing the goal of exascale computing under a potential power limit requires HPC clusters to maximize both parallel efficiency and power efficiency. As modern HPC systems embark on a trend toward extreme heterogeneity leveraging multiple GPUs per node, power management becomes even more challenging, especially when catering to scientific workflows with co-scheduled components. The impact of managing GPU power on workflow performance and run-to-run reproducibility has not been adequately studied. In this paper, we present a first-of-its-kind research to study the impact of the two power management knobs that are available on NVIDIA Volta GPUs: frequency capping and power capping. We analyzed performance and power metrics of GPU’s on a top-10 supercomputer by tuning these knobs for more than 5,300 runs in a scientific workflow. Our data found that GPU power capping in a scientific workflow is an effective way of improving power efficiency while preserving performance, while GPU frequency capping is a demonstrably unpredictable way of reducing power consumption. Additionally, we identified that frequency capping results in higher variation and anomalous behavior on GPUs, which is counterintuitive to what has been observed in the research conducted on CPUs.","PeriodicalId":436926,"journal":{"name":"2019 IEEE/ACM Workflows in Support of Large-Scale Science (WORKS)","volume":"218 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130391624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Top-Down Performance Analysis Methodology for Workflows: Tracking Performance Issues from Overview to Individual Operations
Ronny Tschüter, C. Herold, William Williams, Maximilian Knespel, Matthias Weber
Pub Date: 2019-11-01 | DOI: 10.1109/WORKS49585.2019.00008
Scientific workflows are well established in parallel computing. A workflow represents a conceptual description of work items and their dependencies. Researchers can use workflows to abstract away implementation details or resources and focus on the high-level behavior of their work items. Because of these abstractions and the complexity of scientific workflows, finding performance bottlenecks along with their root causes can quickly become an involved task. This work presents a top-down methodology for performance analysis of workflows to support users in this challenging task. Our work provides summarized performance metrics covering different workflow perspectives, from a general overview down to individual jobs and their job steps. These summaries allow users to identify inefficiencies and determine the responsible job steps. In addition, we record detailed performance data about job steps, enabling a fine-grained analysis of the associated execution to pinpoint performance issues exactly. The introduced methodology provides a powerful tool for comprehensive performance analysis of complex workflows.
{"title":"A Top-Down Performance Analysis Methodology for Workflows: Tracking Performance Issues from Overview to Individual Operations","authors":"Ronny Tschüter, C. Herold, William Williams, Maximilian Knespel, Matthias Weber","doi":"10.1109/WORKS49585.2019.00008","DOIUrl":"https://doi.org/10.1109/WORKS49585.2019.00008","url":null,"abstract":"Scientific workflows are well established in parallel computing. A workflow represents a conceptual description of work items and their dependencies. Researchers can use workflows to abstract away implementation details or resources to focus on the high-level behavior of their work items. Due to these abstractions and the complexity of scientific workflows, finding performance bottlenecks along with their root causes can quickly become involving. This work presents a top-down methodology for performance analysis of workflows to support users in this challenging task. Our work provides summarized performance metrics covering different workflow perspectives, from general overview to individual jobs and their job steps. These summaries allow to identify inefficiencies and determine the responsible job steps. In addition, we record detailed performance data about job steps, enabling a fine-grained analysis of the associated execution to exactly pinpoint performance issues. The introduced methodology provides a powerful tool for comprehensive performance analysis of complex workflows.","PeriodicalId":436926,"journal":{"name":"2019 IEEE/ACM Workflows in Support of Large-Scale Science (WORKS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129249331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Codesign Framework for Online Data Analysis and Reduction
Kshitij Mehta, Ian T Foster, S. Klasky, B. Allen, M. Wolf, Jeremy S. Logan, E. Suchyta, J. Choi, Keichi Takahashi, I. Yakushin, T. Munson
Pub Date: 2019-11-01 | DOI: 10.1109/WORKS49585.2019.00007
In this paper we discuss our design of a toolset for automating performance studies of composed HPC applications that perform online data reduction and analysis. We describe Cheetah, a new framework for performing parametric studies on coupled applications. Cheetah facilitates understanding the impact of various factors such as process placement, synchronicity of algorithms, and storage vs. compute requirements for online analysis of large data. Ultimately, we aim to create a catalog of performance results that can help scientists understand tradeoffs when designing next-generation simulations that make use of online processing techniques. We illustrate the design choices of Cheetah by using a reaction-diffusion simulation (Gray-Scott) paired with an analysis application to demonstrate initial results of fine-grained process placement on Summit, a pre-exascale supercomputer at Oak Ridge National Laboratory.
{"title":"A Codesign Framework for Online Data Analysis and Reduction","authors":"Kshitij Mehta, Ian T Foster, S. Klasky, B. Allen, M. Wolf, Jeremy S. Logan, E. Suchyta, J. Choi, Keichi Takahashi, I. Yakushin, T. Munson","doi":"10.1109/WORKS49585.2019.00007","DOIUrl":"https://doi.org/10.1109/WORKS49585.2019.00007","url":null,"abstract":"In this paper we discuss our design of a toolset for automating performance studies of composed HPC applications that perform online data reduction and analysis. We describe Cheetah, a new framework for performing parametric studies on coupled applications. Cheetah facilitates understanding the impact of various factors such as process placement, synchronicity of algorithms, and storage vs. compute requirements for online analysis of large data. Ultimately, we aim to create a catalog of performance results that can help scientists understand tradeoffs when designing next-generation simulations that make use of online processing techniques. We illustrate the design choices of Cheetah by using a reaction-diffusion simulation (Gray-Scott) paired with an analysis application to demonstrate initial results of fine-grained process placement on Summit, a pre-exascale supercomputer at Oak Ridge National Laboratory.","PeriodicalId":436926,"journal":{"name":"2019 IEEE/ACM Workflows in Support of Large-Scale Science (WORKS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116445760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On a Parallel Spark Workflow for Frequent Itemset Mining Based on Array Prefix-Tree
Xinzheng Niu, Mideng Qian, C. Wu, Aiqin Hou
Pub Date: 2019-11-01 | DOI: 10.1109/WORKS49585.2019.00011
Frequent Itemset Mining (FIM) is a fundamental procedure in various data mining techniques such as association rule mining. Among many existing algorithms, FP-Growth is considered a milestone achievement that discovers frequent itemsets without generating candidates. However, due to the high complexity of its mining process and the high cost of its memory usage, FP-Growth still suffers from a performance bottleneck when dealing with large datasets. In this paper, we design a new Array Prefix-Tree structure and, based on it, propose an Array Prefix-Tree Growth (APT-Growth) algorithm, which explicitly obviates the need to recursively construct conditional FP-Trees as required by FP-Growth. To support big data analytics, we further design and implement a parallel version of APT-Growth, referred to as PAPT-Growth, as a Spark workflow. We conduct FIM workflow experiments on both real-life and synthetic datasets for performance evaluation, and extensive results show that PAPT-Growth outperforms other representative parallel FIM algorithms in terms of execution time, which sheds light on its potential applications to big data mining.
{"title":"On a Parallel Spark Workflow for Frequent Itemset Mining Based on Array Prefix-Tree","authors":"Xinzheng Niu, Mideng Qian, C. Wu, Aiqin Hou","doi":"10.1109/WORKS49585.2019.00011","DOIUrl":"https://doi.org/10.1109/WORKS49585.2019.00011","url":null,"abstract":"Frequent Itemset Mining (FIM) is a fundamental procedure in various data mining techniques such as association rule mining. Among many existing algorithms, FP-Growth is considered as a milestone achievement that discovers frequenti temsets without generating candidates. However, due to the high complexity of its mining process and the high cost of its memory usage, FP-Growth still suffers from a performance bottleneck when dealing with large datasets. In this paper, we design a new Array Prefix-Tree structure, and based on that, propose an Array Prefix-Tree Growth (APT-Growth) algorithm, which explicitly obviates the need of recursively constructing conditional FP-Tree as required by FP-Growth. To support big data analytics, we further design and implement a parallel version of APTGrowth, referred to as PAPT-Growth, as a Spark workflow. We conduct FIM workflow experiments on both real-life and synthetic datasets for performance evaluation, and extensive results show that PAPT-Growth outperforms other representative parallel FIM algorithms in terms of execution time, which sheds light on its potential applications to big data mining.","PeriodicalId":436926,"journal":{"name":"2019 IEEE/ACM Workflows in Support of Large-Scale Science (WORKS)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122058507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Provenance Data in the Machine Learning Lifecycle in Computational Science and Engineering
Renan Souza, L. Azevedo, Vítor Lourenço, E. Soares, R. Thiago, R. Brandão, D. Civitarese, E. V. Brazil, M. Moreno, P. Valduriez, M. Mattoso, Renato Cerqueira, M. Netto
Pub Date: 2019-10-09 | DOI: 10.1109/WORKS49585.2019.00006
Machine Learning (ML) has become essential in several industries. In Computational Science and Engineering (CSE), the complexity of the ML lifecycle comes from the large variety of data, scientists' expertise, tools, and workflows. If data are not tracked properly during the lifecycle, it becomes unfeasible to recreate an ML model from scratch or to explain to stakeholders how it was created. The main limitation of provenance tracking solutions is that they cannot cope with the capture and integration of domain and ML data processed in the multiple workflows of the lifecycle while keeping the provenance capture overhead low. To address this problem, in this paper we contribute a detailed characterization of provenance data in the ML lifecycle in CSE; a new provenance data representation, called PROV-ML, built on top of W3C PROV and ML Schema; and extensions to a system that tracks provenance from multiple workflows to address the characteristics of ML and CSE and to allow for provenance queries with a standard vocabulary. We show a practical use in a real case in the O&G industry, along with its evaluation using 239,616 CUDA cores in parallel.
{"title":"Provenance Data in the Machine Learning Lifecycle in Computational Science and Engineering","authors":"Renan Souza, L. Azevedo, Vítor Lourenço, E. Soares, R. Thiago, R. Brandão, D. Civitarese, E. V. Brazil, M. Moreno, P. Valduriez, M. Mattoso, Renato Cerqueira, M. Netto","doi":"10.1109/WORKS49585.2019.00006","DOIUrl":"https://doi.org/10.1109/WORKS49585.2019.00006","url":null,"abstract":"Machine Learning (ML) has become essential in several industries. In Computational Science and Engineering (CSE), the complexity of the ML lifecycle comes from the large variety of data, scientists' expertise, tools, and workflows. If data are not tracked properly during the lifecycle, it becomes unfeasible to recreate a ML model from scratch or to explain to stackholders how it was created. The main limitation of provenance tracking solutions is that they cannot cope with provenance capture and integration of domain and ML data processed in the multiple workflows in the lifecycle, while keeping the provenance capture overhead low. To handle this problem, in this paper we contribute with a detailed characterization of provenance data in the ML lifecycle in CSE; a new provenance data representation, called PROV-ML, built on top of W3C PROV and ML Schema; and extensions to a system that tracks provenance from multiple workflows to address the characteristics of ML and CSE, and to allow for provenance queries with a standard vocabulary. We show a practical use in a real case in the O&G industry, along with its evaluation using 239,616 CUDA cores in parallel.","PeriodicalId":436926,"journal":{"name":"2019 IEEE/ACM Workflows in Support of Large-Scale Science (WORKS)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126557660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Performance Comparison of Dask and Apache Spark for Data-Intensive Neuroimaging Pipelines
Mathieu Dugré, Valérie Hayot-Sasson, T. Glatard
Pub Date: 2019-07-30 | DOI: 10.1109/WORKS49585.2019.00010
In the past few years, neuroimaging has entered the Big Data era due to the joint increase in image resolution, data sharing, and study sizes. However, no particular Big Data engine has emerged in this field, and several alternatives remain available. We compare two popular Big Data engines with Python APIs, Apache Spark and Dask, for their runtime performance in processing neuroimaging pipelines. Our evaluation uses two synthetic pipelines processing the 81 GB BigBrain image and a real pipeline processing anatomical data from more than 1,000 subjects. We benchmark these pipelines using various combinations of task durations, data sizes, and numbers of workers, deployed on an 8-node (8 cores each) compute cluster in Compute Canada's Arbutus cloud. We evaluate PySpark's RDD API against Dask's Bag, Delayed, and Futures APIs. Results show that despite slight differences between Spark and Dask, both engines perform comparably. However, Dask pipelines risk being limited by Python's GIL depending on task type and cluster configuration. In all cases, the major limiting factor was data transfer. While either engine is suitable for neuroimaging pipelines, more effort needs to be placed on reducing data transfer time.
{"title":"A Performance Comparison of Dask and Apache Spark for Data-Intensive Neuroimaging Pipelines","authors":"Mathieu Dugré, Valérie Hayot-Sasson, T. Glatard","doi":"10.1109/WORKS49585.2019.00010","DOIUrl":"https://doi.org/10.1109/WORKS49585.2019.00010","url":null,"abstract":"In the past few years, neuroimaging has entered the Big Data era due to the joint increase in image resolution, data sharing, and study sizes. However, no particular Big Data engines have emerged in this field, and several alternatives remain available. We compare two popular Big Data engines with Python APIs, Apache Spark and Dask, for their runtime performance in processing neuroimaging pipelines. Our evaluation uses two synthetic pipelines processing the 81GB BigBrain image, and a real pipeline processing anatomical data from more than 1,000 subjects. We benchmark these pipelines using various combinations of task durations, data sizes, and numbers of workers, deployed on an 8-node (8 cores ea.) compute cluster in Compute Canada's Arbutus cloud. We evaluate PySpark's RDD API against Dask's Bag, Delayed and Futures. Results show that despite slight differences between Spark and Dask, both engines perform comparably. However, Dask pipelines risk being limited by Python's GIL depending on task type and cluster configuration. In all cases, the major limiting factor was data transfer. While either engine is suitable for neuroimaging pipelines, more effort needs to be placed in reducing data transfer time.","PeriodicalId":436926,"journal":{"name":"2019 IEEE/ACM Workflows in Support of Large-Scale Science (WORKS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133844088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}