
2019 IEEE/ACM Fourth International Parallel Data Systems Workshop (PDSW): Latest Publications

Understanding Data Motion in the Modern HPC Data Center
Pub Date : 2019-11-01 DOI: 10.1109/PDSW49588.2019.00012
Glenn K. Lockwood, S. Snyder, S. Byna, P. Carns, N. Wright
The utilization and performance of storage, compute, and network resources within HPC data centers have been studied extensively, but much less work has gone toward characterizing how these resources are used in conjunction to solve larger scientific challenges. To address this gap, we present our work in characterizing workloads and workflows at a data-center-wide level by examining all data transfers that occurred between storage, compute, and the external network at the National Energy Research Scientific Computing Center over a three-month period in 2019. Using a simple abstract representation of data transfers, we analyze over 100 million transfer logs from Darshan, HPSS user interfaces, and Globus to quantify the load on data paths between compute, storage, and the wide-area network based on transfer direction, user, transfer tool, source, destination, and time. We show that parallel I/O from user jobs, while undeniably important, is only one of several major I/O workloads that occur throughout the execution of scientific workflows. We also show that this approach can be used to connect anomalous data traffic to specific users and file access patterns, and we construct time-resolved user transfer traces to demonstrate that one can systematically identify coupled data motion for individual workflows.
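As a rough illustration of the kind of aggregation such an abstract representation enables, the sketch below groups hypothetical transfer records by data path and builds a time-ordered per-user trace. The record fields, tool names, and example values are assumptions for illustration; the authors' actual Darshan, HPSS, and Globus log schemas and analysis pipeline are not shown.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical, simplified transfer records; real Darshan, HPSS, and Globus
# logs have richer schemas and must first be normalized to a common form.
transfers = [
    {"user": "u1", "tool": "globus", "src": "wan", "dst": "scratch",
     "bytes": 3_200_000_000, "start": datetime(2019, 4, 2, 10, 15)},
    {"user": "u2", "tool": "darshan", "src": "compute", "dst": "scratch",
     "bytes": 750_000_000, "start": datetime(2019, 4, 2, 11, 5)},
    {"user": "u1", "tool": "hpss", "src": "scratch", "dst": "tape",
     "bytes": 9_600_000_000, "start": datetime(2019, 4, 3, 1, 30)},
]

def load_by_path(records):
    """Total bytes moved over each (source, destination) data path."""
    totals = defaultdict(int)
    for r in records:
        totals[(r["src"], r["dst"])] += r["bytes"]
    return dict(totals)

def user_trace(records, user):
    """Time-ordered transfer trace for one user, for spotting coupled data motion."""
    mine = [r for r in records if r["user"] == user]
    return sorted(mine, key=lambda r: r["start"])

print(load_by_path(transfers))
for r in user_trace(transfers, "u1"):
    print(r["start"], r["src"], "->", r["dst"], r["bytes"])
```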
Citations: 9
Towards Physical Design Management in Storage Systems
Pub Date : 2019-11-01 DOI: 10.1109/PDSW49588.2019.00009
K. Dahlgren, J. LeFevre, Ashay Shirwadkar, Ken Iizawa, Aldrin Montana, P. Alvaro, C. Maltzahn
In the post-Moore era, systems and devices with new architectures will arrive at a rapid rate with significant impacts on the software stack. Applications will not be able to fully benefit from new architectures unless they can delegate adapting to new devices to lower layers of the stack. In this paper we introduce physical design management, which deals with the problem of identifying and executing transformations on physical designs of stored data, i.e., how data is mapped to storage abstractions like files, objects, or blocks, in order to improve performance. Physical design is traditionally placed with applications, access libraries, and databases, using hard-wired assumptions about underlying storage systems. Yet, storage systems increasingly not only contain multiple kinds of storage devices with vastly different performance profiles but also move data among those storage devices, thereby changing the benefit of a particular physical design. We advocate placing physical design management in storage, identify interesting research challenges, provide a brief description of a prototype implementation in Ceph, and discuss the results of initial experiments at scale that are replicable using CloudLab. These experiments show the performance and resource utilization trade-offs associated with choosing different physical designs and choosing to transform between physical designs.
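As a toy illustration of a physical design transformation (not the paper's Ceph prototype; all names here are hypothetical), the sketch below remaps a row-oriented layout of a small table to a column-oriented one, the sort of layout change whose cost and benefit a physical design manager would weigh.

```python
# Hypothetical sketch: the same logical table stored under two physical designs.
rows = [
    {"id": 1, "temp": 21.5, "flag": 0},
    {"id": 2, "temp": 19.8, "flag": 1},
    {"id": 3, "temp": 22.1, "flag": 0},
]

def to_columnar(row_records):
    """Transform a row-major physical design into a column-major one.
    Analytical scans over a single field then touch far less data."""
    columns = {}
    for rec in row_records:
        for key, value in rec.items():
            columns.setdefault(key, []).append(value)
    return columns

columnar = to_columnar(rows)
# A scan over "temp" now reads one contiguous list instead of every row.
print(sum(columnar["temp"]) / len(columnar["temp"]))
```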
Citations: 3
Applying Machine Learning to Understand Write Performance of Large-scale Parallel Filesystems
Pub Date : 2019-11-01 DOI: 10.1109/PDSW49588.2019.00008
Bing Xie, Zilong Tan, P. Carns, J. Chase, K. Harms, J. Lofstead, S. Oral, Sudharshan S. Vazhkudai, Feiyi Wang
In high-performance computing (HPC), I/O performance prediction offers the potential to improve the efficiency of scientific computing. In particular, accurate prediction can make runtime estimates more precise, guide users toward optimal checkpoint strategies, and better inform facility provisioning and scheduling policies. HPC I/O performance is notoriously difficult to predict and model, however, in large part because of inherent variability and a lack of transparency in the behaviors of constituent storage system components. In this work we seek to advance the state of the art in HPC I/O performance prediction by (1) modeling the mean performance to address high variability, (2) deriving model features from write patterns, system architecture, and system configurations, and (3) employing a Lasso regression model to improve model accuracy. We demonstrate the efficacy of our approach by applying it to a crucial subset of common HPC I/O motifs, namely, file-per-process checkpoint write workloads. We conduct experiments on two distinct production HPC platforms, Titan at the Oak Ridge Leadership Computing Facility and Cetus at the Argonne Leadership Computing Facility, to train and evaluate our models. We find that we can attain ≤ 30% relative error for 92.79% and 99.64% of the samples in our test set on these platforms, respectively.
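A minimal sketch of the modeling step, using synthetic features as stand-ins for write-pattern and system-configuration descriptors; the paper's actual feature set, training data from Titan and Cetus, and tuning procedure are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for features such as aggregate write volume, writer count,
# stripe settings, and burstiness; the target is the mean write time of a run.
n = 500
X = rng.uniform(size=(n, 4))
y = 5.0 + 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Lasso(alpha=0.01)        # the L1 penalty drives uninformative features to zero
model.fit(X_train, y_train)

rel_err = np.abs(model.predict(X_test) - y_test) / np.abs(y_test)
print("learned coefficients:", model.coef_)
print("fraction of test samples within 30% relative error:", float(np.mean(rel_err <= 0.30)))
```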
Citations: 10
A Foundation for Automated Placement of Data
Pub Date : 2019-11-01 DOI: 10.1109/PDSW49588.2019.00010
Douglas Otstott, Ming Zhao, Latchesar Ionkov
With the increasing complexity of memory and storage, it is important to automate the decision of how to assign data structures to memory and storage devices. On one hand, this requires developing models to reconcile application access patterns against the limited capacity of higher-performance devices. On the other, such a modeling task demands a set of primitives to build from, and a toolkit that implements those primitives in a robust, dynamic fashion. We focus on the latter problem, and to that end we present an interface that abstracts the physical layout of data from the application developer. This will allow developers focused on optimized data placement to use our abstracta as the basis for their implementation, while application developers will see a unified, scalable, and resilient memory environment.
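A toy sketch of the kind of interface this suggests, with purely hypothetical primitive and device names: the application describes a data region and an access hint, and a placement policy, rather than the application, decides which device the region lands on.

```python
# Hypothetical placement primitives; device names, capacities, and the greedy
# policy below are illustrative only.
DEVICES = {"hbm": 16, "dram": 192, "nvme": 1500}   # capacity in GB
_allocated = {name: 0 for name in DEVICES}

def place(region_name, size_gb, hotness):
    """Assign a data region to the fastest tier with room, keeping hot data up high."""
    order = ["hbm", "dram", "nvme"] if hotness > 0.5 else ["dram", "nvme", "hbm"]
    for dev in order:
        if _allocated[dev] + size_gb <= DEVICES[dev]:
            _allocated[dev] += size_gb
            return dev
    raise MemoryError(f"no device can hold {region_name}")

# The application only describes its data; the layout decision is made here.
print(place("particle_positions", 12, hotness=0.9))   # likely lands in hbm
print(place("checkpoint_buffer", 300, hotness=0.2))   # too big for dram, lands in nvme
```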
Citations: 0
Profiling Platform Storage Using IO500 and Mistral
Pub Date : 2019-11-01 DOI: 10.1109/PDSW49588.2019.00011
Nolan D. Monnier, J. Lofstead, Margaret Lawson, M. Curry
This paper explores how we used IO500 and the Mistral tool from Ellexus to observe detailed performance characteristics to inform IO performance tuning on Astra, an ARM-based Sandia machine with an all-flash, Lustre-based storage array. Through this case study, we demonstrate that IO500 serves as a meaningful storage benchmark, even for all-flash storage. We also demonstrate that using fine-grained profiling tools, such as Mistral, is essential for revealing tuning requirement details. Overall, this paper demonstrates the value of a broad-spectrum benchmark like IO500, together with a fine-grained performance analysis tool such as Mistral, for understanding detailed storage system performance for better informed tuning.
Citations: 4
Enabling Transparent Asynchronous I/O using Background Threads
Pub Date : 2019-11-01 DOI: 10.1109/PDSW49588.2019.00006
Houjun Tang, Q. Koziol, S. Byna, J. Mainzer, Tonglin Li
With scientific applications moving toward exascale levels, an increasing amount of data is being produced and analyzed. Providing efficient data access is crucial to the productivity of the scientific discovery process. Compared to improvements in CPU and network speeds, I/O performance lags far behind, such that moving data across the storage hierarchy can take longer than data generation or analysis. To alleviate this I/O bottleneck, asynchronous read and write operations have been provided by the POSIX and MPI-I/O interfaces and can overlap I/O operations with computation, and thus hide I/O latency. However, these standards lack support for non-data operations such as file open, stat, and close, and their read and write operations require users to both manually manage data dependencies and use low-level byte offsets. This requires significant effort and expertise for applications to utilize. To overcome these issues, we present an asynchronous I/O framework that provides support for all I/O operations and manages data dependencies transparently and automatically. Our prototype asynchronous I/O implementation as an HDF5 VOL connector demonstrates the effectiveness of hiding the I/O cost from the application with low overhead and an easy-to-use programming interface.
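A minimal Python sketch of the background-thread idea, assuming toy file operations and in-memory dependency tracking (this is not the HDF5 VOL connector described in the paper): submissions return immediately, a background thread drains the queue, and an operation that depends on earlier ones is issued only after they complete.

```python
import threading, queue

class AsyncIO:
    """Toy asynchronous I/O engine: calls return at once; a background thread does the work."""

    def __init__(self):
        self.q = queue.Queue()
        self.done = {}                       # operation id -> completion event
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, op_id, func, deps=()):
        """Enqueue an operation; the caller keeps computing without blocking."""
        self.done[op_id] = threading.Event()
        self.q.put((op_id, func, deps))
        return op_id

    def _worker(self):
        while True:
            op_id, func, deps = self.q.get()
            for d in deps:                   # honor data dependencies transparently
                self.done[d].wait()          # (assumes dependencies were submitted earlier)
            func()
            self.done[op_id].set()

    def wait(self, op_id):
        """Block only when the application actually needs the operation's result."""
        self.done[op_id].wait()

aio = AsyncIO()
aio.submit("create", lambda: open("demo.dat", "wb").close())
aio.submit("write", lambda: open("demo.dat", "ab").write(b"x" * 1024), deps=("create",))
print("computation overlaps with the queued I/O...")
aio.wait("write")
```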
Citations: 11
Active Learning-based Automatic Tuning and Prediction of Parallel I/O Performance
Pub Date : 2019-11-01 DOI: 10.1109/PDSW49588.2019.00007
Megha Agarwal, Divyansh Singhvi, Preeti Malakar, S. Byna
Parallel I/O is an indispensable part of scientific applications. The current parallel I/O stack contains many tunable parameters. While changing these parameters can increase I/O performance many-fold, application developers usually resort to default values because tuning is a cumbersome process and requires expertise. We propose two auto-tuning models based on active learning that recommend a good set of parameter values (currently tested with Lustre parameters and MPI-IO hints) for an application on a given system. These models use Bayesian optimization to find the values of parameters by minimizing an objective function. The first model runs the application to determine these values, whereas the second model uses an I/O prediction model for the same purpose, so its training time is significantly reduced compared to the first model (e.g., from 800 seconds to 18 seconds). Both models also provide the flexibility to focus on improving either read or write performance. To keep the tuning process generic, we have focused on both read and write performance. We have validated our models using an I/O benchmark (IOR) and 3 scientific application I/O kernels (S3D-IO, BT-IO and GenericIO) on two supercomputers (HPC2010 and Cori). Using the two models, we achieve an increase in I/O bandwidth of up to 11× over the default parameters. We obtained up to 3× improvements for 37 TB writes, corresponding to 1 billion particles in GenericIO. We also achieved up to 3.2× higher bandwidth for 4.8 TB of noncontiguous I/O in the BT-IO benchmark.
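A compact sketch of such an optimization loop, assuming a synthetic cost function in place of actually running an application or an I/O prediction model; the candidate stripe settings, the Gaussian-process surrogate, and the acquisition rule below are illustrative, not the paper's models.

```python
import numpy as np
from itertools import product
from sklearn.gaussian_process import GaussianProcessRegressor

# Illustrative tunables: Lustre-style stripe count and stripe size (MB).
candidates = np.array(list(product([1, 2, 4, 8, 16, 32], [1, 4, 16, 64])), dtype=float)

def measure(params):
    """Stand-in for running the application (or an I/O prediction model) and timing its I/O."""
    count, size = params
    noise = np.random.default_rng(int(count * size)).normal(0, 0.1)
    return 64.0 / count + abs(size - 16) * 0.05 + noise

rng = np.random.default_rng(1)
tried = list(rng.choice(len(candidates), size=4, replace=False))   # random initial samples
observed = [measure(candidates[i]) for i in tried]

for _ in range(8):
    # Fit a surrogate of the objective, then pick the most promising untried point.
    gp = GaussianProcessRegressor(normalize_y=True).fit(candidates[tried], observed)
    mean, std = gp.predict(candidates, return_std=True)
    score = mean - std                       # optimistic lower bound: low predicted time wins
    best = min((i for i in range(len(candidates)) if i not in tried), key=lambda i: score[i])
    tried.append(best)
    observed.append(measure(candidates[best]))

best_idx = tried[int(np.argmin(observed))]
print("best (stripe_count, stripe_size_MB):", candidates[best_idx], "time:", float(min(observed)))
```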
Citations: 10
In Search of a Fast and Efficient Serverless DAG Engine
Pub Date : 2019-10-14 DOI: 10.1109/PDSW49588.2019.00005
Benjamin Carver, Jingyuan Zhang, Ao Wang, Yue Cheng
Python-written data analytics applications can be modeled as and compiled into a directed acyclic graph (DAG) based workflow, where the nodes are fine-grained tasks and the edges are task dependencies. Such analytics workflow jobs are increasingly characterized by short, fine-grained tasks with large fan-outs. These characteristics make them well-suited for a new cloud computing model called serverless computing or Function-as-a-Service (FaaS), which has become prevalent in recent years. The auto-scaling property of serverless computing platforms accommodates short tasks and bursty workloads, while the pay-per-use billing model of serverless computing providers keeps the cost of short tasks low. In this paper, we thoroughly investigate the problem space of DAG scheduling in serverless computing. We identify and evaluate a set of techniques to make DAG schedulers serverless-aware. These techniques have been implemented in WUKONG, a serverless DAG scheduler attuned to AWS Lambda. WUKONG provides decentralized scheduling through a combination of static and dynamic scheduling. We present the results of an empirical study in which WUKONG is applied to a range of microbenchmark and real-world DAG applications. Results demonstrate the efficacy of WUKONG in minimizing the performance overhead introduced by AWS Lambda: WUKONG achieves competitive performance compared to a serverful DAG scheduler, while improving the performance of real-world DAG jobs by as much as 4.1x at larger scale.
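A small sketch of dependency-driven DAG execution, using a local thread pool as a stand-in for invoking serverless functions; the workflow, task bodies, and scheduling policy below are illustrative and are not WUKONG's AWS Lambda implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative workflow: two independent loads fan in to a join, whose result
# fans out to two analyses. Each node maps to (function, upstream dependencies).
dag = {
    "load_a": (lambda deps: list(range(5)), []),
    "load_b": (lambda deps: list(range(5, 10)), []),
    "join":   (lambda deps: deps["load_a"] + deps["load_b"], ["load_a", "load_b"]),
    "sum":    (lambda deps: sum(deps["join"]), ["join"]),
    "max":    (lambda deps: max(deps["join"]), ["join"]),
}

def run(dag, workers=4):
    """Run every task as soon as its parents finish (thread pool stands in for FaaS invocations)."""
    futures = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        def task(name):
            func, deps = dag[name]
            inputs = {d: futures[d].result() for d in deps}   # block only on this task's parents
            return func(inputs)
        for name in dag:                  # the dict above is written in topological order
            futures[name] = pool.submit(task, name)
        return {name: fut.result() for name, fut in futures.items()}

print(run(dag))
```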
Citations: 27