NekRS, a GPU-accelerated spectral element Navier–Stokes solver
Pub Date: 2022-12-01  DOI: 10.1016/j.parco.2022.102982
Paul Fischer , Stefan Kerkemeier , Misun Min , Yu-Hsiang Lan , Malachi Phillips , Thilina Rathnayake , Elia Merzari , Ananias Tomboulides , Ali Karakus , Noel Chalmers , Tim Warburton
The development of NekRS, a GPU-oriented thermal-fluids simulation code based on the spectral element method (SEM), is described. For performance portability, the code is based on the open concurrent compute abstraction (OCCA) and leverages scalable developments in the SEM code Nek5000 and in libParanumal, a library of high-performance kernels for high-order discretizations and PDE-based miniapps. Critical performance sections of the Navier–Stokes time advancement are addressed. Performance results on several platforms are presented, including scaling to 27,648 V100s on OLCF Summit for calculations of up to 60B grid points (240B degrees of freedom).
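The workhorse of any SEM code is the tensor-product application of small 1D operator matrices to every element. As a rough, hypothetical illustration (plain C++, not NekRS's actual OCCA kernels), applying a 1D derivative matrix D along one dimension of an N×N×N element costs O(N^4) work per element, versus the O(N^6) a fully assembled 3D operator would need:

```cpp
#include <array>
#include <cstdio>

constexpr int N = 8;  // points per direction (polynomial order + 1, assumed)

using Elem  = std::array<double, N * N * N>;
using Mat1D = std::array<double, N * N>;

// du(i,j,k) = sum_m D(i,m) u(m,j,k), with i the fastest-varying index
void apply_Dr(const Mat1D& D, const Elem& u, Elem& du) {
    for (int k = 0; k < N; ++k)
        for (int j = 0; j < N; ++j)
            for (int i = 0; i < N; ++i) {
                double s = 0.0;
                for (int m = 0; m < N; ++m)
                    s += D[i * N + m] * u[(k * N + j) * N + m];
                du[(k * N + j) * N + i] = s;
            }
}

int main() {
    Mat1D D{};  // identity matrix: output must equal input, a cheap sanity check
    for (int i = 0; i < N; ++i) D[i * N + i] = 1.0;
    Elem u{}, du{};
    for (int p = 0; p < N * N * N; ++p) u[p] = 0.01 * p;
    apply_Dr(D, u, du);
    std::printf("du[123] = %.2f (expect %.2f)\n", du[123], u[123]);
}
```

The same structure is applied per dimension, which is what makes the method both compute-dense and amenable to GPU tensor contractions.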
{"title":"NekRS, a GPU-accelerated spectral element Navier–Stokes solver","authors":"Paul Fischer , Stefan Kerkemeier , Misun Min , Yu-Hsiang Lan , Malachi Phillips , Thilina Rathnayake , Elia Merzari , Ananias Tomboulides , Ali Karakus , Noel Chalmers , Tim Warburton","doi":"10.1016/j.parco.2022.102982","DOIUrl":"10.1016/j.parco.2022.102982","url":null,"abstract":"<div><p><span><span>The development of NekRS, a GPU-oriented thermal-fluids simulation code based on the spectral element method (SEM) is described. For performance portability, the code is based on the open concurrent compute abstraction and leverages scalable developments in the SEM code Nek5000 and in libParanumal, which is a library of high-performance kernels for high-order </span>discretizations and PDE-based miniapps. Critical performance sections of the Navier–Stokes </span>time advancement are addressed. Performance results on several platforms are presented, including scaling to 27,648 V100s on OLCF Summit, for calculations of up to 60B grid points (240B degrees-of-freedom).</p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"114 ","pages":"Article 102982"},"PeriodicalIF":1.4,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81085812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SGPM: A coroutine framework for transaction processing
Pub Date: 2022-12-01  DOI: 10.1016/j.parco.2022.102980
Xinyuan Wang, Hejiao Huang
Coroutines can increase program concurrency and processor core utilization. However, when adapting to the coroutine-to-transaction model, existing coroutine packages have the following disadvantages: (1) Additional scheduler threads incur synchronization overhead when the load between scheduler threads and worker threads is unbalanced. (2) Coroutines are swapped out periodically to prevent deadlocks, which increases the conflict rate by adding suspended transactions. (3) Supporting only a swap-out function (yield, await, etc.) cannot flexibly control when a transaction is swapped back in. In this paper, we present SGPM, a coroutine framework for transaction processing. To adapt to the coroutine-to-transaction model, SGPM has the following properties: First, it eliminates scheduler threads and the periodic coroutine switch. Second, it provides a variety of coroutine scheduling strategies so that all types of concurrency control protocols run reasonably on SGPM. We implement eight well-known concurrency control protocols on SGPM and, in particular, use SGPM to optimize the performance of the four wound-wait protocols among them: 2PL, SS2PL, Calvin, and EWV. The experimental results demonstrate that after SGPM optimization, 2PL and SS2PL outperform OCC and MVCC, and the throughput of Calvin and EWV is improved by 1.2x and 1.3x, respectively.
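As a minimal single-worker sketch of the coroutine-to-transaction model (assumed semantics in C++20, not SGPM's actual API), the key move is that a transaction suspends itself on a lock conflict, so the worker switches to another transaction instead of blocking or being swapped out on a timer:

```cpp
#include <coroutine>
#include <deque>
#include <cstdio>

struct Txn {
    struct promise_type {
        Txn get_return_object() {
            return {std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
    std::coroutine_handle<promise_type> handle;
};

static std::deque<std::coroutine_handle<>> ready;  // per-worker run queue
static bool locked = false;                        // one record's lock (toy)

struct SwapOut {  // awaitable: requeue this transaction, let the next one run
    bool await_ready() { return false; }
    void await_suspend(std::coroutine_handle<> h) { ready.push_back(h); }
    void await_resume() {}
};

Txn transaction(int id) {
    while (locked)
        co_await SwapOut{};  // conflict: yield instead of spinning or blocking
    locked = true;           // acquire
    co_await SwapOut{};      // pretend to do work that spans a suspension
    std::printf("txn %d commits\n", id);
    locked = false;          // release at commit
}

int main() {
    Txn t1 = transaction(1), t2 = transaction(2);
    ready.push_back(t1.handle);
    ready.push_back(t2.handle);
    while (!ready.empty()) {  // the worker thread's scheduling loop
        auto h = ready.front();
        ready.pop_front();
        h.resume();
        if (h.done()) h.destroy();
    }
}
```

Note there is no separate scheduler thread here: the worker's own loop decides when a suspended transaction is swapped back in, which is the flexibility point (3) above is about.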
{"title":"SGPM: A coroutine framework for transaction processing","authors":"Xinyuan Wang, Hejiao Huang","doi":"10.1016/j.parco.2022.102980","DOIUrl":"10.1016/j.parco.2022.102980","url":null,"abstract":"<div><p><span><span>Coroutine is able to increase program concurrency and processor core utilization. However, for adapting the coroutine-to-transaction model, the existing coroutine package has the following disadvantages: (1) Additional scheduler threads incur synchronization overhead when the load between scheduler threads and worker threads is unbalanced. (2) Coroutines are swapped out periodically to prevent </span>deadlocks, which will increase the conflict rate by adding suspended transactions. (3) Supporting only the swap-out function (yield, await, etc.) cannot flexibly control the transaction swap-in time. In this paper, we present SGPM, a coroutine framework for </span>transaction processing<span>. To adapt to the coroutine-to-transaction model, SGPM has the following properties: First, it eliminates scheduler threads and the periodic coroutine switch. Second, it provides a variety of coroutine scheduling strategies to make all types of concurrency control protocols run on SGPM reasonably. We implement eight well-known concurrency control on SGPM and, particularly, we use SGPM to optimize the performance of four wound-wait concurrency control among them, including 2PL, SS2PL, Calvin, and EWV. The experiment result demonstrates that after SGPM optimization 2PL and SS2PL outperform OCC and MVCC, and the throughput of Calvin and EWV is also improved by 1.2x and 1.3x respectively.</span></p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"114 ","pages":"Article 102980"},"PeriodicalIF":1.4,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77557910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tausch: A halo exchange library for large heterogeneous computing systems using MPI, OpenCL, and CUDA
Pub Date: 2022-12-01  DOI: 10.1016/j.parco.2022.102973
Lukas Spies , Amanda Bienz , David Moulton , Luke Olson , Andrew Reisner
Exchanging halo data is a common task in modern scientific computing applications, and efficient handling of this operation is critical for the performance of the overall simulation. Tausch is a novel header-only library that provides a simple API for efficiently handling these types of data movements. Tausch supports not only simple CPU-only systems but also more complex heterogeneous systems with both CPUs and GPUs. It currently supports both OpenCL and CUDA for communicating with GPGPU devices, and allows for communication between GPGPUs and CPUs. The API allows for drop-in replacement in existing codes and can be used as the communication layer in new codes. This paper provides an overview of the approach taken in Tausch, and a performance analysis that demonstrates expected and achieved performance. We highlight the ease of use and performance with three applications: first, Tausch is compared to the halo exchange framework from two Mantevo applications, HPCCG and miniFE; then it is used to replace a legacy halo exchange library in the flexible multigrid solver framework Cedar.
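For context, the pattern such a library automates looks roughly like the hand-rolled MPI sketch below (a 1D strip with single-cell halos, so "packing" is trivial; Tausch's own API and its GPU paths are not shown):

```cpp
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int ny = 16;  // owned cells per rank
    // local 1D strip layout: [left ghost | ny owned cells | right ghost]
    std::vector<double> u(ny + 2, rank);

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    // send my first owned cell left; receive the right neighbor's into my right ghost
    MPI_Sendrecv(&u[1],      1, MPI_DOUBLE, left,  0,
                 &u[ny + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // send my last owned cell right; receive the left neighbor's into my left ghost
    MPI_Sendrecv(&u[ny],     1, MPI_DOUBLE, right, 1,
                 &u[0],      1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize();
}
```

In 2D/3D with GPU buffers, the pack/unpack steps become strided copies on the device, which is exactly the bookkeeping a halo exchange library hides.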
{"title":"Tausch: A halo exchange library for large heterogeneous computing systems using MPI, OpenCL, and CUDA","authors":"Lukas Spies , Amanda Bienz , David Moulton , Luke Olson , Andrew Reisner","doi":"10.1016/j.parco.2022.102973","DOIUrl":"10.1016/j.parco.2022.102973","url":null,"abstract":"<div><p><span>Exchanging halo data is a common task in modern scientific computing<span><span> applications and efficient handling of this operation is critical for the performance of the overall simulation. Tausch is a novel header-only library that provides a simple API for efficiently handling these types of data movements. Tausch supports both simple CPU-only systems, but also more complex heterogeneous systems with both CPUs and </span>GPUs. It currently supports both </span></span>OpenCL<span> and CUDA for communicating with GPGPU devices, and allows for communication between GPGPUs and CPUs. The API allows for drop-in replacement in existing codes and can be used for the communication layer in new codes. This paper provides an overview of the approach taken in Tausch, and a performance analysis that demonstrates expected and achieved performance. We highlight the ease of use and performance with three applications: First Tausch is compared to the halo exchange framework from two Mantevo applications, HPCCG and miniFE, and then it is used to replace a legacy halo exchange library in the flexible multigrid solver framework Cedar.</span></p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"114 ","pages":"Article 102973"},"PeriodicalIF":1.4,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85992755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graph optimization algorithm using symmetry and host bias for low-latency indirect network
Pub Date: 2022-12-01  DOI: 10.1016/j.parco.2022.102983
Masahiro Nakao , Masaki Tsukamoto , Yoshiko Hanada , Keiji Yamamoto
It is known that an indirect network with a small host-to-host Average Shortest Path Length (h-ASPL) improves overall system performance in a parallel computer system. As a means to discuss such indirect networks in graph theory, the Order/Radix Problem (ORP) has been proposed. ORP involves finding a graph with a minimum h-ASPL that satisfies a given number of hosts and radix. A graph in ORP represents an indirect network and has two types of vertices: host and switch. We propose an optimization algorithm to generate graphs with a sufficiently small h-ASPL. The primary features of the proposed algorithm are the symmetry of the graph and the bias of the hosts adjacent to each switch. These features reduce the computational time to calculate the h-ASPL and improve the search performance of the algorithm. The performance of the proposed algorithm is evaluated using problems presented by Graph Golf, an international ORP competition. Our results show that the proposed algorithm can generate graphs with a smaller h-ASPL than the existing algorithm. To evaluate the performance of the graphs generated by the proposed algorithm, we use the parallel simulation framework SimGrid and the parallel benchmark collection NAS Parallel Benchmarks. Our results also show that the graphs generated by the proposed algorithm have higher performance than those generated by the existing algorithm.
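A direct way to evaluate a candidate graph is to run a BFS from every host and average the distances. The illustrative evaluator below (not the paper's optimized version, which exploits symmetry to skip most of these BFS runs) shows the quantity being minimized:

```cpp
#include <vector>
#include <queue>
#include <cstdio>

// h-ASPL over all ordered host pairs; assumes a connected graph and the
// convention that vertices 0..num_hosts-1 are hosts, the rest are switches.
double h_aspl(const std::vector<std::vector<int>>& adj, int num_hosts) {
    long long total = 0, pairs = 0;
    for (int s = 0; s < num_hosts; ++s) {
        std::vector<int> dist(adj.size(), -1);
        std::queue<int> q;
        dist[s] = 0;
        q.push(s);
        while (!q.empty()) {
            int v = q.front(); q.pop();
            for (int w : adj[v])
                if (dist[w] < 0) { dist[w] = dist[v] + 1; q.push(w); }
        }
        for (int t = 0; t < num_hosts; ++t)
            if (t != s) { total += dist[t]; ++pairs; }
    }
    return static_cast<double>(total) / pairs;
}

int main() {
    // 4 hosts (0-3), 2 switches (4,5); two hosts per switch, switches linked
    std::vector<std::vector<int>> adj = {
        {4}, {4}, {5}, {5}, {0, 1, 5}, {2, 3, 4}};
    std::printf("h-ASPL = %.3f\n", h_aspl(adj, 4));  // prints 2.667
}
```

Since this evaluation sits inside the optimizer's inner loop, any structural property (like symmetry) that lets the search reuse BFS results translates directly into more candidate graphs explored per second.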
{"title":"Graph optimization algorithm using symmetry and host bias for low-latency indirect network","authors":"Masahiro Nakao , Masaki Tsukamoto , Yoshiko Hanada , Keiji Yamamoto","doi":"10.1016/j.parco.2022.102983","DOIUrl":"https://doi.org/10.1016/j.parco.2022.102983","url":null,"abstract":"<div><p>It is known that an indirect network with a small host-to-host Average Shortest Path Length (h-ASPL) improves overall system performance in a parallel computer system. As a means to discuss such indirect networks in graph theory, the Order/Radix Problem (ORP) has been proposed. ORP involves finding a graph with a minimum h-ASPL that satisfies a given number of hosts and radix. A graph in ORP represents an indirect network and has two types of vertices: host and switch. We propose an optimization algorithm to generate graphs with a sufficiently small h-ASPL. The primary features of the proposed algorithm are the symmetry of the graph and the bias of the hosts adjacent to each switch. These features reduce the computational time to calculate the h-ASPL and improve the search performance of the algorithm. The performance of the proposed algorithm is evaluated using problems presented by Graph Golf, an international ORP competition. Our results show that the proposed algorithm can generate graphs with a smaller h-ASPL than the existing algorithm. To evaluate the performance of the graphs generated by the proposed algorithm, we use the parallel simulation framework SimGrid and the parallel benchmark collection NAS Parallel Benchmarks. Our results also show that the graphs generated by the proposed algorithm have higher performance than those generated by the existing algorithm.</p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"114 ","pages":"Article 102983"},"PeriodicalIF":1.4,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0167819122000722/pdfft?md5=70b6cbe2b73c6952541b7170b6406471&pid=1-s2.0-S0167819122000722-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"137225368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Operational Data Analytics in practice: Experiences from design to deployment in production HPC environments
Pub Date: 2022-10-01  DOI: 10.1016/j.parco.2022.102950
Alessio Netti , Michael Ott , Carla Guillen , Daniele Tafani , Martin Schulz
As HPC systems continue to grow in scale and complexity, efficient and manageable operation is increasingly critical. For this reason, many centers are starting to explore the use of Operational Data Analytics (ODA) techniques, which extract knowledge from the massive amounts of data produced by monitoring systems and use it to enact control over system knobs, or to aid administrators through visualization. As ODA is a multi-faceted problem, much research effort has gone into finding solutions to its separate aspects; however, comprehensive solutions that enable production use of ODA are still rare, and accounts of ODA experiences and the associated challenges are even harder to come across.
In this work we aim to bridge the gap between ODA research and production use by presenting our own experiences with proactive control of warm-water inlet temperatures and visualization of job data on two different HPC systems. We cover the entire development process, from a description of requirements and challenges down to design, deployment, and evaluation. Moreover, we discuss a series of critical points related to the maintainability of ODA and propose action items in an effort to drive the community forward. We rely on a series of open-source tools and techniques, which make for a generic ODA framework suitable for most use cases.
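As a toy illustration of the "enact control over system knobs" side of ODA (entirely hypothetical thresholds and names, not the authors' framework), a proactive controller can be as simple as a threshold rule over a monitoring window:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Nudge the warm-water inlet setpoint based on the hottest recent outlet
// reading, before a hard limit is reached. All numbers are assumptions.
double next_setpoint(const std::vector<double>& outlet_temps_c,
                     double setpoint_c) {
    double hottest = *std::max_element(outlet_temps_c.begin(),
                                       outlet_temps_c.end());
    const double limit_c = 55.0, step_c = 0.5;  // assumed operating envelope
    if (hottest > limit_c)       return setpoint_c - step_c;  // back off
    if (hottest < limit_c - 5.0) return setpoint_c + step_c;  // save energy
    return setpoint_c;                                        // hold
}

int main() {
    std::vector<double> window = {51.2, 53.8, 49.9};  // e.g. from the monitoring DB
    std::printf("new setpoint: %.1f C\n", next_setpoint(window, 45.0));
}
```

The engineering difficulty the paper reports is not this control step itself but everything around it: data collection at scale, validation, deployment, and long-term maintainability.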
{"title":"Operational Data Analytics in practice: Experiences from design to deployment in production HPC environments","authors":"Alessio Netti , Michael Ott , Carla Guillen , Daniele Tafani , Martin Schulz","doi":"10.1016/j.parco.2022.102950","DOIUrl":"10.1016/j.parco.2022.102950","url":null,"abstract":"<div><p><span>As HPC systems continue to grow in scale and complexity, efficient and manageable operation is increasingly critical. For this reason, many centers are starting to explore the use of </span><span><em>Operational </em><em>Data Analytics</em></span> (ODA) techniques, which extract knowledge from the massive amounts of data produced by monitoring systems and use it for enacting control over system knobs, or for aiding administrators through visualization. As ODA is a multi-faceted problem, much research effort has gone into finding solutions to its separate aspects: however, comprehensive solutions to enable production use of ODA are still rare, while accounts of ODA experiences and the associated challenges are even harder to come across.</p><p>In this work we aim to bridge the gap between ODA research and production use by presenting our own experiences, associated with proactive control of warm-water inlet temperatures<span> and visualization of job data on two different HPC systems. We cover the entire development process, starting from a description of requirements and challenges, and down to design, deployment and evaluation. Moreover, we discuss a series of critical points related to the maintainability of ODA, and propose action items in an effort to drive the community forward. We rely on a series of open-source tools and techniques, which make for a generic ODA framework that is suitable for most use cases.</span></p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"113 ","pages":"Article 102950"},"PeriodicalIF":1.4,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74644871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accelerating communication for parallel programming models on GPU systems
Pub Date: 2022-10-01  DOI: 10.1016/j.parco.2022.102969
Jaemin Choi , Zane Fink , Sam White , Nitin Bhat , David F. Richards , Laxmikant V. Kale
As an increasing number of leadership-class systems embrace GPU accelerators in the race towards exascale, efficient communication of GPU data is becoming one of the most critical components of high-performance computing. For developers of parallel programming models, implementing support for GPU-aware communication using native APIs for GPUs such as CUDA can be a daunting task as it requires considerable effort with little guarantee of performance. In this work, we demonstrate the capability of the Unified Communication X (UCX) framework to compose a GPU-aware communication layer that serves multiple parallel programming models of the Charm++ ecosystem: Charm++, Adaptive MPI (AMPI), and Charm4py. We demonstrate the performance impact of our designs with microbenchmarks adapted from the OSU benchmark suite, obtaining improvements in latency of up to 10.1x in Charm++, 11.7x in AMPI, and 17.4x in Charm4py. We also observe increases in bandwidth of up to 10.1x in Charm++, 10x in AMPI, and 10.5x in Charm4py. We show the potential impact of our designs on real-world applications by evaluating a proxy application for the Jacobi iterative method, improving the communication performance by up to 12.4x in Charm++, 12.8x in AMPI, and 19.7x in Charm4py.
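The essential difference GPU-aware communication makes can be seen in plain MPI + CUDA terms (a sketch of the general technique, not of Charm++ or UCX internals): without GPU-awareness, every message is staged through host memory; with it, the device pointer goes straight to the runtime:

```cpp
#include <mpi.h>
#include <cuda_runtime.h>
#include <vector>

// Run with an even number of ranks; each rank exchanges with rank ^ 1.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;
    double* d_buf = nullptr;
    cudaMalloc(&d_buf, n * sizeof(double));
    int peer = rank ^ 1;

    // (a) staged path: D2H copy, exchange via host buffer, H2D copy back
    std::vector<double> h_buf(n);
    cudaMemcpy(h_buf.data(), d_buf, n * sizeof(double), cudaMemcpyDeviceToHost);
    MPI_Sendrecv_replace(h_buf.data(), n, MPI_DOUBLE, peer, 0, peer, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    cudaMemcpy(d_buf, h_buf.data(), n * sizeof(double), cudaMemcpyHostToDevice);

    // (b) GPU-aware path: hand the device pointer straight to the runtime,
    // which can move it directly (e.g. via GPUDirect RDMA when available)
    MPI_Sendrecv_replace(d_buf, n, MPI_DOUBLE, peer, 0, peer, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
}
```

Implementing path (b) efficiently for every interconnect is the daunting part the abstract refers to, which is why the authors delegate it to UCX once and share that layer across Charm++, AMPI, and Charm4py.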
{"title":"Accelerating communication for parallel programming models on GPU systems","authors":"Jaemin Choi , Zane Fink , Sam White , Nitin Bhat , David F. Richards , Laxmikant V. Kale","doi":"10.1016/j.parco.2022.102969","DOIUrl":"10.1016/j.parco.2022.102969","url":null,"abstract":"<div><p><span>As an increasing number of leadership-class systems embrace GPU accelerators in the race towards exascale, efficient communication of GPU data is becoming one of the most critical components of high-performance computing. For developers of </span>parallel programming models<span>, implementing support for GPU-aware communication using native APIs for GPUs such as CUDA can be a daunting task as it requires considerable effort with little guarantee of performance. In this work, we demonstrate the capability of the Unified Communication X (UCX) framework to compose a GPU-aware communication layer that serves multiple parallel programming models of the Charm++ ecosystem: Charm++, Adaptive MPI (AMPI), and Charm4py. We demonstrate the performance impact of our designs with microbenchmarks<span> adapted from the OSU benchmark suite, obtaining improvements in latency of up to 10.1x in Charm++, 11.7x in AMPI, and 17.4x in Charm4py. We also observe increases in bandwidth of up to 10.1x in Charm++, 10x in AMPI, and 10.5x in Charm4py. We show the potential impact of our designs on real-world applications by evaluating a proxy application for the Jacobi iterative method, improving the communication performance by up to 12.4x in Charm++, 12.8x in AMPI, and 19.7x in Charm4py.</span></span></p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"113 ","pages":"Article 102969"},"PeriodicalIF":1.4,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82219606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing small channel 3D convolution on GPU with tensor core
Pub Date: 2022-10-01  DOI: 10.1016/j.parco.2022.102954
Jiazhi Jiang, Dan Huang, Jiangsu Du, Yutong Lu, Xiangke Liao
In many scenarios, particularly scientific AI applications, algorithm engineers widely adopt more complex convolutions, e.g. 3D CNNs, to improve accuracy. Scientific AI applications with 3D CNNs, which tend to train on volumetric datasets, substantially increase the size of the input, which in turn restricts the channel sizes (e.g. to less than 64) under the constraints of limited device memory capacity. Since existing convolution implementations tend to split and parallelize small-channel convolutions along the channel dimension, they usually cannot fully exploit the performance of GPU accelerators, in particular those configured with the emerging tensor cores.
In this work, we target enhancing the performance of small-channel 3D convolution on GPU platforms configured with tensor cores. Our analysis shows that the channel size of a convolution has a great effect on the performance of existing convolution implementations, which are memory-bound on tensor cores. By leveraging the memory hierarchy characteristics and the WMMA API of the tensor core, we propose and implement holistic optimizations that both promote data access efficiency and intensify the utilization of computing units. Experiments show that our implementation obtains 1.1x–5.4x speedups compared to cuDNN's implementations of 3D convolution on different GPU platforms. We also evaluate our implementation on two practical scientific AI applications and observe up to 1.7x and 2.0x overall speedups compared with using cuDNN on a V100 GPU.
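To see why small channel counts hurt, it helps to view convolution as the implicit GEMM that tensor cores actually execute: the reduction dimension is K = C·KD·KH·KW, which shrinks with the channel count C, so the matrix units starve on memory traffic. Below is a scalar host-side sketch of that mapping (illustrative only; the paper's kernels perform it on-chip via the WMMA API):

```cpp
#include <vector>
#include <cstdio>

int main() {
    const int C = 4, D = 8, H = 8, W = 8;      // small-channel input (CDHW)
    const int F = 16, KD = 3, KH = 3, KW = 3;  // filters
    const int OD = D - KD + 1, OH = H - KH + 1, OW = W - KW + 1;
    const int K = C * KD * KH * KW;            // GEMM reduction length

    std::vector<float> in(C * D * H * W, 1.0f);
    std::vector<float> wgt(F * K, 0.5f);
    std::vector<float> out(F * OD * OH * OW, 0.0f);

    // out[f][o] = sum_k wgt[f][k] * patch(o)[k]: one GEMM row per filter f,
    // one column per output voxel o = (od,oh,ow)
    for (int f = 0; f < F; ++f)
        for (int od = 0; od < OD; ++od)
            for (int oh = 0; oh < OH; ++oh)
                for (int ow = 0; ow < OW; ++ow) {
                    float acc = 0.0f;
                    for (int c = 0; c < C; ++c)
                        for (int kd = 0; kd < KD; ++kd)
                            for (int kh = 0; kh < KH; ++kh)
                                for (int kw = 0; kw < KW; ++kw) {
                                    int k = ((c * KD + kd) * KH + kh) * KW + kw;
                                    acc += wgt[f * K + k] *
                                           in[((c * D + od + kd) * H + oh + kh) * W
                                              + ow + kw];
                                }
                    out[((f * OD + od) * OH + oh) * OW + ow] = acc;
                }
    std::printf("K = %d (short when C is small)\n", K);  // here K = 108
}
```

With C = 4 the reduction length is only 108, far below what keeps a 16×16×16 WMMA pipeline busy, so data movement rather than math dominates.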
{"title":"Optimizing small channel 3D convolution on GPU with tensor core","authors":"Jiazhi Jiang, Dan Huang, Jiangsu Du, Yutong Lu, Xiangke Liao","doi":"10.1016/j.parco.2022.102954","DOIUrl":"10.1016/j.parco.2022.102954","url":null,"abstract":"<div><p><span>In many scenarios, particularly scientific AI applications, algorithm engineers widely adopt more complex convolution, e.g. 3D </span>CNN<span>, to improve the accuracy. Scientific AI applications with 3D-CNN, which tends to train with volumetric datasets<span>, substantially increase the size of the input, which in turn potentially restricts the channel sizes (e.g. less than 64) under the constraints of limited device memory capacity. Since existing convolution implementations tend to split and parallelize computing the small channel convolution from channel dimension, they usually cannot fully exploit the performance of GPU accelerator, in particular that configured with the emerging tensor core.</span></span></p><p><span>In this work, we target on enhancing the performance of small channel 3D convolution on the GPU platform configured with tensor cores. Our analysis shows that the channel size of convolution has a great effect on the performance of existing convolution implementations, that are memory-bound on tensor core. By leveraging the memory hierarchy characteristics and the WMMA API of tensor core, we propose and implement holistic optimizations for both promoting the data access efficiency and intensifying the utilization of </span>computing units. Experiments show that our implementation can obtain 1.1x–5.4x speedup comparing to the cuDNN’s implementations for the 3D convolutions on different GPU platforms. We also evaluate our implementations on two practical scientific AI applications and observe up to 1.7x and 2.0x overall speedups compared with using cuDNN on V100 GPU.</p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"113 ","pages":"Article 102954"},"PeriodicalIF":1.4,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78348079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Graph optimization algorithm using symmetry and host bias for low-latency indirect network","authors":"M. Nakao, M. Tsukamoto, Y. Hanada, Keiji Yamamoto","doi":"10.2139/ssrn.4048955","DOIUrl":"https://doi.org/10.2139/ssrn.4048955","url":null,"abstract":"","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"36 1","pages":"102983"},"PeriodicalIF":1.4,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90026890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A method for efficient radio astronomical data gridding on multi-core vector processor
Pub Date: 2022-10-01  DOI: 10.1016/j.parco.2022.102972
Hao Wang , Ce Yu , Jian Xiao , Shanjiang Tang , Yu Lu , Hao Fu , Bo Kang , Gang Zheng , Chenzhou Cui
Gridding is the performance-critical step in the data reduction pipeline for radio astronomy research, allowing astronomers to create correct sky images for further analysis. Like 2D stencil computation, gridding iteratively updates output cells by convolution, where the value at each output cell is computed as a weighted sum of neighboring point values. Existing state-of-the-art works have improved the performance of gridding by using multi-core CPUs and GPUs in real-world applications, and their studies proved that gridding is a type of scientific computation with high-density computing characteristics. However, low computational performance or high power consumption becomes the main limitation when processing large-scale astronomical data. The high-density computing nature of gridding provides opportunities to accelerate it on multi-core vector processors with vector-SIMD architectures. However, the task-parallelization and data-transfer strategies of existing works (such as those implemented on CPUs or GPUs) are too inefficient to perform gridding directly on a vector processor without a dedicated mapping algorithm.
M-DSP is a multi-core vector processor with a vector-SIMD architecture designed for the next-generation exascale supercomputer, delivering high performance with ultra-low power consumption. In this paper we present, for the first time, a novel method to achieve efficient gridding on the M-DSP. Specifically, we propose a gridding workflow designed for vector-SIMD architectures and present a vectorized version of the gridding convolution algorithm to fully exploit the computational power of the M-DSP. In addition, centering on the processor architecture, we propose task-based parallelization strategies for block and line computing, as well as different data loading strategies, to achieve high parallel performance and high data transfer efficiency. Experimental results show that our work on the M-DSP exhibits very competitive performance compared to other methods running on CPUs or GPUs. This demonstrates the efficiency of our method and the fact that the vector-SIMD architecture, with its wide vector cores, is beneficial for scientific computing with "high density" characteristics and can achieve higher performance than its competitors.
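The gridding convolution itself is the scatter loop below, shown in scalar C++ for clarity (the paper's contribution is vectorizing this hot loop and its data movement for the M-DSP; the Gaussian weight here is a stand-in for the real gridding kernels, e.g. prolate spheroidal functions):

```cpp
#include <vector>
#include <cmath>
#include <cstdio>

struct Sample { double u, v, value; };  // one input point in grid coordinates

// Spread each sample onto nearby grid cells, weighted by a convolution
// kernel of half-width `support`.
void grid(const std::vector<Sample>& samples,
          std::vector<double>& grid2d, int n, int support) {
    for (const Sample& s : samples) {
        int cu = static_cast<int>(std::lround(s.u));
        int cv = static_cast<int>(std::lround(s.v));
        for (int dv = -support; dv <= support; ++dv)
            for (int du = -support; du <= support; ++du) {
                int gu = cu + du, gv = cv + dv;
                if (gu < 0 || gu >= n || gv < 0 || gv >= n) continue;
                double w = std::exp(-0.5 * (du * du + dv * dv));  // toy kernel
                grid2d[gv * n + gu] += w * s.value;  // weighted scatter
            }
    }
}

int main() {
    int n = 64;
    std::vector<double> g(n * n, 0.0);
    grid({{31.4, 30.2, 1.0}, {10.0, 50.7, 2.5}}, g, n, 3);
    std::printf("g[30*64+31] = %f\n", g[30 * 64 + 31]);
}
```

The inner du loop is contiguous in memory, which is what makes it a natural target for wide vector-SIMD lanes; the hard part is keeping those lanes fed when samples land on scattered grid regions.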
{"title":"A method for efficient radio astronomical data gridding on multi-core vector processor","authors":"Hao Wang , Ce Yu , Jian Xiao , Shanjiang Tang , Yu Lu , Hao Fu , Bo Kang , Gang Zheng , Chenzhou Cui","doi":"10.1016/j.parco.2022.102972","DOIUrl":"10.1016/j.parco.2022.102972","url":null,"abstract":"<div><p><span><span>Gridding is the performance-critical step in the data reduction pipeline for radio astronomy research, allowing astronomers to create the correct sky images for further analysis. Like the 2D stencil computation, gridding iteratively updates the output cells by convolution, where the value at each output cell in the space is computed as a weighted sum of neighboring point values. Existing state-of-the-art works have achieved performance improvement of gridding by using multi-core CPUs and GPUs in real-world applications, and their study proved that gridding is a type of scientific computation with high-density computing characteristics. However, low computational performance or high </span>power consumption<span> becomes the main limitation for their processing of large-scale astronomical data. The high-density computing feature of gridding provides opportunities to accelerate it on the multi-core vector processor with vector-SIMD architectures. However, existing works’ (such as those implemented on CPUs or GPUs) task </span></span>parallelization<span> and data transfer strategies are inefficient to perform gridding directly on the vector processor without any dedicated mapping algorithm.</span></p><p>M-DSP is a multi-core vector processor with vector-SIMD architectures designed for the next-generation exascale supercomputer<span>, delivering high performance with ultra-low power consumption. In this paper, we present, for the first time, a novel method to achieve efficient gridding on the M-DSP. Specifically, we propose a gridding workflow designed for the vector-SIMD architectures and present a vectorized version<span> of the gridding convolution algorithm to fully exploit the computational power of the M-DSP. In addition, centering on the processor architectures, we propose task-based parallelization strategies for block and line computing as well as different data loading strategies to achieve high parallel performance and high data transfer efficiency. Experimental results show that our work on M-DSP exhibits very competitive performance compared to other methods running on CPUs or GPUs. This demonstrates the efficiency of our method and the fact that the vector-SIMD architecture is beneficial for scientific computing with ”high density” characteristics, which can exploit its wide vector core and achieve higher performance than its competitors.</span></span></p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"113 ","pages":"Article 102972"},"PeriodicalIF":1.4,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75782731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
QoS-aware dynamic resource allocation with improved utilization and energy efficiency on GPU
Pub Date: 2022-10-01  DOI: 10.1016/j.parco.2022.102958
Qingxiao Sun , Liu Yi , Hailong Yang , Mingzhen Li , Zhongzhi Luan , Depei Qian
Although GPUs have become indispensable in data centers, meeting Quality of Service (QoS) targets under task consolidation on a GPU is extremely challenging. Previous works mostly rely on static task or resource scheduling and cannot handle QoS violations during runtime. In addition, existing works fail to exploit the computing characteristics of batch tasks, and thus waste opportunities to reduce power consumption while improving GPU utilization. To address these problems, we propose a new runtime mechanism, SMQoS, that dynamically adjusts resource allocation during runtime to meet the QoS of latency-sensitive (LS) tasks and determines the optimal resource allocation for batch tasks to improve GPU utilization and power efficiency. We implement the proposed mechanism both in a simulator (SMQoS) and on real GPU hardware (RH-SMQoS). The experimental results show that both SMQoS and RH-SMQoS achieve better QoS for LS tasks and higher throughput for batch tasks compared to state-of-the-art works. With hardware extension, SMQoS can further reduce power consumption by power-gating idle computing resources.
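The feedback step at the core of such a mechanism can be pictured as follows (hypothetical thresholds and knob granularity, not the SMQoS implementation): sample the LS task's latency and move SMs between the LS and batch partitions accordingly:

```cpp
#include <algorithm>
#include <cstdio>

// One control step: shrink the batch partition when the latency-sensitive
// task misses its target, grow it when there is slack. All numbers assumed.
int adjust_batch_sms(double observed_ms, double target_ms,
                     int batch_sms, int total_sms) {
    const int reserve = 2;  // SMs the LS task always keeps (assumption)
    if (observed_ms > target_ms)        // QoS violated: reclaim SMs for LS
        return std::max(0, batch_sms - 4);
    if (observed_ms < 0.8 * target_ms)  // slack: raise utilization
        return std::min(total_sms - reserve, batch_sms + 2);
    return batch_sms;                   // within band: hold steady
}

int main() {
    int batch = 40;
    double latency[] = {3.1, 5.2, 4.9, 2.0};  // sampled LS latencies (ms)
    for (double ms : latency) {
        batch = adjust_batch_sms(ms, 4.0, batch, 80);
        std::printf("latency %.1f ms -> batch SMs = %d\n", ms, batch);
    }
}
```

SMs freed from the batch partition during hold periods are what the proposed hardware extension can power-gate for additional energy savings.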
{"title":"QoS-aware dynamic resource allocation with improved utilization and energy efficiency on GPU","authors":"Qingxiao Sun , Liu Yi , Hailong Yang , Mingzhen Li , Zhongzhi Luan , Depei Qian","doi":"10.1016/j.parco.2022.102958","DOIUrl":"10.1016/j.parco.2022.102958","url":null,"abstract":"<div><p><span><span><span>Although GPUs have been indispensable in </span>data centers, meeting the Quality of Service (QoS) under task consolidation on GPU is extremely challenging. Previous works mostly rely on the static task or resource scheduling and cannot handle the QoS violation during runtime. In addition, existing works fail to exploit the computing characteristics of batch tasks, and thus waste the opportunities to reduce </span>power consumption while improving GPU utilization. To address the above problems, we propose a new runtime mechanism </span><em>SMQoS</em> that can dynamically adjust the resource allocation during runtime to meet the QoS of latency-sensitive (LS) tasks and determine the optimal resource allocation for batch tasks to improve GPU utilization and power efficiency. We implement the proposed mechanism on both simulator (<em>SMQoS</em>) and real GPU hardware (<em>RH-SMQoS</em>). The experimental results show that both <em>SMQoS</em> and <em>RH-SMQoS</em><span> can achieve better QoS for LS tasks and higher throughput for batch tasks compared to the state-of-the-art works. With hardware extension, the </span><em>SMQoS</em> can further reduce the power consumption by power gating idle computing resources.</p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"113 ","pages":"Article 102958"},"PeriodicalIF":1.4,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75432812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}