Hongwei Chen, Shiyang Chen, Joshua J. Turner, Adrian Feiguin
{"title":"在 Nvidia GPU 上使用张量核进行原子自旋动力学模拟的内核融合","authors":"Hongwei Chen , Shiyang Chen , Joshua J. Turner , Adrian Feiguin","doi":"10.1016/j.jocs.2024.102357","DOIUrl":null,"url":null,"abstract":"<div><p>In atomistic spin dynamics simulations, the time cost of constructing the space- and time-displaced pair correlation function in real space increases quadratically as the number of spins <span><math><mi>N</mi></math></span>, leading to significant computational effort. The GEMM subroutine can be adopted to accelerate the calculation of the dynamical spin–spin correlation function, but the computational cost of simulating large spin systems (<span><math><mrow><mo>></mo><mn>40000</mn></mrow></math></span> spins) on CPUs remains expensive. In this work, we perform the simulation on a graphics processing unit (GPU), a hardware solution widely used as an accelerator for scientific computing and deep learning. We show that GPUs can accelerate the simulation up to 25-fold compared to multi-core CPUs when using the GEMM subroutine on both. To hide memory latency, we fuse the element-wise operation into the GEMM kernel using <span><math><mstyle><mi>C</mi><mi>U</mi><mi>T</mi><mi>L</mi><mi>A</mi><mi>S</mi><mi>S</mi></mstyle></math></span> which can improve the performance by 26% <span><math><mo>∼</mo></math></span> 33% compared to the implementation based on <span><math><mstyle><mi>c</mi><mi>u</mi><mi>B</mi><mi>L</mi><mi>A</mi><mi>S</mi></mstyle></math></span>. 
Furthermore, we perform the ‘on-the-fly’ calculation in the epilogue of the GEMM subroutine to avoid saving intermediate results on global memory, which makes large-scale atomistic spin dynamics simulations feasible and affordable.</p></div>","PeriodicalId":48907,"journal":{"name":"Journal of Computational Science","volume":null,"pages":null},"PeriodicalIF":3.1000,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Kernel fusion in atomistic spin dynamics simulations on Nvidia GPUs using tensor core\",\"authors\":\"Hongwei Chen , Shiyang Chen , Joshua J. Turner , Adrian Feiguin\",\"doi\":\"10.1016/j.jocs.2024.102357\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In atomistic spin dynamics simulations, the time cost of constructing the space- and time-displaced pair correlation function in real space increases quadratically as the number of spins <span><math><mi>N</mi></math></span>, leading to significant computational effort. The GEMM subroutine can be adopted to accelerate the calculation of the dynamical spin–spin correlation function, but the computational cost of simulating large spin systems (<span><math><mrow><mo>></mo><mn>40000</mn></mrow></math></span> spins) on CPUs remains expensive. In this work, we perform the simulation on a graphics processing unit (GPU), a hardware solution widely used as an accelerator for scientific computing and deep learning. We show that GPUs can accelerate the simulation up to 25-fold compared to multi-core CPUs when using the GEMM subroutine on both. 
To hide memory latency, we fuse the element-wise operation into the GEMM kernel using <span><math><mstyle><mi>C</mi><mi>U</mi><mi>T</mi><mi>L</mi><mi>A</mi><mi>S</mi><mi>S</mi></mstyle></math></span> which can improve the performance by 26% <span><math><mo>∼</mo></math></span> 33% compared to the implementation based on <span><math><mstyle><mi>c</mi><mi>u</mi><mi>B</mi><mi>L</mi><mi>A</mi><mi>S</mi></mstyle></math></span>. Furthermore, we perform the ‘on-the-fly’ calculation in the epilogue of the GEMM subroutine to avoid saving intermediate results on global memory, which makes large-scale atomistic spin dynamics simulations feasible and affordable.</p></div>\",\"PeriodicalId\":48907,\"journal\":{\"name\":\"Journal of Computational Science\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-06-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Computational Science\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1877750324001509\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computational Science","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1877750324001509","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0
Kernel fusion in atomistic spin dynamics simulations on Nvidia GPUs using tensor core
In atomistic spin dynamics simulations, the time cost of constructing the space- and time-displaced pair correlation function in real space increases quadratically with the number of spins N, leading to significant computational effort. The GEMM subroutine can be adopted to accelerate the calculation of the dynamical spin–spin correlation function, but the computational cost of simulating large spin systems (>40000 spins) on CPUs remains expensive. In this work, we perform the simulation on a graphics processing unit (GPU), a hardware solution widely used as an accelerator for scientific computing and deep learning. We show that GPUs can accelerate the simulation up to 25-fold compared to multi-core CPUs when using the GEMM subroutine on both. To hide memory latency, we fuse the element-wise operation into the GEMM kernel using CUTLASS, which can improve performance by 26% ∼ 33% compared to the implementation based on cuBLAS. Furthermore, we perform the ‘on-the-fly’ calculation in the epilogue of the GEMM subroutine to avoid saving intermediate results in global memory, which makes large-scale atomistic spin dynamics simulations feasible and affordable.
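To make the two key ideas of the abstract concrete, the following is a minimal NumPy sketch (not the authors' GPU code; all function and variable names are illustrative). It shows (1) how the space- and time-displaced pair correlation reduces to a single GEMM over spin trajectories, and (2) the "on-the-fly" idea: contracting the correlation with plane-wave phase factors directly, so the N × N intermediate matrix is never stored — the role the CUTLASS epilogue plays on the GPU.

```python
import numpy as np

def pair_correlation_gemm(S, tau):
    """Space-displaced pair correlation at time lag tau.

    S   : (T, N) array; one spin component sampled at T time steps
          for N lattice sites.
    Returns the N x N matrix C[i, j] = <S_i(t) S_j(t + tau)>,
    averaged over t. A single GEMM replaces the O(N^2 T) double
    loop over site pairs.
    """
    T = S.shape[0]
    return S[:T - tau].T @ S[tau:] / (T - tau)

def structure_factor_on_the_fly(S, tau, phases):
    """Contract the correlation with phase factors e_q without ever
    materializing C ('epilogue' / on-the-fly idea):

        S(q) = e_q^H C e_q = (A e_q)^H (B e_q) / (T - tau),

    with A = S[:T-tau], B = S[tau:] and S real-valued.

    phases : (N,) complex array, e.g. exp(i q . r_i) / sqrt(N)
             (an illustrative normalization choice).
    """
    T = S.shape[0]
    a = S[:T - tau] @ phases   # length (T - tau) vector
    b = S[tau:] @ phases
    return np.vdot(a, b) / (T - tau)   # vdot conjugates its first arg
```

The second function captures why fusing the contraction into the GEMM epilogue matters at scale: for >40000 spins the N × N correlation matrix would be tens of gigabytes per time lag, while the fused reduction only ever holds two length-(T − τ) vectors.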
Journal introduction:
Computational Science is a rapidly growing multi- and interdisciplinary field that uses advanced computing and data analysis to understand and solve complex problems. It has reached a level of predictive capability that now firmly complements the traditional pillars of experimentation and theory.
Recent advances in experimental techniques, such as detectors, on-line sensor networks, and high-resolution imaging, have opened up new windows into physical and biological processes at many levels of detail. The resulting data explosion allows for detailed, data-driven modeling and simulation.
This new discipline in science combines computational thinking, modern computational methods, devices and collateral technologies to address problems far beyond the scope of traditional numerical methods.
Computational science typically unifies three distinct elements:
• Modeling, Algorithms and Simulations (e.g. numerical and non-numerical, discrete and continuous);
• Software developed to solve problems in science (e.g., biological, physical, and social sciences), engineering, medicine, and the humanities;
• Computer and information science that develops and optimizes the advanced system hardware, software, networking, and data management components (e.g. problem solving environments).