{"title":"Accelerating Lattice QCD Simulations using GPUs","authors":"Tilmann Matthaei","doi":"arxiv-2407.00041","DOIUrl":null,"url":null,"abstract":"Solving discretized versions of the Dirac equation represents a large share\nof execution time in lattice Quantum Chromodynamics (QCD) simulations. Many\nhigh-performance computing (HPC) clusters use graphics processing units (GPUs)\nto offer more computational resources. Our solver program, DDalphaAMG,\npreviously was unable to fully take advantage of GPUs to accelerate its\ncomputations. Making use of GPUs for DDalphaAMG is an ongoing development, and\nwe will present some current progress herein. Through a detailed description of\nour development, this thesis should offer valuable insights into using GPUs to\naccelerate a memory-bound CPU implementation. We developed a storage scheme for multiple tuples, which allows much more\nefficient memory access on GPUs, given that the element at the same index is\nread from multiple tuples simultaneously. Still, our implementation of a\ndiscrete Dirac operator is memory-bound, and we only achieved improvements for\nlarge linear systems on few nodes at the JUWELS cluster. These improvements do\nnot currently overcome additional introduced overheads. However, the results\nfor the application of the Wilson-Dirac operator show a speedup of around 3 for\nlarge lattices. If the additional overheads can be eliminated in the future,\nGPUs could reduce the DDalphaAMG execution time significantly for large\nlattices. We also found that a previous publication on the GPU acceleration of\nDDalphaAMG, underrepresented the achieved speedup, because small lattices were\nused. This further highlights that GPUs often require large-scale problems to\nsolve in order to be faster than CPUs","PeriodicalId":501191,"journal":{"name":"arXiv - PHYS - High Energy Physics - Lattice","volume":"133 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - High Energy Physics - Lattice","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.00041","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Solving discretized versions of the Dirac equation represents a large share
of execution time in lattice Quantum Chromodynamics (QCD) simulations. Many
high-performance computing (HPC) clusters use graphics processing units (GPUs)
to offer more computational resources. Our solver program, DDalphaAMG,
was previously unable to take full advantage of GPUs to accelerate its
computations. Making use of GPUs in DDalphaAMG is an ongoing development, and
we present some of the current progress herein. Through a detailed description of
our development, this thesis should offer valuable insights into using GPUs to
accelerate a memory-bound CPU implementation.

We developed a storage scheme for multiple tuples (sketched below) that allows
much more efficient memory access on GPUs when the elements at the same index
are read from multiple tuples simultaneously. Still, our implementation of a
discrete Dirac operator is memory-bound, and we only achieved improvements for
large linear systems on a few nodes of the JUWELS cluster. These improvements
do not yet outweigh the additional overheads that were introduced. However, the
results for the application of the Wilson-Dirac operator (recalled below) show
a speedup of around 3 for large lattices. If the additional overheads can be
eliminated in the future, GPUs could reduce the DDalphaAMG execution time
significantly for large lattices. We also found that a previous publication on
the GPU acceleration of DDalphaAMG understated the achieved speedup because
small lattices were used. This further highlights that GPUs often require
large-scale problems in order to be faster than CPUs.
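The abstract does not spell out the storage scheme for multiple tuples, so the following is only a minimal, hypothetical sketch of the general idea it alludes to: storing tuples "component-major" (structure-of-arrays) instead of "tuple-major" (array-of-structures), which is the standard way to obtain coalesced loads when the same index of many tuples is read at once. The kernel names, the component count K, and the scale-and-add operation are illustrative and are not taken from DDalphaAMG.

```cuda
// Hypothetical sketch (not DDalphaAMG code): "tuple-major" (array-of-structures)
// versus "component-major" (structure-of-arrays) storage for n tuples of K
// components each. In the SoA layout, threads t = 0..n-1 that read component c
// of their own tuple touch consecutive addresses, so the loads coalesce.
#include <cuda_runtime.h>
#include <cstdio>

constexpr int K = 6;  // components per tuple; illustrative value only

// AoS: tuple t occupies x[t*K .. t*K+K-1]; neighbouring threads access
// addresses K elements apart -> strided, poorly coalesced loads.
__global__ void scale_add_aos(const double *x, double *y, double a, int n) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= n) return;
    for (int c = 0; c < K; ++c)
        y[t * K + c] += a * x[t * K + c];
}

// SoA: component c of all tuples is stored contiguously at x[c*n .. c*n+n-1];
// neighbouring threads access consecutive addresses -> coalesced loads.
__global__ void scale_add_soa(const double *x, double *y, double a, int n) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= n) return;
    for (int c = 0; c < K; ++c)
        y[c * n + t] += a * x[c * n + t];
}

int main() {
    const int n = 1 << 20;  // number of tuples (e.g. lattice sites)
    const size_t bytes = (size_t)n * K * sizeof(double);
    double *x, *y;
    cudaMalloc((void **)&x, bytes);
    cudaMalloc((void **)&y, bytes);
    cudaMemset(x, 0, bytes);
    cudaMemset(y, 0, bytes);

    const int threads = 256, blocks = (n + threads - 1) / threads;
    scale_add_aos<<<blocks, threads>>>(x, y, 2.0, n);  // strided access pattern
    scale_add_soa<<<blocks, threads>>>(x, y, 2.0, n);  // coalesced access pattern
    cudaDeviceSynchronize();
    printf("kernels finished: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Since both kernels perform the same few arithmetic operations per loaded element, the difference between them is purely one of memory-access pattern, which is the relevant bottleneck for a memory-bound operator application.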
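For reference, one common convention for the Wilson-Dirac operator mentioned above is recalled here; normalizations and sign conventions vary between texts, and this form is not reproduced from the thesis itself.

```latex
% One common convention for the Wilson-Dirac operator (not taken from the thesis):
\begin{equation*}
  (D_W \psi)(x) = \left(m_0 + \frac{4}{a}\right)\psi(x)
  - \frac{1}{2a}\sum_{\mu=1}^{4}\Big[(1-\gamma_\mu)\,U_\mu(x)\,\psi(x+a\hat{\mu})
  + (1+\gamma_\mu)\,U_\mu^{\dagger}(x-a\hat{\mu})\,\psi(x-a\hat{\mu})\Big],
\end{equation*}
% with SU(3) gauge links U_mu(x), Euclidean gamma matrices gamma_mu,
% bare mass m_0, and lattice spacing a.
```

Its nearest-neighbour stencil structure, with only a few arithmetic operations per loaded gauge link and spinor component, is why applying this operator tends to be memory-bound on both CPUs and GPUs.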