{"title":"Refactoring BZIP2 on the new-generation sunway supercomputer","authors":"Xiaohui Liu, Zekun Yin, Haodong Tian, Wubing Wan, Mengyuan Hua, Wenlai Zhao, Zhenchun Huang, Ping Gao, Fangjin Zhu, Hua Wang, Xiaohui Duan","doi":"10.1002/eng2.12806","DOIUrl":null,"url":null,"abstract":"<p>High-performance computing is progressively assuming a fundamental role in advancing scientific research and engineering domains. However, the ever-expanding scales of scientific simulations pose challenges for efficient data I/O and storage. The data compression technology has garnered significant attention as a solution to reduce data transmission and storage costs while enhancing performance. In particular, the BZIP2 lossless compression algorithm has been widely used due to its exceptional compression ratio, moderate compression speed, high reliability, and open-source nature. This paper focuses on the design and realization of a parallelized BZIP2 algorithm tailored for deployment on the New-Generation Sunway supercomputing platform. By leveraging the unique cache patterns of the New-Generation Sunway processor, we propose the highly tuned multi-threading and multi-node implementations of the BZIP2 applications for different scenarios. Moreover, we also propose the efficient BZIP2 libraries based on the management processing element and computing processing element which support the commonly used high-level (de)compression interfaces. The test results indicate that the our multi-threading implementation achieves maximum speedup of 23.09<span></span><math>\n <semantics>\n <mrow>\n <mo>×</mo>\n </mrow>\n <annotation>$$ \\times $$</annotation>\n </semantics></math> (8.57<span></span><math>\n <semantics>\n <mrow>\n <mo>×</mo>\n </mrow>\n <annotation>$$ \\times $$</annotation>\n </semantics></math>) in decompression(compression) compared to the sequential implementation. 
Furthermore, the multi-node implementation achieves 50.81% (26.35%) parallel efficiency and peak performance of 16.6 GB/s (52.8 GB/s) for compression(decompression) when scaling up to 2048 processes.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":"7 1","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.12806","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering reports : open access","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/eng2.12806","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0
Abstract
High-performance computing is progressively assuming a fundamental role in advancing scientific research and engineering domains. However, the ever-expanding scales of scientific simulations pose challenges for efficient data I/O and storage. Data compression technology has garnered significant attention as a solution to reduce data transmission and storage costs while enhancing performance. In particular, the BZIP2 lossless compression algorithm has been widely used due to its exceptional compression ratio, moderate compression speed, high reliability, and open-source nature. This paper focuses on the design and realization of a parallelized BZIP2 algorithm tailored for deployment on the New-Generation Sunway supercomputing platform. By leveraging the unique cache patterns of the New-Generation Sunway processor, we propose highly tuned multi-threading and multi-node implementations of BZIP2 for different scenarios. Moreover, we also propose efficient BZIP2 libraries, based on the management processing element and the computing processing element, which support the commonly used high-level (de)compression interfaces. The test results indicate that our multi-threading implementation achieves a maximum speedup of 23.09× (8.57×) in decompression (compression) compared to the sequential implementation. Furthermore, the multi-node implementation achieves 50.81% (26.35%) parallel efficiency and peak performance of 16.6 GB/s (52.8 GB/s) for compression (decompression) when scaling up to 2048 processes.
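The block-level parallelism the abstract alludes to follows from BZIP2's design: the format compresses fixed-size blocks (up to 900 kB at level 9) independently, and a concatenation of complete bzip2 streams is itself valid input for decompression. The sketch below illustrates that idea with Python's standard-library `bz2` wrapper around libbzip2; the chunk size, worker count, and helper names are illustrative assumptions, not the paper's Sunway-specific implementation.

```python
# Sketch of block-parallel BZIP2-style compression: split the input into
# independent chunks, compress each as its own bzip2 stream, and concatenate.
# Chunk size and pool size are illustrative choices, not the paper's parameters.
import bz2
from concurrent.futures import ThreadPoolExecutor

CHUNK = 900_000  # bzip2's maximum block size (compresslevel=9)

def compress_chunk(chunk: bytes) -> bytes:
    # Each chunk becomes a complete, independent bzip2 stream.
    return bz2.compress(chunk, compresslevel=9)

def parallel_compress(data: bytes, workers: int = 4) -> bytes:
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    # CPython's bz2 releases the GIL while compressing, so threads overlap.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Concatenated bzip2 streams decompress back to the original input.
        return b"".join(pool.map(compress_chunk, chunks))

if __name__ == "__main__":
    payload = b"sunway " * 200_000
    packed = parallel_compress(payload)
    assert bz2.decompress(packed) == payload
```

Because each stream carries its own header and CRC, the same chunking also enables parallel decompression: stream boundaries can be located by scanning for the bzip2 magic bytes and handed to separate workers.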