Pub Date: 2024-07-23 | DOI: 10.1109/TPDS.2024.3432579
Cunyang Wei;Haipeng Jia;Yunquan Zhang;Jianyu Yao;Chendi Li;Wenxuan Cao
The matrix multiplication algorithm is a fundamental numerical technique in linear algebra and plays a crucial role in many scientific computing applications. Although mainstream basic linear algebra libraries deliver high performance for large-scale dense matrix multiplication, they perform poorly on matrix multiplication with irregular input shapes. This paper proposes an input-aware tuning framework that accounts for application scenarios and computer architectures to provide high-performance irregular matrix multiplication on ARMv8 and X86 CPUs. The framework comprises two stages: an install-time stage and a run-time stage. The install-time stage uses our proposed computational template to generate high-performance kernels for both a general data layout and a SIMD-friendly data layout. The run-time stage applies a tiling algorithm suited to irregular GEMM to select the optimal kernel and link it into an execution plan. Additionally, load-balanced multi-threaded optimization algorithms are defined to exploit the multi-threading capability of modern processors. Experiments demonstrate that the proposed IrGEMM framework achieves significant performance improvements for irregular GEMM on both ARMv8 and X86 CPUs compared with other mainstream BLAS libraries.
{"title":"IrGEMM: An Input-Aware Tuning Framework for Irregular GEMM on ARM and X86 CPUs","authors":"Cunyang Wei;Haipeng Jia;Yunquan Zhang;Jianyu Yao;Chendi Li;Wenxuan Cao","doi":"10.1109/TPDS.2024.3432579","DOIUrl":"10.1109/TPDS.2024.3432579","url":null,"abstract":"The matrix multiplication algorithm is a fundamental numerical technique in linear algebra and plays a crucial role in many scientific computing applications. Despite the high performance of mainstream basic linear algebra libraries for large-scale dense matrix multiplications, they exhibit poor performance when applied to matrix multiplication with irregular input. This paper proposes an input-aware tuning framework that accounts for application scenarios and computer architectures to provide high-performance irregular matrix multiplication on ARMv8 and X86 CPUs. The framework comprises two stages: the install-time stage and the run-time stage. The install-time stage utilizes our proposed computational template to generate high-performance kernels for general data layout and SIMD-friendly data layout. The run-time stage utilizes a tiling algorithm suitable for irregular GEMM to select the optimal kernel and link as an execution plan. Additionally, load-balanced multi-threaded optimization algorithms are defined to exploit the multi-threading capability of modern processors. Experiments demonstrate that the proposed IrGEMM framework can achieve significant performance improvements for irregular GEMM on both ARMv8 and X86 CPUs compared to other mainstream BLAS libraries.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"35 9","pages":"1672-1689"},"PeriodicalIF":5.6,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141772338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-23 | DOI: 10.1109/TPDS.2024.3432620
Rui Tian;Jiazhi Jiang;Jiangsu Du;Dan Huang;Yutong Lu
Recommendation systems are essential to the operation of most internet services, with Deep Learning Recommendation Models (DLRMs) serving as a crucial component. However, due to the distinct computation, data access, and memory usage characteristics of recommendation models, the training of DLRMs may suffer from low resource utilization on prevalent heterogeneous CPU-GPU hardware platforms. Furthermore, as most high-performance computing systems now rely on multi-GPU computing nodes, the challenge of low resource utilization becomes even more pronounced. Existing concurrent training solutions cannot be applied directly to DLRM for several reasons, such as insufficient fine-grained memory management and the lack of collaborative CPU-GPU scheduling. In this paper, we introduce RMixer, a scheduling framework that addresses these challenges by providing an efficient job management and scheduling mechanism for DLRM training jobs on heterogeneous CPU-GPU platforms. To facilitate training co-location, we first estimate the peak memory consumption of each job. Additionally, we track and collect resource utilization for DLRM training jobs. Based on this information about computational patterns, a batched job dispatcher with a dynamic resource-complementary scheduling policy is proposed to co-locate DLRM training jobs on CPU-GPU platforms. Scheduling strategies for both intra-GPU and inter-GPU scenarios are devised, with a focus on examining individual GPU resource utilization and achieving a balanced state across multiple GPUs. Experimental results demonstrate that our implementation achieves up to 5.3× and 7.5× higher throughput on a single GPU and on 4 GPUs, respectively, for training jobs involving various recommendation models.
{"title":"Sophisticated Orchestrating Concurrent DLRM Training on CPU/GPU Platform","authors":"Rui Tian;Jiazhi Jiang;Jiangsu Du;Dan Huang;Yutong Lu","doi":"10.1109/TPDS.2024.3432620","DOIUrl":"10.1109/TPDS.2024.3432620","url":null,"abstract":"Recommendation systems are essential to the operation of the majority of internet services, with Deep Learning Recommendation Models (DLRMs) serving as a crucial component. However, due to distinct computation, data access, and memory usage characteristics of recommendation models, the trainning of DLRMs may suffer from low resource utilization on prevalent heterogeneous CPU-GPU hardware platforms. Furthermore, as the majority of high-performance computing systems presently depend on multi-GPU computing nodes, the challenge of addressing low resource utilization becomes even more pronounced. Existing concurrent training solutions cannot be straightforwardly applied to DLRM due to various factors, such as insufficient fine-grained memory management and the lack of collaborative CPU-GPU scheduling. In this paper, we introduce RMixer, a scheduling framework that addresses these challenges by providing an efficient job management and scheduling mechanism for DLRM training jobs on heterogeneous CPU-GPU platforms. To facilitate training co-location, we first estimate the peak memory consumption of each job. Additionally, we track and collect resource utilization for DLRM training jobs. Based on the information of computational patterns, a batched job dispatcher with dynamic resource-complementary scheduling policy is proposed to co-locate DLRM training jobs on CPU-GPU platform. Scheduling strategies for both intra-GPU and inter-GPU scenarios were meticulously devised, with a focus on thoroughly examining individual GPU resource utilization and achieving a balanced state across multiple GPUs. Experimental results demonstrate that our implementation achieved up to 5.3× and 7.5× higher throughput on single GPU and 4 GPU respectively for training jobs involving various recommendation models.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"35 11","pages":"2177-2192"},"PeriodicalIF":5.6,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141772290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Neural Networks (DNNs) have gained widespread adoption in diverse fields, including image classification, object detection, and natural language processing. However, training large-scale DNN models often encounters significant memory bottlenecks, which call for efficient management of extensive tensors. Heterogeneous memory systems, which combine persistent memory (PM) modules with traditional DRAM, offer an economically viable solution to these tensor management challenges during DNN training. However, existing memory management methods on heterogeneous memory systems often lead to low PM access efficiency, low bandwidth utilization, and incomplete analysis of model characteristics. To overcome these hurdles, we introduce DeepTM, an efficient tensor management approach tailored for heterogeneous memory that alleviates memory bottlenecks during DNN training. DeepTM employs page-level tensor aggregation to enhance PM read and write performance and performs contiguous page migration to increase memory bandwidth. Through an analysis of tensor access patterns and model characteristics, we quantify the overall performance and cast the performance optimization problem as an Integer Linear Program. Additionally, we achieve tensor heat recognition by dynamically adjusting the weights of four key tensor characteristics and develop a global optimization strategy using Deep Reinforcement Learning. To validate the efficacy of our approach, we implement and evaluate DeepTM using the TensorFlow framework running on a PM-based heterogeneous memory system. The experimental results demonstrate that DeepTM achieves performance improvements of up to 36% and 49% over the state-of-the-art memory management strategies AutoTM and Sentinel, respectively. Furthermore, our solution reduces the overhead by 18× and achieves up to 29% cost reduction compared to AutoTM.
{"title":"DeepTM: Efficient Tensor Management in Heterogeneous Memory for DNN Training","authors":"Haoran Zhou;Wei Rang;Hongyang Chen;Xiaobo Zhou;Dazhao Cheng","doi":"10.1109/TPDS.2024.3431910","DOIUrl":"10.1109/TPDS.2024.3431910","url":null,"abstract":"Deep Neural Networks (DNNs) have gained widespread adoption in diverse fields, including image classification, object detection, and natural language processing. However, training large-scale DNN models often encounters significant memory bottlenecks, which ask for efficient management of extensive tensors. Heterogeneous memory system, which combines persistent memory (PM) modules with traditional DRAM, offers an economically viable solution to address tensor management challenges during DNN training. However, existing memory management methods on heterogeneous memory systems often lead to low PM access efficiency, low bandwidth utilization, and incomplete analysis of model characteristics. To overcome these hurdles, we introduce an efficient tensor management approach, DeepTM, tailored for heterogeneous memory to alleviate memory bottlenecks during DNN training. DeepTM employs page-level tensor aggregation to enhance PM read and write performance and executes contiguous page migration to increase memory bandwidth. Through an analysis of tensor access patterns and model characteristics, we quantify the overall performance and transform the performance optimization problem into the framework of Integer Linear Programming. Additionally, we achieve tensor heat recognition by dynamically adjusting the weights of four key tensor characteristics and develop a global optimization strategy using Deep Reinforcement Learning. To validate the efficacy of our approach, we implement and evaluate DeepTM, utilizing the TensorFlow framework running on a PM-based heterogeneous memory system. The experimental results demonstrate that DeepTM achieves performance improvements of up to 36% and 49% compared to the current state-of-the-art memory management strategies AutoTM and Sentinel, respectively. Furthermore, our solution reduces the overhead by 18 times and achieves up to 29% cost reduction compared to AutoTM.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"35 11","pages":"1920-1935"},"PeriodicalIF":5.6,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141772291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-22DOI: 10.1109/TPDS.2024.3431611
Gabriele Mencagli;Patrizio Dazzi;Massimo Coppola
An increasing number of application domains require high-throughput processing to extract insights from massive data streams. The Data Stream Processing (DSP) paradigm provides formal approaches to analyzing structured data streams viewed as special, unbounded relations. The most widely used class of stateful operators in DSP is the one performing sliding-window aggregation, which continuously extracts insights from the most recent portion of the stream. This article presents Springald
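For readers unfamiliar with the operator class mentioned above, the following minimal example shows the semantics of count-based sliding-window aggregation: a window holding the w most recent tuples slides by one tuple and emits one aggregate per slide. It only illustrates the operator's meaning and is not Springald's implementation, which targets high-throughput parallel execution.

```python
# Minimal count-based sliding-window aggregation: emit the sum of the most
# recent w values each time the window slides by one tuple.

from collections import deque

def sliding_sum(stream, w=4):
    window, total = deque(), 0
    for value in stream:
        window.append(value)
        total += value
        if len(window) > w:          # slide: evict the oldest tuple
            total -= window.popleft()
        if len(window) == w:         # one aggregate per fully populated window
            yield total

print(list(sliding_sum([1, 2, 3, 4, 5, 6], w=4)))  # [10, 14, 18]
```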