{"title":"Implementation of an Accurate and Efficient Compensated DGEMM for 64-bit ARMv8 Multi-Core Processors","authors":"Hao Jiang, Feng Wang, Kuan Li, Canqun Yang, Kejia Zhao, Chun Huang","doi":"10.1109/ICPADS.2015.68","DOIUrl":null,"url":null,"abstract":"This paper presents an implementation of an accurate and efficient compensated Double-precision General Matrix Multiplication (DGEMM) based on OpenBLAS for 64-bit ARMv8 multi-core processors. Due to cancellation phenomena in floating point arithmetic, the results of DGEMM may not be as accurate as expected. In order to increase the accuracy of DGEMM, we compensate the error introduced by its dot product kernel (GEBP) by applying an error-free transformation to rewrite the kernel in assembly language. We optimize the computations in the inner kernel through exploiting loop unrolling, instruction scheduling and software-implemented register rotation to exploit instruction level parallelism (ILP). We also conduct a priori error analysis of the derived CompDGEMM. Our compensated DGEMM is as accurate as the existing quadruple precision GEMM using MBLAS, but is up to 6.4x faster. Our parallel implementation achieves good performance and scalability under varying thread counts across a range of matrix sizes evaluated.","PeriodicalId":231517,"journal":{"name":"2015 IEEE 21st International Conference on Parallel and Distributed Systems (ICPADS)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE 21st International Conference on Parallel and Distributed Systems (ICPADS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPADS.2015.68","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
This paper presents an implementation of an accurate and efficient compensated Double-precision General Matrix Multiplication (DGEMM) based on OpenBLAS for 64-bit ARMv8 multi-core processors. Owing to cancellation in floating-point arithmetic, the results of DGEMM may not be as accurate as expected. To increase the accuracy of DGEMM, we compensate the error introduced by its dot product kernel (GEBP) by applying an error-free transformation and rewriting the kernel in assembly language. We optimize the computations in the inner kernel using loop unrolling, instruction scheduling and software-implemented register rotation to exploit instruction-level parallelism (ILP). We also conduct an a priori error analysis of the derived CompDGEMM. Our compensated DGEMM is as accurate as the existing quadruple-precision GEMM using MBLAS, but up to 6.4x faster. Our parallel implementation achieves good performance and scalability across the thread counts and matrix sizes evaluated.
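The compensation idea referenced in the abstract can be sketched in scalar C. The paper's actual GEBP kernel is hand-written ARMv8 assembly with software register rotation; the snippet below only illustrates the underlying error-free transformations (an FMA-based TwoProd and Knuth's TwoSum) combined into a compensated dot product in the style of Ogita, Rump and Oishi. The function names (two_prod, two_sum, comp_dot) and the plain scalar loop are illustrative assumptions, not the authors' implementation.

#include <math.h>
#include <stddef.h>

/* Error-free transformation of a product: a*b == p + e exactly (needs FMA). */
static void two_prod(double a, double b, double *p, double *e) {
    *p = a * b;
    *e = fma(a, b, -*p);   /* rounding error of the product */
}

/* Knuth's TwoSum: a + b == s + e exactly, for any ordering of a and b. */
static void two_sum(double a, double b, double *s, double *e) {
    *s = a + b;
    double z = *s - a;
    *e = (a - (*s - z)) + (b - z);
}

/* Compensated dot product (illustrative, not the paper's GEBP kernel):
 * the result is roughly as accurate as if the dot product had been
 * evaluated in twice the working precision and then rounded once. */
double comp_dot(const double *x, const double *y, size_t n) {
    double s = 0.0, c = 0.0;           /* running sum and accumulated error */
    for (size_t i = 0; i < n; i++) {
        double p, ep, es;
        two_prod(x[i], y[i], &p, &ep); /* exact split of the product */
        two_sum(s, p, &s, &es);        /* exact split of the accumulation */
        c += ep + es;                  /* gather the rounding errors */
    }
    return s + c;                      /* compensate the final result */
}

With hardware FMA (available on ARMv8), two_prod costs only two floating-point instructions, which is why the compensated kernel can stay close to the speed of the uncompensated one while recovering the accuracy lost to cancellation.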