{"title":"在英特尔、AMD和富士通处理器上批量、小矩阵和矩形矩阵乘法的缓存优化和性能建模","authors":"Sameer Deshmukh, Rio Yokota, George Bosilca","doi":"arxiv-2311.07602","DOIUrl":null,"url":null,"abstract":"Factorization and multiplication of dense matrices and tensors are critical,\nyet extremely expensive pieces of the scientific toolbox. Careful use of low\nrank approximation can drastically reduce the computation and memory\nrequirements of these operations. In addition to a lower arithmetic complexity,\nsuch methods can, by their structure, be designed to efficiently exploit modern\nhardware architectures. The majority of existing work relies on batched BLAS\nlibraries to handle the computation of many small dense matrices. We show that\nthrough careful analysis of the cache utilization, register accumulation using\nSIMD registers and a redesign of the implementation, one can achieve\nsignificantly higher throughput for these types of batched low-rank matrices\nacross a large range of block and batch sizes. We test our algorithm on 3 CPUs\nusing diverse ISAs -- the Fujitsu A64FX using ARM SVE, the Intel Xeon 6148\nusing AVX-512 and AMD EPYC 7502 using AVX-2, and show that our new batching\nmethodology is able to obtain more than twice the throughput of vendor\noptimized libraries for all CPU architectures and problem sizes.","PeriodicalId":501256,"journal":{"name":"arXiv - CS - Mathematical Software","volume":"10 4","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cache Optimization and Performance Modeling of Batched, Small, and Rectangular Matrix Multiplication on Intel, AMD, and Fujitsu Processors\",\"authors\":\"Sameer Deshmukh, Rio Yokota, George Bosilca\",\"doi\":\"arxiv-2311.07602\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Factorization and multiplication of dense matrices and tensors are critical,\\nyet extremely expensive pieces of the scientific toolbox. Careful use of low\\nrank approximation can drastically reduce the computation and memory\\nrequirements of these operations. In addition to a lower arithmetic complexity,\\nsuch methods can, by their structure, be designed to efficiently exploit modern\\nhardware architectures. The majority of existing work relies on batched BLAS\\nlibraries to handle the computation of many small dense matrices. We show that\\nthrough careful analysis of the cache utilization, register accumulation using\\nSIMD registers and a redesign of the implementation, one can achieve\\nsignificantly higher throughput for these types of batched low-rank matrices\\nacross a large range of block and batch sizes. 
We test our algorithm on 3 CPUs\\nusing diverse ISAs -- the Fujitsu A64FX using ARM SVE, the Intel Xeon 6148\\nusing AVX-512 and AMD EPYC 7502 using AVX-2, and show that our new batching\\nmethodology is able to obtain more than twice the throughput of vendor\\noptimized libraries for all CPU architectures and problem sizes.\",\"PeriodicalId\":501256,\"journal\":{\"name\":\"arXiv - CS - Mathematical Software\",\"volume\":\"10 4\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-11-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Mathematical Software\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2311.07602\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Mathematical Software","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2311.07602","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cache Optimization and Performance Modeling of Batched, Small, and Rectangular Matrix Multiplication on Intel, AMD, and Fujitsu Processors
Factorization and multiplication of dense matrices and tensors are critical, yet extremely expensive, pieces of the scientific toolbox. Careful use of low-rank approximation can drastically reduce the computation and memory requirements of these operations. In addition to their lower arithmetic complexity, such methods can, by their structure, be designed to exploit modern hardware architectures efficiently. The majority of existing work relies on batched BLAS libraries to handle the computation of many small dense matrices.
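
For concreteness, a batched GEMM computes, for every problem b in the batch, C_b = alpha * A_b * B_b + beta * C_b over many small, independent matrices. The plain-C sketch below spells out these reference semantics under an assumed column-major, fixed-stride batch layout; the function name and parameters are illustrative, not part of any vendor API or of the paper's implementation.

    #include <stddef.h>

    /* Reference semantics of a strided batched GEMM (illustrative sketch):
     * for each problem b, C_b = alpha * A_b * B_b + beta * C_b.
     * Matrices are column-major; problem b starts at a fixed stride
     * into each of the A, B, and C buffers. */
    void dgemm_batch_ref(int m, int n, int k, double alpha,
                         const double *A, int lda, size_t strideA,
                         const double *B, int ldb, size_t strideB,
                         double beta, double *C, int ldc, size_t strideC,
                         int batch)
    {
        for (int b = 0; b < batch; ++b) {
            const double *Ab = A + (size_t)b * strideA;
            const double *Bb = B + (size_t)b * strideB;
            double       *Cb = C + (size_t)b * strideC;
            for (int j = 0; j < n; ++j) {
                for (int i = 0; i < m; ++i) {
                    double acc = 0.0;
                    for (int p = 0; p < k; ++p)
                        acc += Ab[i + (size_t)p * lda] * Bb[p + (size_t)j * ldb];
                    Cb[i + (size_t)j * ldc] = alpha * acc + beta * Cb[i + (size_t)j * ldc];
                }
            }
        }
    }

Vendor batched BLAS interfaces expose essentially this operation; the performance question the paper addresses is how the loops are ordered, blocked, and vectorized, not what is computed.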
We show that, through careful analysis of cache utilization, accumulation of intermediate results in SIMD registers, and a redesign of the implementation, one can achieve significantly higher throughput for these batched low-rank matrix operations across a large range of block and batch sizes.
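
As an illustration of the register-accumulation idea (a minimal sketch, not the paper's kernel), the AVX-512 micro-kernel below keeps an 8x4 block of C in four 512-bit registers for the entire k-loop, so each element of C is loaded and stored exactly once; the packed operand layouts are assumptions made for the example.

    #include <immintrin.h>

    /* Hypothetical 8x4 double-precision micro-kernel for AVX-512.
     * Assumed packing: A is an 8 x K panel stored column by column
     * (element (i,k) at A[8*k + i]); B is a K x 4 panel stored row by
     * row (element (k,j) at B[4*k + j]); C is column-major with
     * leading dimension ldc. The C block never leaves the registers
     * inside the k-loop. */
    static void microkernel_8x4(int K, const double *A, const double *B,
                                double *C, int ldc)
    {
        __m512d c0 = _mm512_loadu_pd(&C[0 * ldc]);
        __m512d c1 = _mm512_loadu_pd(&C[1 * ldc]);
        __m512d c2 = _mm512_loadu_pd(&C[2 * ldc]);
        __m512d c3 = _mm512_loadu_pd(&C[3 * ldc]);

        for (int k = 0; k < K; ++k) {
            __m512d a = _mm512_loadu_pd(&A[8 * k]);   /* 8 rows of column k */
            c0 = _mm512_fmadd_pd(a, _mm512_set1_pd(B[4 * k + 0]), c0);
            c1 = _mm512_fmadd_pd(a, _mm512_set1_pd(B[4 * k + 1]), c1);
            c2 = _mm512_fmadd_pd(a, _mm512_set1_pd(B[4 * k + 2]), c2);
            c3 = _mm512_fmadd_pd(a, _mm512_set1_pd(B[4 * k + 3]), c3);
        }

        _mm512_storeu_pd(&C[0 * ldc], c0);
        _mm512_storeu_pd(&C[1 * ldc], c1);
        _mm512_storeu_pd(&C[2 * ldc], c2);
        _mm512_storeu_pd(&C[3 * ldc], c3);
    }

The same structure carries over to the other two ISAs, with 512-bit SVE vectors on the A64FX (8 doubles per register) and 256-bit AVX2 vectors on the EPYC 7502 (4 doubles per register).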
We test our algorithm on three CPUs with diverse ISAs: the Fujitsu A64FX using ARM SVE, the Intel Xeon 6148 using AVX-512, and the AMD EPYC 7502 using AVX2. Our new batching methodology obtains more than twice the throughput of vendor-optimized libraries on all three architectures and across all problem sizes.