{"title":"Attention in SRAM on Tenstorrent Grayskull","authors":"Moritz Thüning","doi":"arxiv-2407.13885","DOIUrl":null,"url":null,"abstract":"When implementations of the Transformer's self-attention layer utilize SRAM\ninstead of DRAM, they can achieve significant speedups. The Tenstorrent\nGrayskull architecture provides a large SRAM, distributed across a grid of\ncores. This work presents a fused kernel for Grayskull, that exclusively\nutilizes its large SRAM by combining matrix multiplication, attention score\nscaling and Softmax operations. Additionally, a dedicated Softmax kernel\nutilizing the SRAM and a CPU implementation serving as a baseline are\npresented. The Softmax operation consumes most of the runtime in the\ncomputation of attention weights from queries and keys on Grayskull. The\nspeedup of the dedicated Softmax kernel compared to the CPU implementation is\nup to $10 \\times$, and the Softmax implementation inside the fused kernel is\napproximately $1.8 \\times$ faster than the dedicated Softmax kernel. The time\nand memory complexity of all implementations is quadratic in sequence length.\nCurrently, the Grayskull e150 is approximately $30 \\times$ cheaper for the\ngeneral public than an Nvidia H100 PCIe (a state-of-the-art GPU) and offers\napproximately $1.5 \\times$ more SRAM.","PeriodicalId":501291,"journal":{"name":"arXiv - CS - Performance","volume":"2 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Performance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.13885","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
When implementations of the Transformer's self-attention layer utilize SRAM instead of DRAM, they can achieve significant speedups. The Tenstorrent Grayskull architecture provides a large SRAM distributed across a grid of cores. This work presents a fused kernel for Grayskull that exclusively utilizes its large SRAM by combining matrix multiplication, attention-score scaling, and Softmax operations. Additionally, a dedicated Softmax kernel utilizing the SRAM and a CPU implementation serving as a baseline are presented. The Softmax operation consumes most of the runtime in the computation of attention weights from queries and keys on Grayskull. The speedup of the dedicated Softmax kernel over the CPU implementation is up to $10 \times$, and the Softmax implementation inside the fused kernel is approximately $1.8 \times$ faster than the dedicated Softmax kernel. The time and memory complexity of all implementations is quadratic in the sequence length. Currently, the Grayskull e150 is approximately $30 \times$ cheaper for the general public than an Nvidia H100 PCIe (a state-of-the-art GPU) and offers approximately $1.5 \times$ more SRAM.
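
The computation the abstract refers to, attention weights from queries and keys, is the matrix multiplication $Q K^T$, a scaling of the scores by $1/\sqrt{d_k}$, and a row-wise Softmax. As a point of reference only, a minimal NumPy sketch of that computation is shown below; it is not the paper's Grayskull kernels or its CPU baseline, and all names and shapes are illustrative.

```python
import numpy as np

def attention_weights(Q: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Compute softmax(Q K^T / sqrt(d_k)) row-wise.

    Q and K have shape (N, d_k); the score matrix S has shape (N, N),
    which is where the quadratic time and memory cost in the
    sequence length N comes from.
    """
    d_k = Q.shape[-1]
    S = (Q @ K.T) / np.sqrt(d_k)           # matrix multiplication + attention-score scaling
    S -= S.max(axis=-1, keepdims=True)     # subtract row max for a numerically stable Softmax
    E = np.exp(S)
    return E / E.sum(axis=-1, keepdims=True)

# Illustrative example: sequence length N = 128, head dimension d_k = 64
rng = np.random.default_rng(0)
Q = rng.standard_normal((128, 64), dtype=np.float32)
K = rng.standard_normal((128, 64), dtype=np.float32)
W = attention_weights(Q, K)
assert np.allclose(W.sum(axis=-1), 1.0, atol=1e-5)  # each row of weights sums to 1
```

In this sketch the full $N \times N$ score matrix is materialized before the Softmax, which mirrors the quadratic memory behavior stated in the abstract; the fused kernel described in the paper instead keeps these intermediate results in Grayskull's distributed SRAM rather than writing them to DRAM.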