{"title":"KWT-Tiny: RISC-V Accelerated, Embedded Keyword Spotting Transformer","authors":"Aness Al-Qawlaq, Ajay Kumar M, Deepu John","doi":"arxiv-2407.16026","DOIUrl":null,"url":null,"abstract":"This paper explores the adaptation of Transformerbased models for edge\ndevices through the quantisation and hardware acceleration of the ARM Keyword\nTransformer (KWT) model on a RISC-V platform. The model was targeted to run on\n64kB RAM in bare-metal C using a custom-developed edge AI library. KWT-1 was\nretrained to be 369 times smaller, with only a 10% loss in accuracy through\nreducing output classes from 35 to 2. The retraining and quantisation reduced\nmodel size from 2.42 MB to 1.65 kB. The integration of custom RISC-V\ninstructions that accelerated GELU and SoftMax operations enabled a 5x speedup\nand thus ~5x power reduction in inference, with inference clock cycle counts\ndecreasing from 26 million to 5.5 million clock cycles while incurring a small\narea overhead of approximately 29%. The results demonstrate a viable method for\nporting and accelerating Transformer-based models in low-power IoT devices.","PeriodicalId":501291,"journal":{"name":"arXiv - CS - Performance","volume":"356 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Performance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.16026","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This paper explores the adaptation of Transformer-based models for edge devices through the quantisation and hardware acceleration of the ARM Keyword Transformer (KWT) model on a RISC-V platform. The model was targeted to run within 64 kB of RAM in bare-metal C using a custom-developed edge AI library. KWT-1 was retrained to be 369 times smaller, with only a 10% loss in accuracy, by reducing the number of output classes from 35 to 2. The retraining and quantisation reduced the model size from 2.42 MB to 1.65 kB. The integration of custom RISC-V instructions that accelerate the GELU and SoftMax operations enabled a 5x speedup, and thus an approximately 5x power reduction, in inference: the inference clock cycle count decreased from 26 million to 5.5 million, at a small area overhead of approximately 29%. The results demonstrate a viable method for porting and accelerating Transformer-based models on low-power IoT devices.
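
The abstract does not describe the quantisation scheme itself. The C sketch below illustrates symmetric per-tensor int8 weight quantisation, a common choice when shrinking models for microcontroller deployment; the function names and the symmetric-int8 choice are assumptions for illustration, not the paper's actual method.

```c
#include <math.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative sketch only: symmetric per-tensor int8 quantisation.
 * The paper's actual quantisation scheme and API are not given in the
 * abstract; the names here are hypothetical. */

/* Compute the scale that maps the float range [-max|w|, +max|w|] onto int8. */
static float quantise_scale(const float *w, size_t n)
{
    float max_abs = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float a = fabsf(w[i]);
        if (a > max_abs)
            max_abs = a;
    }
    return (max_abs > 0.0f) ? (max_abs / 127.0f) : 1.0f;
}

/* Quantise float weights to int8: q = round(w / scale), clamped to [-127, 127]. */
static void quantise_int8(const float *w, int8_t *q, size_t n, float scale)
{
    for (size_t i = 0; i < n; i++) {
        long v = lroundf(w[i] / scale);
        if (v > 127)  v = 127;
        if (v < -127) v = -127;
        q[i] = (int8_t)v;
    }
}
```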
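
Likewise, the abstract only states that custom RISC-V instructions accelerate GELU and SoftMax; it does not give their encodings. The fragment below sketches, for illustration only, how such an instruction could be exposed to bare-metal C via the GNU assembler's `.insn` directive, with a plain C fallback for builds without the custom hardware. The opcode, funct fields, fixed-point format, and function name are assumptions, not the paper's encodings.

```c
#include <math.h>
#include <stdint.h>

/* Illustrative sketch only: exposing a hypothetical single-operand GELU
 * custom instruction to C through the RISC-V .insn directive. The CUSTOM_0
 * opcode and the funct3/funct7 values are placeholders. */
#if defined(__riscv) && defined(USE_CUSTOM_GELU)
static inline int32_t gelu_q(int32_t x)
{
    int32_t y;
    __asm__ volatile(".insn r CUSTOM_0, 0x0, 0x0, %0, %1, x0"
                     : "=r"(y)
                     : "r"(x));
    return y;
}
#else
/* Software fallback: tanh-approximation GELU on a Q16.16 fixed-point input. */
static inline int32_t gelu_q(int32_t x)
{
    float xf = x / 65536.0f;
    float g = 0.5f * xf *
              (1.0f + tanhf(0.7978845608f * (xf + 0.044715f * xf * xf * xf)));
    return (int32_t)(g * 65536.0f);
}
#endif
```

The `#if`/`#else` split lets the same model code run on a stock RISC-V core or an off-target host, with the custom instruction enabled only when the accelerated hardware is present.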