{"title":"SWAT:基于 FPGA 的高效 Swin 变压器加速器","authors":"Qiwei Dong, Xiaoru Xie, Zhongfeng Wang","doi":"10.1109/ASP-DAC58780.2024.10473931","DOIUrl":null,"url":null,"abstract":"Swin Transformer achieves greater efficiency than Vision Transformer by utilizing local self-attention and shifted windows. However, existing hardware accelerators designed for Transformer have not been optimized for the unique computation flow and data reuse property in Swin Transformer, resulting in lower hardware utilization and extra memory accesses. To address this issue, we develop SWAT, an efficient Swin Transformer Accelerator based on FPGA. Firstly, to eliminate the redundant computations in shifted windows, a novel tiling strategy is employed, which helps the developed multiplier array to fully utilize the sparsity. Additionally, we deploy a dynamic pipeline interleaving dataflow, which not only reduces the processing latency but also maximizes data reuse, thereby decreasing access to memories. Furthermore, customized quantization strategies and approximate calculations for non-linear calculations are adopted to simplify the hardware complexity with negligible network accuracy loss. We implement SWAT on the Xilinx Alveo U50 platform and evaluate it with Swin-T on the ImageNet dataset. The proposed architecture can achieve improvements of $2.02 \\times \\sim 3.11 \\times$ in power efficiency compared to existing Transformer accelerators on FPGAs.","PeriodicalId":518586,"journal":{"name":"2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC)","volume":"340 2","pages":"515-520"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SWAT: An Efficient Swin Transformer Accelerator Based on FPGA\",\"authors\":\"Qiwei Dong, Xiaoru Xie, Zhongfeng Wang\",\"doi\":\"10.1109/ASP-DAC58780.2024.10473931\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Swin Transformer achieves greater efficiency than Vision Transformer by utilizing local self-attention and shifted windows. However, existing hardware accelerators designed for Transformer have not been optimized for the unique computation flow and data reuse property in Swin Transformer, resulting in lower hardware utilization and extra memory accesses. To address this issue, we develop SWAT, an efficient Swin Transformer Accelerator based on FPGA. Firstly, to eliminate the redundant computations in shifted windows, a novel tiling strategy is employed, which helps the developed multiplier array to fully utilize the sparsity. Additionally, we deploy a dynamic pipeline interleaving dataflow, which not only reduces the processing latency but also maximizes data reuse, thereby decreasing access to memories. Furthermore, customized quantization strategies and approximate calculations for non-linear calculations are adopted to simplify the hardware complexity with negligible network accuracy loss. We implement SWAT on the Xilinx Alveo U50 platform and evaluate it with Swin-T on the ImageNet dataset. 
The proposed architecture can achieve improvements of $2.02 \\\\times \\\\sim 3.11 \\\\times$ in power efficiency compared to existing Transformer accelerators on FPGAs.\",\"PeriodicalId\":518586,\"journal\":{\"name\":\"2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC)\",\"volume\":\"340 2\",\"pages\":\"515-520\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ASP-DAC58780.2024.10473931\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASP-DAC58780.2024.10473931","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
SWAT: An Efficient Swin Transformer Accelerator Based on FPGA
Swin Transformer achieves greater efficiency than Vision Transformer by using local self-attention within shifted windows. However, existing hardware accelerators designed for Transformers are not optimized for the unique computation flow and data reuse properties of Swin Transformer, resulting in low hardware utilization and extra memory accesses. To address this issue, we develop SWAT, an efficient Swin Transformer accelerator based on FPGA. First, to eliminate the redundant computations in shifted windows, a novel tiling strategy is employed, which lets the developed multiplier array fully exploit the sparsity. Additionally, we deploy a dynamic pipeline-interleaving dataflow that not only reduces processing latency but also maximizes data reuse, thereby decreasing memory accesses. Furthermore, customized quantization strategies and approximate calculations for non-linear functions are adopted to reduce hardware complexity with negligible loss of network accuracy. We implement SWAT on the Xilinx Alveo U50 platform and evaluate it with Swin-T on the ImageNet dataset. The proposed architecture achieves improvements of $2.02\times$ to $3.11\times$ in power efficiency over existing Transformer accelerators on FPGAs.
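The redundancy the abstract refers to comes from the shifted-window mechanism of the original Swin Transformer: feature maps are cyclically rolled by half a window before partitioning, and windows that wrap around the border are masked during attention. A minimal PyTorch sketch of that mechanism follows; tensor shapes and names are illustrative and not taken from the SWAT paper.

```python
# Sketch of Swin's window partition and cyclic shift (not SWAT's hardware
# tiling itself); shapes are chosen only for demonstration.
import torch

def window_partition(x: torch.Tensor, window_size: int) -> torch.Tensor:
    """Split a (B, H, W, C) feature map into non-overlapping windows.

    Returns (num_windows * B, window_size, window_size, C), so self-attention
    can run independently inside each window.
    """
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)

# Shifted-window attention: cyclically roll the map by half a window before
# partitioning. Windows that wrap around the border mix unrelated tokens, so
# their attention scores are masked out -- on hardware, those masked entries
# are the structured sparsity that SWAT's tiling strategy targets.
B, H, W, C, ws = 1, 8, 8, 96, 4
x = torch.randn(B, H, W, C)
shifted = torch.roll(x, shifts=(-ws // 2, -ws // 2), dims=(1, 2))
windows = window_partition(shifted, ws)
print(windows.shape)  # torch.Size([4, 4, 4, 96])
```

On an accelerator, the masked positions of wrap-around windows become multiplications that contribute nothing to the output, which is the redundant work a shift-aware tiling scheme can skip.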
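The abstract does not detail SWAT's customized quantization scheme, so the sketch below shows only a generic per-tensor symmetric int8 quantizer of the kind commonly mapped onto FPGA arithmetic units; the scale handling here is an assumption, not the paper's method.

```python
# Generic per-tensor symmetric int8 quantization (assumed scheme, not SWAT's).
import torch

def quantize_int8(x: torch.Tensor):
    """Map a float tensor to int8 with one symmetric per-tensor scale."""
    scale = x.abs().max() / 127.0  # largest magnitude maps to +/-127
    q = torch.clamp(torch.round(x / scale), -128, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(96, 96)
q, s = quantize_int8(w)
print((dequantize(q, s) - w).abs().max())  # per-element quantization error
```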