Title: CWCT: An Effective Vision Transformer using improved Cross-Window Self-Attention and CNN
Authors: Mengxing Li, Ying Song, Bo Wang
DOI: 10.1109/VRW55335.2022.00041
Published in: 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), March 2022
Abstract
In constructing the metaverse, achieving rich interaction requires clear semantic information for every object, and image classification plays an important role in providing it. Building on the CMT transformer and an improved Cross-Shaped Window Self-Attention, this paper presents CWCT, an image classification framework that combines CNNs and transformers. Because images are high-resolution, standard vision transformers incur high model complexity and heavy computation. To address this, CWCT captures local features with an optimized Cross-Window Self-Attention mechanism and global features with a stack of convolutional neural network (CNN) layers. The structure can flexibly model at multiple scales and has computational complexity linear in image size. Compared with the original CMT network, classification accuracy improves on ImageNet-1k and on a randomly sampled subset of the Tiny-ImageNet dataset. Thanks to the optimized Cross-Window Self-Attention, CWCT also achieves a significant improvement in running speed and model complexity over CMT.
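To make the complexity claim concrete, the following is a minimal NumPy sketch of cross-shaped window self-attention in the style the abstract describes (the actual CWCT implementation is not given in this abstract, so the function names, the identity Q/K/V projections, and the channel split are illustrative assumptions). Half the channels attend within horizontal stripes (rows) and half within vertical stripes (columns), so the attention cost is O(H·W·(H+W)) rather than the O((H·W)²) of global self-attention:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def stripe_attention(x):
    """Toy self-attention within each stripe.
    x: (num_stripes, tokens_per_stripe, dim); identity Q/K/V
    projections are used here purely for illustration."""
    d = x.shape[-1]
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(d)
    return softmax(scores) @ x

def cross_window_attention(feat):
    """feat: (H, W, C) feature map. The first C//2 channels attend
    along rows (horizontal stripes), the rest along columns
    (vertical stripes), forming a cross-shaped receptive field."""
    H, W, C = feat.shape
    half = C // 2
    # horizontal branch: each of the H rows is a stripe of W tokens
    horiz = stripe_attention(feat[:, :, :half])                    # (H, W, half)
    # vertical branch: each of the W columns is a stripe of H tokens
    vert = stripe_attention(feat[:, :, half:].transpose(1, 0, 2))  # (W, H, half)
    return np.concatenate([horiz, vert.transpose(1, 0, 2)], axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))
y = cross_window_attention(x)
print(y.shape)  # -> (8, 8, 16)
```

Each token's attention span is one row plus one column, which is why doubling the image side length only doubles the per-token cost instead of quadrupling it as in global attention.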