{"title":"SS ViT: Observing pathologies of multi-layer perceptron weights and re-setting vision transformer","authors":"Chao Ning, Hongping Gan","doi":"10.1016/j.patcog.2025.111422","DOIUrl":null,"url":null,"abstract":"<div><div>Vision Transformer (ViT) usually adopts a columnar or hierarchical structure with four stages, where identical block settings are applied within the same stage. To achieve more nuanced configurations for each ViT block, additional search is conducted to explore stronger architectures. However, the search cost is typically expensive and the results may not be transferable to different ViT architectures. In this paper, we present a DFC module, which exploits two lightweight grouped linear (GL) layers to learn the representations of the expansion layer between two fully connected layers and the nonlinear activation of multi-layer perceptron (MLP), respectively. Afterwards, we introduce the DFC module into vanilla ViT and analyze the learned weights of its GL layers. Interestingly, several pathologies arise even though the GL layers share the same initialization strategy. For instance, the GL layer weights display different patterns across various depths, and the GL1 and GL2 weights have different patterns in the same depth. We progressively compare and analyze these pathologies and derive a specific setting (SS) for ViT blocks at different depths. Experimental results demonstrate that SS generically improves the performance of various ViT architectures, not only enhancing accuracy but also reducing inference time and computational complexity. For example, on ImageNet-1k classification task, SS yields a significant 0.8% accuracy improvement, approximately 12.9% faster inference speed, and 25% fewer floating-point operations (FLOPs) on PVTv2 model. The codes and trained models are available at <span><span>https://github.com/ICSResearch/SS</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"162 ","pages":"Article 111422"},"PeriodicalIF":7.5000,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0031320325000822","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Vision Transformer (ViT) usually adopts a columnar or hierarchical structure with four stages, where identical block settings are applied within the same stage. To achieve more nuanced configurations for each ViT block, additional search is typically conducted to explore stronger architectures. However, such searches are expensive, and their results may not transfer to different ViT architectures. In this paper, we present a DFC module, which exploits two lightweight grouped linear (GL) layers to learn representations of the expansion layer between the two fully connected layers and of the nonlinear activation of the multi-layer perceptron (MLP), respectively. We then introduce the DFC module into a vanilla ViT and analyze the learned weights of its GL layers. Interestingly, several pathologies arise even though the GL layers share the same initialization strategy. For instance, the GL layer weights display different patterns across depths, and the GL1 and GL2 weights display different patterns at the same depth. We progressively compare and analyze these pathologies and derive a specific setting (SS) for ViT blocks at different depths. Experimental results demonstrate that SS generically improves the performance of various ViT architectures, not only enhancing accuracy but also reducing inference time and computational complexity. For example, on the ImageNet-1k classification task, SS yields a significant 0.8% accuracy improvement, approximately 12.9% faster inference, and 25% fewer floating-point operations (FLOPs) on the PVTv2 model. The code and trained models are available at https://github.com/ICSResearch/SS.
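The abstract describes the DFC module only at a high level: a standard ViT MLP block (two fully connected layers around a nonlinear activation) is augmented with two lightweight grouped linear (GL) layers whose learned weights are then inspected. As a rough illustration only, here is a minimal PyTorch sketch of such a block. The class names (`GroupedLinear`, `DFCMlp`), the group count, and the exact placement of GL1 and GL2 are assumptions made for illustration, not the authors' implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn

class GroupedLinear(nn.Module):
    """Lightweight grouped linear layer: channels are split into groups and
    each group gets its own linear map (a 1x1 grouped convolution over tokens)."""
    def __init__(self, in_features: int, out_features: int, groups: int):
        super().__init__()
        self.proj = nn.Conv1d(in_features, out_features, kernel_size=1, groups=groups)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, channels); convolve over the channel dim, then restore layout
        return self.proj(x.transpose(1, 2)).transpose(1, 2)

class DFCMlp(nn.Module):
    """Hypothetical MLP block with two GL layers: GL1 models the expansion
    layer between the two fully connected layers, GL2 the representation
    around the nonlinear activation (placement assumed from the abstract)."""
    def __init__(self, dim: int, expansion: int = 4, groups: int = 8):
        super().__init__()
        hidden = dim * expansion
        self.fc1 = nn.Linear(dim, hidden)
        self.gl1 = GroupedLinear(hidden, hidden, groups)  # GL1: expansion-layer representation
        self.act = nn.GELU()
        self.gl2 = GroupedLinear(hidden, hidden, groups)  # GL2: post-activation representation
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc2(self.gl2(self.act(self.gl1(self.fc1(x)))))

# Usage example: a batch of 2 images, 196 tokens, embedding dim 64.
tokens = torch.randn(2, 196, 64)
block = DFCMlp(dim=64)
print(block(tokens).shape)  # torch.Size([2, 196, 64])
```

Because each GL layer acts within channel groups, its parameter count is that of a full linear layer divided by the number of groups, which keeps the added layers lightweight relative to the MLP they probe.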
Journal Introduction:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.