ECTFormer: An efficient Conv-Transformer model design for image recognition
Pattern Recognition (Q1, Computer Science, Artificial Intelligence; IF 7.5)
DOI: 10.1016/j.patcog.2024.111092
Published: 2024-10-25 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0031320324008434
Citations: 0
Abstract
Since the success of Vision Transformers (ViTs), there has been growing interest in the computer vision community in combining ConvNets and Transformers. While such hybrid models have demonstrated state-of-the-art performance, many of them are too large and complex to deploy on edge devices in real-world applications. To address this challenge, we propose an efficient hybrid network called ECTFormer that leverages the strengths of both ConvNets and Transformers while accounting for both model performance and inference speed. Specifically, our approach involves: (1) optimizing the combination of convolution kernels by dynamically adjusting kernel sizes based on the scale of the feature tensors; (2) revisiting the existing overlapping patchify stem, which not only reduces model size but also propagates fine-grained patches for improved performance; and (3) introducing an efficient single-head self-attention mechanism, in place of the multi-head self-attention of the base Transformer, to minimize the growth in model size and boost inference speed, overcoming the bottlenecks of ViTs. In experiments on ImageNet-1K, ECTFormer demonstrates not only comparable or higher top-1 accuracy but also faster inference on both GPUs and edge devices compared to other efficient networks.
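The abstract does not give the exact formulation of ECTFormer's single-head self-attention, but the speed argument is easy to see in a minimal sketch: with a single head there is no per-head split, reshape, or transpose of the query/key/value tensors at inference time. The following NumPy sketch shows standard scaled dot-product self-attention with one head over a sequence of patch tokens; all shapes and weight names here are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def single_head_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention (illustrative sketch).

    x          : (n, d) matrix of n patch tokens with embedding dim d
    wq, wk, wv : (d, d) projection weights (hypothetical, not from the paper)

    Unlike multi-head attention, q/k/v are used whole: there is no
    split into heads and no reshape/transpose overhead.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (n, n) attention logits
    return softmax(scores) @ v                # (n, d) attended tokens

# Usage with random weights and 4 tokens of dimension 8.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
wq, wk, wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = single_head_attention(x, wq, wk, wv)   # shape (4, 8)
```

Multi-head attention would instead reshape q, k, and v into `(heads, n, d/heads)` blocks and run the same computation per head; dropping that machinery is where the sketch's inference-speed saving comes from.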
Journal introduction:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.