{"title":"CAST: An innovative framework for Cross-dimensional Attention Structure in Transformers","authors":"Dezheng Wang , Xiaoyi Wei , Congyan Chen","doi":"10.1016/j.patcog.2024.111153","DOIUrl":null,"url":null,"abstract":"<div><div>Dominant Transformer-based approaches rely solely on attention mechanisms and their variations, primarily emphasizing capturing crucial information within the temporal dimension. For enhanced performance, we introduce a novel architecture for Cross-dimensional Attention Structure in Transformers (CAST), which presents an innovative approach in Transformer-based models, emphasizing attention mechanisms across both temporal and spatial dimensions. The core component of CAST, the cross-dimensional attention structure (CAS), captures dependencies among multivariable time series in both temporal and spatial dimensions. The Static Attention Mechanism (SAM) is incorporated to simplify and enhance multivariate time series forecasting performance. This integration effectively reduces complexity, leading to a more efficient model. CAST demonstrates robust and efficient capabilities in predicting multivariate time series, with the simplicity of SAM broadening its applicability to various tasks. Beyond time series forecasting, CAST also shows promise in CV classification tasks. By integrating CAS into pre-trained image models, CAST facilitates spatiotemporal reasoning. 
Experimental results highlight the superior performance of CAST in time series forecasting and its competitive edge in CV classification tasks.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"159 ","pages":"Article 111153"},"PeriodicalIF":7.5000,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S003132032400904X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Dominant Transformer-based approaches rely solely on attention mechanisms and their variants, and primarily emphasize capturing crucial information along the temporal dimension. To improve performance, we introduce the Cross-dimensional Attention Structure in Transformers (CAST), a novel architecture that applies attention across both the temporal and spatial dimensions. The core component of CAST, the cross-dimensional attention structure (CAS), captures dependencies among multivariate time series in both the temporal and spatial dimensions. A Static Attention Mechanism (SAM) is incorporated to simplify the model and enhance multivariate time series forecasting performance; this integration effectively reduces complexity, yielding a more efficient model. CAST demonstrates robust and efficient multivariate time series forecasting, and the simplicity of SAM broadens its applicability to a variety of tasks. Beyond time series forecasting, CAST also shows promise in computer vision (CV) classification: by integrating CAS into pre-trained image models, CAST facilitates spatiotemporal reasoning. Experimental results highlight the superior performance of CAST in time series forecasting and its competitive edge in CV classification tasks.
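The abstract describes attention applied along both the temporal axis (across time steps) and the spatial axis (across variables) of a multivariate series. The sketch below is only an illustration of that general idea, not the paper's actual CAS or SAM design: the function names, the plain scaled dot-product formulation, and the simple averaging fusion are all assumptions made for clarity.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Scaled dot-product self-attention; rows of x are treated as tokens."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)          # (tokens, tokens) similarity
    return softmax(scores, axis=-1) @ x    # weighted mix of token rows

def cross_dimensional_attention(x):
    """x: (T, N) series with T time steps and N variables.
    Attend along time (rows as tokens) and along variables
    (columns as tokens), then fuse by averaging -- an assumed fusion rule."""
    temporal = self_attention(x)         # dependencies across time steps
    spatial = self_attention(x.T).T      # dependencies across variables
    return (temporal + spatial) / 2.0

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))          # 8 time steps, 4 variables
y = cross_dimensional_attention(x)
print(y.shape)                           # (8, 4): same shape as the input
```

Transposing the input so that variables become the tokens is one simple way to reuse a single attention routine for both dimensions; a real implementation would add learned query/key/value projections and a learned fusion.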
About the journal:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.