Title: A hybrid lightweight transformer architecture based on fuzzy attention prototypes for multivariate time series classification
Authors: Yan Gu, Feng Jin, Jun Zhao, Wei Wang
DOI: 10.1016/j.ins.2025.121942
Journal: Information Sciences, Volume 703, Article 121942
Impact factor: 8.1
Publication date: 2025-02-05
Publication type: Journal article
URL: https://www.sciencedirect.com/science/article/pii/S002002552500074X
Citations: 0
Abstract
Multivariate time series classification has become a research hotspot owing to the field's rapid development. Existing methods mainly focus on the feature correlations of time series, ignoring data uncertainty and sample sparsity. To address these challenges, a hybrid lightweight Transformer architecture based on fuzzy attention prototypes, named FapFormer, is proposed, in which a convolutional spanning Vision Transformer module is built to perform feature extraction and provide inductive bias, incorporating dynamic feature sampling that adaptively selects key features to increase training efficiency. A progressive branching convolution (PBC) block and a convolutional self-attention (CSA) block are then introduced to extract both local and global features. Furthermore, a feature complementation strategy is implemented to enable the CSA block to specialize in global dependencies, overcoming the local receptive field limitations of the PBC block. Finally, a novel fuzzy attention prototype learning method is proposed to represent class prototypes under data uncertainty, which employs the distances between prototypes and low-dimensional embeddings for classification. Experiments conducted on both the UEA benchmark datasets and a practical industrial dataset demonstrate that FapFormer outperforms several state-of-the-art methods, achieving improved accuracy and reduced computational complexity, even under conditions of data uncertainty and sample sparsity.
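The abstract's final step, classifying by the distance between low-dimensional embeddings and class prototypes, can be illustrated with a minimal sketch. This is not the paper's fuzzy attention prototype method (whose details are not given here); it is a generic nearest-prototype classifier with mean-embedding prototypes, shown only to clarify the distance-based decision rule:

```python
import numpy as np

def class_prototypes(embeddings, labels, num_classes):
    """One prototype per class; the mean embedding is a common simple choice."""
    protos = np.zeros((num_classes, embeddings.shape[1]))
    for c in range(num_classes):
        protos[c] = embeddings[labels == c].mean(axis=0)
    return protos

def classify(queries, protos):
    """Assign each query embedding to the class with the closest prototype."""
    # Euclidean distance from every query to every prototype
    dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy example: two well-separated classes in a 2-D embedding space
emb = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9]])
lab = np.array([0, 0, 1, 1])
protos = class_prototypes(emb, lab, num_classes=2)
pred = classify(np.array([[0.2, 0.0], [4.8, 5.2]]), protos)
print(pred.tolist())  # [0, 1]
```

In the paper's setting the prototypes are learned via fuzzy attention rather than computed as plain means, but the classification rule sketched above, nearest prototype in embedding space, is the same family of decision.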
About the journal:
Information Sciences (Informatics and Computer Science, Intelligent Systems, Applications) is an esteemed international journal that focuses on publishing original and creative research findings in the field of information sciences. We also feature a limited number of timely tutorial and surveying contributions.
Our journal aims to cater to a diverse audience, including researchers, developers, managers, strategic planners, graduate students, and anyone interested in staying up-to-date with cutting-edge research in information science, knowledge engineering, and intelligent systems. While readers are expected to share a common interest in information science, they come from varying backgrounds such as engineering, mathematics, statistics, physics, computer science, cell biology, molecular biology, management science, cognitive science, neurobiology, behavioral sciences, and biochemistry.