{"title":"一种基于改进局部线性嵌入和滑动窗口卷积的变压器混合网络用于脑电图解码。","authors":"Ketong Li, Peng Chen, Qian Chen, Xiangyun Li","doi":"10.1088/1741-2552/ada30b","DOIUrl":null,"url":null,"abstract":"<p><p><i>Objective</i>. Brain-computer interface(BCI) is leveraged by artificial intelligence in EEG signal decoding, which makes it possible to become a new means of human-machine interaction. However, the performance of current EEG decoding methods is still insufficient for clinical applications because of inadequate EEG information extraction and limited computational resources in hospitals. This paper introduces a hybrid network that employs a transformer with modified locally linear embedding and sliding window convolution for EEG decoding.<i>Approach</i>. This network separately extracts channel and temporal features from EEG signals, subsequently fusing these features using a cross-attention mechanism. Simultaneously, manifold learning is employed to lower the computational burden of the model by mapping the high-dimensional EEG data to a low-dimensional space by its dimension reduction function.<i>Main results</i>. The proposed model achieves accuracy rates of 84.44%, 94.96%, and 82.79% on the BCI Competition IV dataset 2a, high gamma dataset, and a self-constructed motor imagery (MI) dataset from the left and right hand fist-clenching tests respectively. The results indicate our model outperforms the baseline models by EEG-channel transformer with dimension-reduced EEG data and window attention with sliding window convolution. Additionally, to enhance the interpretability of the model, features preceding the temporal feature extraction network were visualized. This visualization promotes the understanding of how the model prefers task-related channels.<i>Significance</i>. The transformer-based method makes the MI-EEG decoding more practical for further clinical applications.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A hybrid network using transformer with modified locally linear embedding and sliding window convolution for EEG decoding.\",\"authors\":\"Ketong Li, Peng Chen, Qian Chen, Xiangyun Li\",\"doi\":\"10.1088/1741-2552/ada30b\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p><i>Objective</i>. Brain-computer interface(BCI) is leveraged by artificial intelligence in EEG signal decoding, which makes it possible to become a new means of human-machine interaction. However, the performance of current EEG decoding methods is still insufficient for clinical applications because of inadequate EEG information extraction and limited computational resources in hospitals. This paper introduces a hybrid network that employs a transformer with modified locally linear embedding and sliding window convolution for EEG decoding.<i>Approach</i>. This network separately extracts channel and temporal features from EEG signals, subsequently fusing these features using a cross-attention mechanism. Simultaneously, manifold learning is employed to lower the computational burden of the model by mapping the high-dimensional EEG data to a low-dimensional space by its dimension reduction function.<i>Main results</i>. 
The proposed model achieves accuracy rates of 84.44%, 94.96%, and 82.79% on the BCI Competition IV dataset 2a, high gamma dataset, and a self-constructed motor imagery (MI) dataset from the left and right hand fist-clenching tests respectively. The results indicate our model outperforms the baseline models by EEG-channel transformer with dimension-reduced EEG data and window attention with sliding window convolution. Additionally, to enhance the interpretability of the model, features preceding the temporal feature extraction network were visualized. This visualization promotes the understanding of how the model prefers task-related channels.<i>Significance</i>. The transformer-based method makes the MI-EEG decoding more practical for further clinical applications.</p>\",\"PeriodicalId\":94096,\"journal\":{\"name\":\"Journal of neural engineering\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-01-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of neural engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1088/1741-2552/ada30b\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of neural engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/1741-2552/ada30b","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A hybrid network using transformer with modified locally linear embedding and sliding window convolution for EEG decoding.
Objective. Artificial intelligence has advanced EEG signal decoding for brain-computer interfaces (BCIs), making the BCI a promising new means of human-machine interaction. However, the performance of current EEG decoding methods remains insufficient for clinical applications because of inadequate EEG information extraction and the limited computational resources available in hospitals. This paper introduces a hybrid network that employs a transformer with modified locally linear embedding and sliding window convolution for EEG decoding. Approach. The network extracts channel and temporal features from EEG signals separately and then fuses these features using a cross-attention mechanism. In parallel, manifold learning lowers the computational burden of the model by mapping the high-dimensional EEG data to a low-dimensional space. Main results. The proposed model achieves accuracy rates of 84.44%, 94.96%, and 82.79% on the BCI Competition IV dataset 2a, the high gamma dataset, and a self-constructed motor imagery (MI) dataset from left- and right-hand fist-clenching tests, respectively. The results indicate that our model outperforms the baseline models through its EEG-channel transformer operating on dimension-reduced EEG data and its window attention with sliding window convolution. Additionally, to enhance the interpretability of the model, the features preceding the temporal feature extraction network were visualized; this visualization shows how the model favors task-related channels. Significance. The transformer-based method makes MI-EEG decoding more practical for further clinical applications.
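The abstract describes two core ideas: mapping high-dimensional EEG data to a low-dimensional space via locally linear embedding, and fusing separately extracted channel and temporal features with cross-attention. The sketch below is not the authors' implementation; it is a minimal illustration assuming scikit-learn's standard (unmodified) LLE for the dimension-reduction step and a generic PyTorch MultiheadAttention for the cross-attention fusion, with purely illustrative shapes (22 channels, 1000 time samples, 64-dimensional feature tokens).

```python
# Minimal sketch (not the paper's code): standard LLE for dimension reduction
# plus a generic cross-attention fusion of channel and temporal feature streams.
import numpy as np
import torch
import torch.nn as nn
from sklearn.manifold import LocallyLinearEmbedding

# --- Step 1: reduce one EEG trial to a low-dimensional space with plain LLE ---
# (the paper uses a *modified* LLE; scikit-learn's standard version stands in here)
eeg_trial = np.random.randn(1000, 22)            # time samples x channels (assumed shape)
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=8)
eeg_low = lle.fit_transform(eeg_trial)           # 1000 x 8 low-dimensional representation

# --- Step 2: fuse channel and temporal feature tokens with cross-attention ---
class CrossAttentionFusion(nn.Module):
    """Queries come from one feature stream, keys/values from the other."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, channel_feats, temporal_feats):
        # channel_feats:  (batch, n_channel_tokens, d_model)
        # temporal_feats: (batch, n_time_tokens,    d_model)
        fused, _ = self.attn(query=channel_feats,
                             key=temporal_feats,
                             value=temporal_feats)
        return self.norm(fused + channel_feats)   # residual connection, then layer norm

# Usage with dummy feature tokens (hypothetical token counts)
batch, d_model = 2, 64
channel_feats = torch.randn(batch, 8, d_model)    # e.g. one token per reduced channel
temporal_feats = torch.randn(batch, 50, d_model)  # e.g. one token per sliding window
fusion = CrossAttentionFusion(d_model=d_model)
out = fusion(channel_feats, temporal_feats)
print(out.shape)  # torch.Size([2, 8, 64])
```

The cross-attention direction shown here (channel tokens as queries, temporal tokens as keys/values) is one plausible reading of the abstract; the actual fusion details, window attention, and sliding window convolution are described in the full paper.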