{"title":"基于动态图卷积和时间自注意的混合网络用于基于EEG的情绪识别。","authors":"Cheng Cheng, Zikang Yu, Yong Zhang, Lin Feng","doi":"10.1109/TNNLS.2023.3319315","DOIUrl":null,"url":null,"abstract":"<p><p>The electroencephalogram (EEG) signal has become a highly effective decoding target for emotion recognition and has garnered significant attention from researchers. Its spatial topological and time-dependent characteristics make it crucial to explore both spatial information and temporal information for accurate emotion recognition. However, existing studies often focus on either spatial or temporal aspects of EEG signals, neglecting the joint consideration of both perspectives. To this end, this article proposes a hybrid network consisting of a dynamic graph convolution (DGC) module and temporal self-attention representation (TSAR) module, which concurrently incorporates the representative knowledge of spatial topology and temporal context into the EEG emotion recognition task. Specifically, the DGC module is designed to capture the spatial functional relationships within the brain by dynamically updating the adjacency matrix during the model training process. Simultaneously, the TSAR module is introduced to emphasize more valuable time segments and extract global temporal features from EEG signals. To fully exploit the interactivity between spatial and temporal information, the hierarchical cross-attention fusion (H-CAF) module is incorporated to fuse the complementary information from spatial and temporal features. Extensive experimental results on the DEAP, SEED, and SEED-IV datasets demonstrate that the proposed method outperforms other state-of-the-art methods.</p>","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"PP ","pages":""},"PeriodicalIF":10.2000,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Hybrid Network Using Dynamic Graph Convolution and Temporal Self-Attention for EEG-Based Emotion Recognition.\",\"authors\":\"Cheng Cheng, Zikang Yu, Yong Zhang, Lin Feng\",\"doi\":\"10.1109/TNNLS.2023.3319315\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The electroencephalogram (EEG) signal has become a highly effective decoding target for emotion recognition and has garnered significant attention from researchers. Its spatial topological and time-dependent characteristics make it crucial to explore both spatial information and temporal information for accurate emotion recognition. However, existing studies often focus on either spatial or temporal aspects of EEG signals, neglecting the joint consideration of both perspectives. To this end, this article proposes a hybrid network consisting of a dynamic graph convolution (DGC) module and temporal self-attention representation (TSAR) module, which concurrently incorporates the representative knowledge of spatial topology and temporal context into the EEG emotion recognition task. Specifically, the DGC module is designed to capture the spatial functional relationships within the brain by dynamically updating the adjacency matrix during the model training process. Simultaneously, the TSAR module is introduced to emphasize more valuable time segments and extract global temporal features from EEG signals. 
To fully exploit the interactivity between spatial and temporal information, the hierarchical cross-attention fusion (H-CAF) module is incorporated to fuse the complementary information from spatial and temporal features. Extensive experimental results on the DEAP, SEED, and SEED-IV datasets demonstrate that the proposed method outperforms other state-of-the-art methods.</p>\",\"PeriodicalId\":13303,\"journal\":{\"name\":\"IEEE transactions on neural networks and learning systems\",\"volume\":\"PP \",\"pages\":\"\"},\"PeriodicalIF\":10.2000,\"publicationDate\":\"2023-10-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on neural networks and learning systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1109/TNNLS.2023.3319315\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TNNLS.2023.3319315","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Hybrid Network Using Dynamic Graph Convolution and Temporal Self-Attention for EEG-Based Emotion Recognition.
Abstract:
The electroencephalogram (EEG) signal has become a highly effective decoding target for emotion recognition and has garnered significant attention from researchers. Its spatial topological and time-dependent characteristics make it crucial to exploit both spatial and temporal information for accurate emotion recognition. However, existing studies often focus on either the spatial or the temporal aspects of EEG signals, neglecting to consider the two jointly. To this end, this article proposes a hybrid network consisting of a dynamic graph convolution (DGC) module and a temporal self-attention representation (TSAR) module, which jointly incorporates representative knowledge of spatial topology and temporal context into the EEG emotion recognition task. Specifically, the DGC module captures the spatial functional relationships within the brain by dynamically updating the adjacency matrix during model training. Simultaneously, the TSAR module is introduced to emphasize the more valuable time segments and to extract global temporal features from EEG signals. To fully exploit the interplay between spatial and temporal information, the hierarchical cross-attention fusion (H-CAF) module is incorporated to fuse the complementary information from the spatial and temporal features. Extensive experimental results on the DEAP, SEED, and SEED-IV datasets demonstrate that the proposed method outperforms other state-of-the-art methods.
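The abstract gives no implementation details, but the three modules map onto standard deep-learning building blocks. The PyTorch sketches below are illustrative only: all class names, shapes, and hyperparameters (`DynamicGraphConv`, `TemporalSelfAttention`, `CrossAttentionFusion`, `num_heads`, and so on) are assumptions, not the authors' code.

A minimal sketch in the spirit of the DGC module, assuming the adjacency matrix over EEG electrodes is a trainable parameter updated by backpropagation:

```python
# Hypothetical sketch, not the authors' implementation: a graph convolution
# whose adjacency matrix over EEG electrodes is a trainable parameter, so the
# functional connectivity graph is updated dynamically during training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGraphConv(nn.Module):
    def __init__(self, num_electrodes: int, in_dim: int, out_dim: int):
        super().__init__()
        # Learnable adjacency over electrodes, initialized near the identity.
        self.adj = nn.Parameter(
            torch.eye(num_electrodes)
            + 0.01 * torch.randn(num_electrodes, num_electrodes)
        )
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_electrodes, in_dim) per-electrode features.
        # Row-normalize the learned adjacency so aggregation stays bounded.
        a = F.softmax(F.relu(self.adj), dim=-1)
        # Aggregate neighbor features, then apply a shared linear projection.
        return F.relu(self.proj(a @ x))
```

A minimal sketch in the spirit of the TSAR module, assuming segment-level features, standard multi-head self-attention across time, and mean pooling into a global temporal feature:

```python
# Hypothetical sketch, not the authors' implementation: self-attention across
# time segments, so more informative segments receive larger weights, followed
# by mean pooling into a global temporal feature.
import torch
import torch.nn as nn

class TemporalSelfAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_segments, dim) segment-level EEG features.
        out, _ = self.attn(x, x, x)   # attend across time segments
        out = self.norm(x + out)      # residual connection + layer norm
        return out.mean(dim=1)        # (batch, dim) global temporal feature
```

A minimal sketch in the spirit of the H-CAF module, assuming bidirectional cross-attention in which each stream queries the other (the hierarchical aspect described in the abstract is omitted here for brevity):

```python
# Hypothetical sketch, not the authors' implementation: bidirectional
# cross-attention between the spatial and temporal streams; each stream
# queries the other, and the attended views are concatenated.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.spatial_to_temporal = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_to_spatial = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, spatial: torch.Tensor, temporal: torch.Tensor) -> torch.Tensor:
        # spatial: (batch, n_s, dim); temporal: (batch, n_t, dim)
        s, _ = self.spatial_to_temporal(spatial, temporal, temporal)
        t, _ = self.temporal_to_spatial(temporal, spatial, spatial)
        # Pool each attended stream and concatenate the complementary views.
        return torch.cat([s.mean(dim=1), t.mean(dim=1)], dim=-1)  # (batch, 2*dim)
```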
Journal Introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.