{"title":"TcT: Temporal and channel Transformer for EEG-based Emotion Recognition","authors":"Yanling Liu, Yueying Zhou, Daoqiang Zhang","doi":"10.1109/CBMS55023.2022.00072","DOIUrl":null,"url":null,"abstract":"In recent years, Electroencephalogram (EEG)-based emotion recognition has developed rapidly and gained increasing attention in the field of brain-computer interface. Relevant studies in the neuroscience domain have shown that various emotional states may activate differently in brain regions and time points. Though the EEG signals have the characteristics of high temporal resolution and strong global correlation, the low signal-to-noise ratio and much redundant information bring challenges to the fast emotion recognition. To cope with the above problem, we propose a Temporal and channel Transformer (TcT) model for emotion recognition, which is directly applied to the raw preprocessed EEG data. In the model, we propose a TcT self-attention mechanism that simultaneously captures temporal and channel dependencies. The sliding window weight sharing strategy is designed to gradually refine the features from coarse time granularity, and reduce the complexity of the attention calculation. The original signal is passed between layers through the residual structure to integrate the features of different layers. We conduct experiments on the DEAP database to verify the effectiveness of the proposed model. 
The results show that the model achieves better classification performance in less time and with fewer resources than state-of-the-art methods.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CBMS55023.2022.00072","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
In recent years, electroencephalogram (EEG)-based emotion recognition has developed rapidly and gained increasing attention in the field of brain-computer interfaces. Studies in neuroscience have shown that different emotional states activate different brain regions at different time points. Although EEG signals offer high temporal resolution and strong global correlation, their low signal-to-noise ratio and large amount of redundant information make fast emotion recognition challenging. To address this problem, we propose a Temporal and channel Transformer (TcT) model for emotion recognition that is applied directly to raw preprocessed EEG data. Within the model, we propose a TcT self-attention mechanism that captures temporal and channel dependencies simultaneously. A sliding-window weight-sharing strategy gradually refines features from a coarse temporal granularity and reduces the complexity of the attention computation. The original signal is passed between layers through residual connections to integrate the features of different layers. We conduct experiments on the DEAP database to verify the effectiveness of the proposed model. The results show that the model achieves better classification performance in less time and with fewer resources than state-of-the-art methods.
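The abstract only outlines the mechanism, but the core idea of attending over both time points and channels of an EEG segment can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration, not the authors' implementation: the identity query/key/value projections, the `tct_block` function, and the way the two attention paths are combined through a residual connection are all simplifying assumptions made here for clarity.

```python
import numpy as np

def softmax(a, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (tokens, d). Identity Q/K/V projections (assumption for brevity);
    # a real model would use learned linear projections.
    d = x.shape[-1]
    scores = softmax(x @ x.T / np.sqrt(d))  # (tokens, tokens) attention weights
    return scores @ x

def tct_block(eeg):
    # eeg: (channels, time_points).
    # Temporal path: treat each time point as a token (attend over time).
    temporal = self_attention(eeg.T).T
    # Channel path: treat each channel as a token (attend over channels).
    channel = self_attention(eeg)
    # Residual combination of the two paths (assumed form).
    return eeg + temporal + channel

rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 128))  # 32 channels, 128 samples
out = tct_block(eeg)
print(out.shape)  # (32, 128): shape is preserved, so blocks can be stacked
```

Because the block preserves the `(channels, time_points)` shape, several such blocks can be stacked, which is what makes the residual pass-through of the original signal between layers possible.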