{"title":"基于迁移学习的特征提取和卷积神经网络的基于脑电图的情感识别","authors":"Vaibhav Jadhav, Namita Tiwari, Meenu Chawla","doi":"10.1051/itmconf/20235302011","DOIUrl":null,"url":null,"abstract":"In this paper, a novel method for EEG(Electroencephalography) based emotion recognition is introduced. This method uses transfer learning to extract features from multichannel EEG signals, these features are then arranged in an 8×9 map to represent their spatial location on scalp and then we introduce a CNN model which takes in the spatial feature map and extracts spatial relations between EEG channel and finally classify the emotions. First, EEG signals are converted to spectrogram and passed through a pre-trained image classification model to get a feature vector from spectrogram of EEG. Then, feature vectors of different channels are rearranged and are presented as input to a CNN model which extracts spatial features or dependencies of channels as part of training. Finally, CNN outputs are flattened and passed through dense layer to classify between emotion classes. In this study, SEED, SEED-IV and SEED-V EEG emotion data-sets are used for classification and our method achieves best classification accuracy of 97.09% on SEED, 89.81% on SEED-IV and 88.23% on SEED-V data-set with fivefold cross validation.","PeriodicalId":433898,"journal":{"name":"ITM Web of Conferences","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"EEG-based Emotion Recognition using Transfer Learning Based Feature Extraction and Convolutional Neural Network\",\"authors\":\"Vaibhav Jadhav, Namita Tiwari, Meenu Chawla\",\"doi\":\"10.1051/itmconf/20235302011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, a novel method for EEG(Electroencephalography) based emotion recognition is introduced. This method uses transfer learning to extract features from multichannel EEG signals, these features are then arranged in an 8×9 map to represent their spatial location on scalp and then we introduce a CNN model which takes in the spatial feature map and extracts spatial relations between EEG channel and finally classify the emotions. First, EEG signals are converted to spectrogram and passed through a pre-trained image classification model to get a feature vector from spectrogram of EEG. Then, feature vectors of different channels are rearranged and are presented as input to a CNN model which extracts spatial features or dependencies of channels as part of training. Finally, CNN outputs are flattened and passed through dense layer to classify between emotion classes. 
In this study, SEED, SEED-IV and SEED-V EEG emotion data-sets are used for classification and our method achieves best classification accuracy of 97.09% on SEED, 89.81% on SEED-IV and 88.23% on SEED-V data-set with fivefold cross validation.\",\"PeriodicalId\":433898,\"journal\":{\"name\":\"ITM Web of Conferences\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ITM Web of Conferences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1051/itmconf/20235302011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ITM Web of Conferences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1051/itmconf/20235302011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In this paper, a novel method for EEG (electroencephalography) based emotion recognition is introduced. The method uses transfer learning to extract features from multichannel EEG signals; these features are arranged in an 8×9 map that reflects their spatial locations on the scalp, and a CNN model then takes this spatial feature map as input, extracts spatial relations between EEG channels, and finally classifies the emotions. First, the EEG signal of each channel is converted to a spectrogram and passed through a pre-trained image classification model to obtain a feature vector. Then, the feature vectors of the different channels are rearranged and presented as input to a CNN model, which learns spatial features and inter-channel dependencies as part of training. Finally, the CNN outputs are flattened and passed through a dense layer to classify the emotion classes. In this study, the SEED, SEED-IV and SEED-V EEG emotion data sets are used for classification; with fivefold cross-validation, the method achieves best classification accuracies of 97.09% on SEED, 89.81% on SEED-IV and 88.23% on SEED-V.
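The abstract only outlines the pipeline, so the following is a minimal sketch of how such a system could be wired together, assuming PyTorch/torchvision with a ResNet-18 backbone as the pre-trained image model, SciPy spectrograms, 512-dimensional per-channel features, and a hypothetical `layout` dictionary mapping channel names to positions in the 8×9 scalp grid. None of these specifics (backbone choice, feature size, CNN depth, sampling rate) are stated in the abstract; they are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the ResNet-18 backbone, 200 Hz sampling rate, 512-d
# features and the small CNN below are assumptions, not the paper's settings.
from typing import Dict, Tuple

import numpy as np
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF
from scipy.signal import spectrogram
from torchvision import models

# Pre-trained image classifier reused as a frozen feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()   # drop the ImageNet head, keep the 512-d embedding
backbone.eval()


def channel_feature(signal_1d: np.ndarray, fs: int = 200) -> torch.Tensor:
    """Spectrogram of one EEG channel -> feature vector from the pre-trained CNN."""
    _, _, sxx = spectrogram(signal_1d, fs=fs)
    sxx = np.log1p(sxx)
    sxx = (sxx - sxx.min()) / (sxx.max() - sxx.min() + 1e-8)   # scale to [0, 1]
    img = torch.from_numpy(np.repeat(sxx[None, ...], 3, axis=0)).float()
    img = TF.resize(img, [224, 224], antialias=True)           # treat as an RGB image
    with torch.no_grad():
        return backbone(img.unsqueeze(0)).squeeze(0)            # shape: (512,)


def build_spatial_map(features: Dict[str, torch.Tensor],
                      layout: Dict[str, Tuple[int, int]],
                      feat_dim: int = 512) -> torch.Tensor:
    """Place each channel's feature vector at its (row, col) position in the 8x9 grid.

    `layout` is a hypothetical mapping such as {"FP1": (0, 3), ...}; cells with no
    electrode stay zero, since a 62-channel montage does not fill all 72 positions.
    """
    grid = torch.zeros(feat_dim, 8, 9)
    for name, (r, c) in layout.items():
        grid[:, r, c] = features[name]
    return grid


class SpatialEmotionCNN(nn.Module):
    """Small CNN over the 8x9 map: conv layers for inter-channel spatial relations,
    then flatten and a dense layer over the emotion classes (3 for SEED)."""

    def __init__(self, feat_dim: int = 512, n_classes: int = 3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(feat_dim, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 8 * 9, n_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, feat_dim, 8, 9)
        return self.head(self.conv(x))
```

The convolutions operate over the 2-D electrode grid rather than over time, which is how the abstract's "spatial relations between EEG channels" would be captured; the exact layer configuration used by the authors is not given.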
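The reported accuracies come from fivefold cross-validation. Below is a hedged sketch of that evaluation protocol using scikit-learn's KFold; a LogisticRegression on flattened feature maps stands in for the CNN classifier purely so the loop runs end to end, and the random data is a placeholder, not SEED.

```python
# Fivefold cross-validation sketch; LogisticRegression and the random data are
# stand-ins so the loop is runnable, not the method or data from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold


def fivefold_best_accuracy(X: np.ndarray, y: np.ndarray) -> float:
    """Train on four folds, test on the held-out fold, return the best fold accuracy."""
    accs = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                     random_state=0).split(X):
        clf = LogisticRegression(max_iter=1000)        # placeholder classifier
        clf.fit(X[train_idx], y[train_idx])
        accs.append(clf.score(X[test_idx], y[test_idx]))
    return max(accs)


if __name__ == "__main__":
    # Random stand-in data: 300 samples of flattened 8x9 maps, 3 emotion labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 72))
    y = rng.integers(0, 3, size=300)
    print(f"best fold accuracy: {fivefold_best_accuracy(X, y):.3f}")
```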