EEG-based Emotion Recognition using Transfer Learning Based Feature Extraction and Convolutional Neural Network

Vaibhav Jadhav, Namita Tiwari, Meenu Chawla
{"title":"基于迁移学习的特征提取和卷积神经网络的基于脑电图的情感识别","authors":"Vaibhav Jadhav, Namita Tiwari, Meenu Chawla","doi":"10.1051/itmconf/20235302011","DOIUrl":null,"url":null,"abstract":"In this paper, a novel method for EEG(Electroencephalography) based emotion recognition is introduced. This method uses transfer learning to extract features from multichannel EEG signals, these features are then arranged in an 8×9 map to represent their spatial location on scalp and then we introduce a CNN model which takes in the spatial feature map and extracts spatial relations between EEG channel and finally classify the emotions. First, EEG signals are converted to spectrogram and passed through a pre-trained image classification model to get a feature vector from spectrogram of EEG. Then, feature vectors of different channels are rearranged and are presented as input to a CNN model which extracts spatial features or dependencies of channels as part of training. Finally, CNN outputs are flattened and passed through dense layer to classify between emotion classes. In this study, SEED, SEED-IV and SEED-V EEG emotion data-sets are used for classification and our method achieves best classification accuracy of 97.09% on SEED, 89.81% on SEED-IV and 88.23% on SEED-V data-set with fivefold cross validation.","PeriodicalId":433898,"journal":{"name":"ITM Web of Conferences","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"EEG-based Emotion Recognition using Transfer Learning Based Feature Extraction and Convolutional Neural Network\",\"authors\":\"Vaibhav Jadhav, Namita Tiwari, Meenu Chawla\",\"doi\":\"10.1051/itmconf/20235302011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, a novel method for EEG(Electroencephalography) based emotion recognition is introduced. This method uses transfer learning to extract features from multichannel EEG signals, these features are then arranged in an 8×9 map to represent their spatial location on scalp and then we introduce a CNN model which takes in the spatial feature map and extracts spatial relations between EEG channel and finally classify the emotions. First, EEG signals are converted to spectrogram and passed through a pre-trained image classification model to get a feature vector from spectrogram of EEG. Then, feature vectors of different channels are rearranged and are presented as input to a CNN model which extracts spatial features or dependencies of channels as part of training. Finally, CNN outputs are flattened and passed through dense layer to classify between emotion classes. 
In this study, SEED, SEED-IV and SEED-V EEG emotion data-sets are used for classification and our method achieves best classification accuracy of 97.09% on SEED, 89.81% on SEED-IV and 88.23% on SEED-V data-set with fivefold cross validation.\",\"PeriodicalId\":433898,\"journal\":{\"name\":\"ITM Web of Conferences\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ITM Web of Conferences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1051/itmconf/20235302011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ITM Web of Conferences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1051/itmconf/20235302011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In this paper, a novel method for EEG (electroencephalography) based emotion recognition is introduced. The method uses transfer learning to extract features from multichannel EEG signals; these features are then arranged in an 8×9 map that represents their spatial locations on the scalp, and a CNN model takes this spatial feature map, extracts the spatial relations between EEG channels, and finally classifies the emotions. First, each EEG signal is converted to a spectrogram and passed through a pre-trained image classification model to obtain a feature vector for that channel. Then, the feature vectors of the different channels are rearranged and presented as input to a CNN model, which learns the spatial features and dependencies of the channels as part of training. Finally, the CNN outputs are flattened and passed through a dense layer to classify the emotion classes. In this study, the SEED, SEED-IV and SEED-V EEG emotion data-sets are used for classification, and the method achieves best classification accuracies of 97.09% on SEED, 89.81% on SEED-IV and 88.23% on SEED-V with fivefold cross-validation.
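The abstract does not include an implementation, so the following is a minimal sketch of the described pipeline. The choice of ResNet-18 from torchvision as the pre-trained image model, scipy's STFT-based spectrogram with default settings, a 200 Hz sampling rate, a three-class output (as in SEED), and the placeholder `CHANNEL_GRID` montage are assumptions for illustration only; the paper's actual backbone, spectrogram parameters, and channel-to-grid mapping may differ.

```python
# Sketch of: spectrogram -> pre-trained feature extractor -> 8x9 spatial map -> CNN classifier.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram
from torchvision import models

# Hypothetical mapping: CHANNEL_GRID[row, col] = EEG channel index, or -1 for an
# empty scalp position (SEED records 62 channels; an 8x9 grid has 72 cells).
CHANNEL_GRID = -np.ones((8, 9), dtype=int)  # fill in with the real montage

# Pre-trained image model used as a frozen feature extractor (transfer learning).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()              # drop the classifier head, keep 512-d features
backbone.eval()

def channel_features(signal, fs=200):
    """Spectrogram of one EEG channel -> 512-d feature vector from the backbone."""
    _, _, sxx = spectrogram(signal, fs=fs)
    img = torch.tensor(np.log1p(sxx), dtype=torch.float32)
    img = img.unsqueeze(0).repeat(3, 1, 1)                       # replicate to 3 "RGB" channels
    img = nn.functional.interpolate(img.unsqueeze(0), size=(224, 224))
    with torch.no_grad():
        return backbone(img).squeeze(0)                          # shape: (512,)

def spatial_map(eeg, fs=200):
    """Arrange per-channel feature vectors into an 8x9 scalp map (512, 8, 9)."""
    feats = torch.zeros(512, 8, 9)
    for r in range(8):
        for c in range(9):
            ch = CHANNEL_GRID[r, c]
            if ch >= 0:
                feats[:, r, c] = channel_features(eeg[ch], fs)
    return feats

class EmotionCNN(nn.Module):
    """Small CNN over the 8x9 feature map, flattened into a dense classifier."""
    def __init__(self, in_ch=512, n_classes=3):                  # 3 classes assumed (SEED)
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(32 * 8 * 9, n_classes)

    def forward(self, x):                                        # x: (B, 512, 8, 9)
        return self.fc(self.conv(x).flatten(1))
```

In this sketch only `EmotionCNN` is trained while the ResNet-18 backbone stays frozen, which is what makes the per-channel feature extraction a transfer-learning step; the fivefold cross-validation reported above would correspond to splitting the per-trial feature maps into five folds (e.g. with sklearn's `KFold`) and training one classifier per fold.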