Spatial-frequency-temporal convolutional recurrent network for olfactory-enhanced EEG emotion recognition

IF 2.7 | JCR Q2 (Biochemical Research Methods) | CAS Region 4 (Medicine) | Journal of Neuroscience Methods | Pub Date: 2022-07-01 | DOI: 10.1016/j.jneumeth.2022.109624
Mengxia Xing, Shiang Hu, Bing Wei, Zhao Lv
Citations: 6

Abstract

Background

Multimedia stimulation is an effective means of inducing emotion through brain activity. Emotion recognition from EEG signals has accordingly become a prominent research topic in affective computing.

New method

In this paper, we develop a novel odor-video elicited physiological signal database (OVPD), in which EEG signals were collected from eight participants in positive, neutral, and negative emotional states while they were stimulated by traditional video content synchronized with odors. To make full use of EEG features from different domains, we design a 3DCNN-BiLSTM model combining a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) for EEG emotion recognition. First, we transform EEG signals into 4D representations that retain spatial, frequency, and temporal information. Then, the representations are fed into the 3DCNN-BiLSTM model to recognize emotions. The CNN learns spatial and frequency information from the 4D representations, while the BiLSTM extracts forward and backward temporal dependencies.
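The 4D representation described above can be sketched as follows. Note that the 8×9 electrode grid, the five frequency bands, and the segment count are illustrative assumptions; the abstract does not specify the paper's exact preprocessing parameters.

```python
import numpy as np

def to_4d_representation(eeg, fs=200, n_segments=10,
                         bands=((1, 4), (4, 8), (8, 14), (14, 31), (31, 50)),
                         grid=(8, 9), channel_pos=None):
    """Turn raw EEG (channels x samples) into a 4D tensor of shape
    (segments x bands x grid_h x grid_w), holding per-electrode band
    power mapped onto a 2D scalp grid.

    `grid` and `channel_pos` are hypothetical: each channel index is
    assigned a (row, col) cell; a real pipeline would use a mapping
    that preserves scalp topology.
    """
    n_ch, n_samp = eeg.shape
    if channel_pos is None:  # hypothetical left-to-right grid fill
        channel_pos = [(i // grid[1], i % grid[1]) for i in range(n_ch)]
    seg_len = n_samp // n_segments
    out = np.zeros((n_segments, len(bands), grid[0], grid[1]))
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    for s in range(n_segments):
        seg = eeg[:, s * seg_len:(s + 1) * seg_len]
        psd = np.abs(np.fft.rfft(seg, axis=1)) ** 2  # power spectrum per channel
        for b, (lo, hi) in enumerate(bands):
            mask = (freqs >= lo) & (freqs < hi)
            band_power = psd[:, mask].sum(axis=1)
            for ch, (r, c) in enumerate(channel_pos):
                out[s, b, r, c] = band_power[ch]
    return out

# Example: a 62-channel, 10-second recording at 200 Hz
x = to_4d_representation(np.random.randn(62, 2000))
print(x.shape)  # (10, 5, 8, 9): segments x bands x grid rows x grid cols
```

Each segment of this tensor would then be consumed by the 3D convolutional layers (over the band and grid axes), with the segment axis providing the sequence the BiLSTM processes in both directions.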

Results

We conduct five rounds of 5-fold cross-validation on the OVPD dataset to evaluate the performance of the model. The experimental results show that the proposed model achieves an average accuracy of 98.29% with a standard deviation of 0.72% under the olfactory-enhanced video stimuli, and an average accuracy of 98.03% with a standard deviation of 0.73% under the traditional video stimuli, in the three-class classification of positive, neutral, and negative emotions. To verify the generalizability of the proposed model, we also evaluate this approach on the public EEG emotion dataset SEED.
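The evaluation protocol, 5-fold cross-validation repeated five times with mean accuracy and standard deviation reported over all folds, can be sketched generically. The classifier below is a majority-class placeholder, not the paper's 3DCNN-BiLSTM model.

```python
import numpy as np

def repeated_kfold_accuracy(X, y, train_eval_fn, k=5, repeats=5, seed=0):
    """Run k-fold cross-validation `repeats` times with fresh shuffles
    and return (mean accuracy, standard deviation) over all k*repeats folds."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(repeats):
        idx = rng.permutation(len(y))
        folds = np.array_split(idx, k)
        for f in range(k):
            test_idx = folds[f]
            train_idx = np.concatenate([folds[j] for j in range(k) if j != f])
            accs.append(train_eval_fn(X[train_idx], y[train_idx],
                                      X[test_idx], y[test_idx]))
    return float(np.mean(accs)), float(np.std(accs))

# Placeholder "classifier": always predict the training set's majority class.
def majority_baseline(Xtr, ytr, Xte, yte):
    pred = np.bincount(ytr).argmax()
    return float(np.mean(yte == pred))

X = np.random.randn(90, 8)
y = np.repeat([0, 1, 2], 30)  # three balanced emotion classes
mean_acc, std_acc = repeated_kfold_accuracy(X, y, majority_baseline)
print(f"accuracy: {mean_acc:.4f} +/- {std_acc:.4f}")
```

Reporting the standard deviation across all 25 folds, as the paper does, shows how stable the model is under different train/test partitions rather than a single lucky split.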

Comparison with existing methods

Compared with other baseline methods, the proposed model achieves better recognition performance on the OVPD dataset. For the 3DCNN-BiLSTM model and the baseline methods alike, average accuracy across positive, neutral, and negative emotions is higher in response to the olfactory-enhanced videos than to the video-only stimuli.

Conclusion

The proposed 3DCNN-BiLSTM model is effective for emotion recognition because it fuses the spatial, frequency, and temporal features of EEG signals. The olfactory stimuli induce stronger emotions than traditional video stimuli and improve the accuracy of emotion recognition to a certain extent. However, superimposing odors unrelated to the video scenes may distract participants' attention and thus reduce the final accuracy of EEG emotion recognition.

Source journal: Journal of Neuroscience Methods (Medicine, Neuroscience)
CiteScore: 7.10
Self-citation rate: 3.30%
Articles per year: 226
Review time: 52 days
Journal overview: The Journal of Neuroscience Methods publishes papers that describe new methods specifically for neuroscience research conducted in invertebrates, vertebrates, or humans. Major methodological improvements or important refinements of established neuroscience methods are also considered for publication. The Journal's scope includes all aspects of contemporary neuroscience research, including anatomical, behavioural, biochemical, cellular, computational, molecular, invasive and non-invasive imaging, optogenetic, and physiological research investigations.
Latest articles from this journal:
- Electrode configurations for sensitive and specific detection of compound muscle action potentials to the tibialis anterior muscle after peroneal nerve injury in rats.
- Enhancing fMRI quality control.
- Multi-layer transfer learning algorithm based on improved common spatial pattern for brain-computer interfaces.
- Fractal analysis to assess the differentiation state of oligodendroglia in culture.
- STSimM: A new tool for evaluating neuron model performance and detecting spike trains similarity.