Multimodal Emotion Recognition From EEG Signals and Facial Expressions

IF 3.6 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · IEEE Access · 2023 · pp. 33061-33068 · DOI: 10.1109/ACCESS.2023.3263670
Shuai Wang, Jingzi Qu, Yong Zhang, YiDie Zhang
Citations: 2

Abstract

Emotion recognition has attracted growing attention in recent years and is widely used in healthcare, teaching, human-computer interaction, and other fields. Human emotional features are often used to recognize different emotions, and research on multimodal emotion recognition based on the fusion of multiple features has grown rapidly. This paper proposes a deep learning model for multimodal emotion recognition based on the fusion of electroencephalogram (EEG) signals and facial expressions to achieve strong classification performance. First, a pre-trained convolutional neural network (CNN) is used to extract facial features from the facial expressions. Next, an attention mechanism is introduced to emphasize the most informative facial frame features. Then, CNNs are applied to extract spatial features from the raw EEG signals, using a local convolution kernel and a global convolution kernel to learn features from the left- and right-hemisphere channels and from all EEG channels, respectively. After feature-level fusion, the fused facial expression and EEG features are fed into a classifier for emotion recognition. Experiments were conducted on the DEAP and MAHNOB-HCI datasets to evaluate the proposed model. On the DEAP dataset, classification accuracy reaches 96.63% for valence and 97.15% for arousal; on the MAHNOB-HCI dataset, the corresponding accuracies are 96.69% and 96.26%. The experimental results show that the proposed model can effectively recognize emotions.
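The pipeline described in the abstract (attention pooling over facial frame features, "local" per-hemisphere and "global" all-channel EEG feature extraction, then feature-level fusion into a classifier) can be sketched as follows. This is a minimal NumPy illustration with random placeholder weights, not the authors' implementation: all shapes and weight matrices are assumptions, and the local/global convolutions are reduced to mean-pooled linear maps for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes (not from the paper): 10 facial frames with
# 128-d pre-trained CNN features; 32 EEG channels x 128 time samples.
face_frames = rng.standard_normal((10, 128))   # per-frame CNN features
eeg = rng.standard_normal((32, 128))           # raw EEG, channels x time

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# 1) Attention over facial frames: score each frame, then take the
#    attention-weighted sum as the facial feature vector.
w_att = rng.standard_normal(128)               # placeholder attention weights
scores = softmax(face_frames @ w_att)          # (10,) frame weights
face_feat = scores @ face_frames               # (128,) pooled facial feature

# 2) "Local" features per hemisphere and a "global" feature over all
#    channels (convolutions reduced to mean-pooled linear maps here).
left, right = eeg[:16], eeg[16:]               # assumed hemisphere split
w_local = rng.standard_normal((128, 64))
w_global = rng.standard_normal((128, 64))
local_feat = np.concatenate([left.mean(0) @ w_local,
                             right.mean(0) @ w_local])   # (128,)
global_feat = eeg.mean(0) @ w_global                     # (64,)
eeg_feat = np.concatenate([local_feat, global_feat])     # (192,)

# 3) Feature-level fusion: concatenate both modalities, then a linear
#    classifier over two classes (e.g. low/high valence).
fused = np.concatenate([face_feat, eeg_feat])            # (320,)
w_cls = rng.standard_normal((fused.size, 2))
probs = softmax(fused @ w_cls)                           # class probabilities
print(probs.shape)
```

In the paper this role is played by trained convolution kernels and an end-to-end classifier; the sketch only shows how the two feature streams meet at the fusion step.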
Source Journal
IEEE Access — COMPUTER SCIENCE, INFORMATION SYSTEMS; ENGINEERING, ELECTRICAL & ELECTRONIC
CiteScore: 9.80
Self-citation rate: 7.70%
Annual articles: 6673
Review time: 6 weeks
About the journal: IEEE Access® is a multidisciplinary, open access (OA), applications-oriented, all-electronic archival journal that continuously presents the results of original research or development across all of IEEE's fields of interest. IEEE Access will publish articles that are of high interest to readers, original, technically correct, and clearly presented. Supported by author publication charges (APC), its hallmarks are a rapid peer review and publication process with open access to all readers. Unlike IEEE's traditional Transactions or Journals, reviews are "binary", in that reviewers will either Accept or Reject an article in the form it is submitted in order to achieve rapid turnaround. Especially encouraged are submissions on: multidisciplinary topics, or applications-oriented articles and negative results that do not fit within the scope of IEEE's traditional journals; practical articles discussing new experiments or measurement techniques, and interesting solutions to engineering problems; development of new or improved fabrication or manufacturing techniques; and reviews or survey articles of new or evolving fields oriented to assist others in understanding the new area.