Multimodal Emotion Recognition From EEG Signals and Facial Expressions

IF 3.4 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | IEEE Access | Pub Date: 2023-01-01 | DOI: 10.1109/ACCESS.2023.3263670
Shuai Wang;Jingzi Qu;Yong Zhang;Yidie Zhang
IEEE Access, vol. 11, pp. 33061-33068, published 2023-01-01. PDF: https://ieeexplore.ieee.org/iel7/6287639/10005208/10089483.pdf
Citations: 4

Abstract

Emotion recognition has attracted considerable attention in recent years and is widely used in healthcare, teaching, human-computer interaction, and other fields. Human emotional features are often used to recognize different emotions, and a growing body of research addresses multimodal emotion recognition based on the fusion of multiple features. This paper proposes a deep learning model for multimodal emotion recognition that fuses electroencephalogram (EEG) signals and facial expressions to achieve strong classification performance. First, a pre-trained convolutional neural network (CNN) extracts facial features from the facial expressions. Next, an attention mechanism is introduced to select the most informative facial frame features. Then, CNNs extract spatial features from the raw EEG signals, using a local convolution kernel and a global convolution kernel to learn the features of the left- and right-hemisphere channels and of all EEG channels, respectively. After feature-level fusion, the fused facial expression and EEG features are fed into a classifier for emotion recognition. Experiments were conducted on the DEAP and MAHNOB-HCI datasets to evaluate the performance of the proposed model. Classification accuracy is 96.63% for valence and 97.15% for arousal on the DEAP dataset, and 96.69% and 96.26%, respectively, on the MAHNOB-HCI dataset. The experimental results show that the proposed model can effectively recognize emotions.
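The abstract's pipeline — attention-weighted pooling over facial frame features, an EEG feature vector, feature-level concatenation, and a classifier — can be sketched as follows. This is a minimal NumPy illustration only: all dimensions, random weights, and the linear classifier head are placeholder assumptions, not the paper's actual architecture (which uses trained CNN branches).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not specified in the abstract).
n_frames, d_face = 10, 64   # per-frame facial features from a pre-trained CNN
d_eeg = 128                 # spatial EEG features from local/global conv kernels
n_classes = 2               # e.g. high vs. low valence

# 1) Attention over facial frames: score each frame, softmax, weighted sum.
frame_feats = rng.standard_normal((n_frames, d_face))
w_att = rng.standard_normal(d_face)
scores = frame_feats @ w_att                 # one scalar score per frame
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                         # attention weights sum to 1
face_vec = alpha @ frame_feats               # (d_face,) attended facial feature

# 2) EEG branch output: stand-in for the CNN's spatial EEG features.
eeg_vec = rng.standard_normal(d_eeg)

# 3) Feature-level fusion: concatenate the two modality vectors.
fused = np.concatenate([face_vec, eeg_vec])  # (d_face + d_eeg,)

# 4) Classifier head (here a random linear layer) with softmax over classes.
W = rng.standard_normal((n_classes, fused.size))
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()
pred = int(np.argmax(probs))
```

The key design point the abstract emphasizes is that fusion happens at the feature level (step 3), before classification, rather than by averaging per-modality decisions.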
Source Journal: IEEE Access (COMPUTER SCIENCE, INFORMATION SYSTEMS; ENGINEERING, ELECTRICAL & ELECTRONIC)
CiteScore: 9.80 | Self-citation rate: 7.70% | Articles per year: 6673 | Review time: 6 weeks
Journal description: IEEE Access® is a multidisciplinary, open access (OA), applications-oriented, all-electronic archival journal that continuously presents the results of original research or development across all of IEEE's fields of interest. IEEE Access will publish articles that are of high interest to readers, original, technically correct, and clearly presented. Supported by author publication charges (APC), its hallmarks are a rapid peer review and publication process with open access to all readers. Unlike IEEE's traditional Transactions or Journals, reviews are "binary": reviewers either Accept or Reject an article in the form it is submitted, in order to achieve rapid turnaround. Especially encouraged are submissions on: multidisciplinary topics, or applications-oriented articles and negative results that do not fit within the scope of IEEE's traditional journals; practical articles discussing new experiments or measurement techniques, or interesting solutions to engineering problems; development of new or improved fabrication or manufacturing techniques; reviews or survey articles of new or evolving fields oriented to assist others in understanding the new area.