Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition.

Computational Intelligence and Neuroscience · JCR Q1 (Mathematics) · Pub Date: 2017-01-01 (Epub: 2017-09-19) · DOI: 10.1155/2017/2107451
Yongrui Huang, Jianhao Yang, Pengkai Liao, Jiahui Pan
Citations: 82

Abstract

This paper proposes two multimodal fusion methods between brain and peripheral signals for emotion recognition. The input signals are electroencephalogram (EEG) and facial expression. The stimuli are based on a subset of movie clips that correspond to four specific areas of the valence-arousal emotional space (happiness, neutral, sadness, and fear). For facial expression detection, the four basic emotion states (happiness, neutral, sadness, and fear) are detected by a neural network classifier. For EEG detection, the four basic emotion states and three emotion intensity levels (strong, ordinary, and weak) are detected by two support vector machine (SVM) classifiers, respectively. Emotion recognition is based on two decision-level fusion methods that combine the EEG and facial expression detections using a sum rule or a production rule. Twenty healthy subjects participated in two experiments. The results show that the accuracies of the two multimodal fusion detections are 81.25% and 82.75%, respectively, both higher than that of facial expression detection (74.38%) or EEG detection (66.88%) alone. Combining facial expression and EEG information for emotion recognition compensates for their defects as single information sources.
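The decision-level fusion described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each modality's classifier outputs a posterior probability distribution over the four emotion states, and the example probability values are invented for demonstration.

```python
# Decision-level fusion of two modality classifiers (sketch).
# Assumes each classifier yields a probability distribution over
# the four emotion states; the numbers below are illustrative only.

EMOTIONS = ["happiness", "neutral", "sadness", "fear"]

def fuse_sum(p_face, p_eeg):
    """Sum rule: average the two posteriors, then pick the argmax."""
    fused = [(a + b) / 2.0 for a, b in zip(p_face, p_eeg)]
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]

def fuse_product(p_face, p_eeg):
    """Product rule: multiply the posteriors elementwise, then argmax."""
    fused = [a * b for a, b in zip(p_face, p_eeg)]
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]

# Hypothetical outputs from the facial-expression and EEG classifiers:
p_face = [0.55, 0.25, 0.15, 0.05]
p_eeg = [0.30, 0.40, 0.20, 0.10]

print(fuse_sum(p_face, p_eeg))      # prints "happiness"
print(fuse_product(p_face, p_eeg))  # prints "happiness"
```

Since the argmax is unaffected by a positive constant factor, dividing by 2 in the sum rule (or normalizing the product) does not change the decision; the two rules can still disagree when one modality is very confident and the other is not.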

Journal
Computational Intelligence and Neuroscience (Mathematical & Computational Biology; Neurosciences)
Self-citation rate: 0.00%
Articles published: 3236
Review time: 20 weeks
Journal description: The journal publishes research and review papers at an interdisciplinary level, with intelligent systems for computational neuroscience as its focus. This field includes artificial intelligence; models and computational theories of human cognition, perception, and motivation; brain models; and artificial neural networks and neural computing. All items relevant to building theoretical and practical systems are within its scope, including contributions on applicable neural network theory, supervised and unsupervised learning methods, algorithms, architectures, performance measures, applied statistics, software simulations, hardware implementations, benchmarks, system engineering and integration, and innovative applications.