Emotion Recognition with the Help of Privileged Information

Shangfei Wang, Yachen Zhu, Lihua Yue, Q. Ji
IEEE Transactions on Autonomous Mental Development, vol. 7, no. 1, pp. 189-200, published 2015-07-30.
DOI: 10.1109/TAMD.2015.2463113
Citations: 33

Abstract

In this article, we propose a novel approach to recognize emotions with the help of privileged information, which is only available during training, but not available during testing. Such additional information can be exploited during training to construct a better classifier. Specifically, we recognize audience's emotion from EEG signals with the help of the stimulus videos, and tag videos' emotions with the aid of electroencephalogram (EEG) signals. First, frequency features are extracted from EEG signals and audio/visual features are extracted from video stimulus. Second, features are selected by statistical tests. Third, a new EEG feature space and a new video feature space are constructed simultaneously using canonical correlation analysis (CCA). Finally, two support vector machines (SVM) are trained on the new EEG and video feature spaces respectively. During emotion recognition from EEG, only EEG signals are available, and the SVM classifier obtained on EEG feature space is used; while for video emotion tagging, only video clips are available, and the SVM classifier constructed on video feature space is adopted. Experiments of EEG-based emotion recognition and emotion video tagging are conducted on three benchmark databases, demonstrating that video content, as the context, can improve the emotion recognition from EEG signals and EEG signals available during training can enhance emotion video tagging.