Attention-Guided Context-aware Emotional State Recognition

S. Jaiswal, Sandeep Misra, G. Nandi
DOI: 10.1109/UPCON50219.2020.9376440
Published in: 2020 IEEE 7th Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON)
Publication date: 2020-11-27
Citations: 2

Abstract

Effective communication between humans requires understanding the emotional state of one's counterpart. Predicting human emotion involves several factors, including, but not limited to, facial expression. Much research in this direction is based on features extracted from facial expressions. In this paper we propose a model that predicts human emotion by considering facial as well as contextual information. Our model not only extracts features from facial expressions but is also aware of the background context in the image. We show the relevance of contextual information to facial expression and its impact on the predicted results. Our model captures facial expressions and contextual information, with the most relevant parts boosted to extract features and use them for predicting human emotion. We use an attention model in our architecture to boost the relevant parts and to learn what to boost in order to make a relevant prediction. We have performed several experiments comparing the relevance of facial expressions in context-aware and context-free settings. Our proposed model is robust and capable of predicting emotions in real time, with accuracy improved by 8% over the state of the art, to the best of our knowledge. In addition, it is trained on a dataset that contains mostly spontaneous rather than posed images, leading to improved results.
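The abstract describes a two-stream design: features are extracted from the face and from the surrounding context, and an attention mechanism learns how much each stream should contribute to the final prediction. The following is a minimal NumPy sketch of that fusion idea, not the authors' implementation; the function names, the single learned scoring vector `w`, and the feature dimensions are all hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fuse(face_feat, context_feat, w):
    """Fuse face and context feature vectors with learned attention.

    face_feat, context_feat : (d,) feature vectors from the two streams
    w : (d,) hypothetical learned scoring vector
    Returns the fused (d,) vector and the two attention weights.
    """
    feats = np.stack([face_feat, context_feat])   # (2, d)
    scores = feats @ w                            # one relevance score per stream
    alpha = softmax(scores)                       # attention weights, sum to 1
    fused = (alpha[:, None] * feats).sum(axis=0)  # attention-weighted combination
    return fused, alpha

# Toy usage: a strongly activated face stream gets most of the weight
face = np.ones(4)
context = np.zeros(4)
fused, alpha = attention_fuse(face, context, w=np.ones(4))
```

In a full model, `alpha` would be produced by a small learned sub-network and the fused vector would feed an emotion classifier; the sketch only illustrates how attention lets the model "boost the relevant part" of each input.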