Attention-Guided Context-aware Emotional State Recognition
S. Jaiswal, Sandeep Misra, G. Nandi
2020 IEEE 7th Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON), 2020-11-27
DOI: 10.1109/UPCON50219.2020.9376440
Citations: 2
Abstract
For effective communication between two humans, one needs to understand the emotional state of their fellow beings. Predicting human emotion involves several factors, including, but not limited to, facial expression. Much research in this direction is based on features extracted from facial expressions alone. In this paper we propose a model that predicts human emotion by considering both facial and contextual information. Our model not only extracts features from facial expressions but is also aware of the background context in the image. We show the relevance of contextual information to facial expression and its impact on the predicted results. Our model captures facial expressions and contextual information, boosting the most relevant parts of the extracted features and using them to predict human emotion. We use an attention mechanism in our architecture to boost the relevant parts and to learn what to boost in order to make a relevant prediction. We have performed several experiments comparing the relevance of facial expressions in context-aware and context-free settings. Our proposed model is robust and capable of predicting emotions in real time, with an accuracy improvement of 8% over the state of the art, to the best of our knowledge. In addition, it is trained and evaluated on a dataset that contains mostly spontaneous rather than posed images, leading to improved results.
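The abstract describes an attention mechanism that weighs facial features against background-context features before the emotion prediction. As a rough illustration only, the sketch below shows one common way such attention-weighted fusion of two feature streams can work: each stream is scored against a learned attention vector, the scores are normalized with a softmax, and the streams are combined by their weights. All names, the vector dimensions, and this specific fusion scheme are assumptions for illustration, not the authors' actual architecture.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: shift by the max before exponentiating.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fuse(face_feat, ctx_feat, w_att):
    """Fuse a facial feature vector and a context feature vector.

    Each stream is scored against the (hypothetical) learned attention
    vector w_att; softmax turns the scores into weights that sum to 1,
    and the fused representation is the weighted sum of the streams.
    """
    streams = np.stack([face_feat, ctx_feat])  # shape (2, d)
    scores = streams @ w_att                   # one score per stream
    weights = softmax(scores)                  # attention weights, sum to 1
    fused = weights @ streams                  # shape (d,)
    return fused, weights

# Toy example with random 8-dimensional features.
rng = np.random.default_rng(0)
d = 8
face = rng.normal(size=d)   # stand-in for facial-expression features
ctx = rng.normal(size=d)    # stand-in for background-context features
w = rng.normal(size=d)      # stand-in for a learned attention vector
fused, weights = attention_fuse(face, ctx, w)
```

In a trained model, `w_att` would be learned end to end, so the network itself discovers how much to "boost" the face versus the context for a given image; the fused vector would then feed a classifier over emotion categories.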