{"title":"动态面部情感识别从4D视频序列","authors":"P. Suja, P. KalyanKumarV., Shikha Tripathi","doi":"10.1109/IC3.2015.7346705","DOIUrl":null,"url":null,"abstract":"Emotions are characterized as responses to internal and external events of a person. Emotion recognition through facial expressions from videos plays a vital role in human computer interaction where the dynamic changes in face movements needs to be realized quickly. In this work, we propose a simple method, using the geometrical based approach for the recognition of six basic emotions in video sequences of BU-4DFE database. We have chosen optimum feature points out of the 83 feature points provided in the BU-4DFE database. A video expressing emotion will have frames containing neutral, onset, apex and offset of that emotion. We have dynamically identified the frame that is most expressive for an emotion (apex). The Euclidean distance between the feature points in apex and neutral frame is determined and their difference in corresponding neutral and the apex frame is calculated to form the feature vector. The feature vectors thus formed for all the emotions and subjects are given to Neural Networks (NN) and Support Vector Machine (SVM) with different kernels for classification. We have compared the accuracy obtained by NN & SVM. Our proposed method is simple, uses only two frames and yields good accuracy for BU-4DFE database. Very complex algorithms exist in literature using BU-4DFE database and our proposed simple method gives comparable results. It can be applied for real time implementation and kinesics in future.","PeriodicalId":217950,"journal":{"name":"2015 Eighth International Conference on Contemporary Computing (IC3)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":"{\"title\":\"Dynamic facial emotion recognition from 4D video sequences\",\"authors\":\"P. Suja, P. KalyanKumarV., Shikha Tripathi\",\"doi\":\"10.1109/IC3.2015.7346705\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Emotions are characterized as responses to internal and external events of a person. Emotion recognition through facial expressions from videos plays a vital role in human computer interaction where the dynamic changes in face movements needs to be realized quickly. In this work, we propose a simple method, using the geometrical based approach for the recognition of six basic emotions in video sequences of BU-4DFE database. We have chosen optimum feature points out of the 83 feature points provided in the BU-4DFE database. A video expressing emotion will have frames containing neutral, onset, apex and offset of that emotion. We have dynamically identified the frame that is most expressive for an emotion (apex). The Euclidean distance between the feature points in apex and neutral frame is determined and their difference in corresponding neutral and the apex frame is calculated to form the feature vector. The feature vectors thus formed for all the emotions and subjects are given to Neural Networks (NN) and Support Vector Machine (SVM) with different kernels for classification. We have compared the accuracy obtained by NN & SVM. Our proposed method is simple, uses only two frames and yields good accuracy for BU-4DFE database. Very complex algorithms exist in literature using BU-4DFE database and our proposed simple method gives comparable results. 
It can be applied for real time implementation and kinesics in future.\",\"PeriodicalId\":217950,\"journal\":{\"name\":\"2015 Eighth International Conference on Contemporary Computing (IC3)\",\"volume\":\"26 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-08-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"13\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 Eighth International Conference on Contemporary Computing (IC3)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IC3.2015.7346705\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 Eighth International Conference on Contemporary Computing (IC3)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IC3.2015.7346705","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Dynamic facial emotion recognition from 4D video sequences
Emotions are characterized as responses to a person's internal and external events. Emotion recognition from facial expressions in videos plays a vital role in human-computer interaction, where dynamic changes in facial movements need to be recognized quickly. In this work, we propose a simple, geometry-based method for recognizing the six basic emotions in video sequences from the BU-4DFE database. We select an optimal subset of the 83 feature points provided with the BU-4DFE database. A video expressing an emotion contains frames covering the neutral, onset, apex, and offset phases of that emotion, and we dynamically identify the frame that is most expressive for the emotion (the apex). Euclidean distances between the feature points are computed in both the neutral and apex frames, and the differences between the corresponding distances form the feature vector. The feature vectors for all emotions and subjects are fed to Neural Networks (NN) and Support Vector Machines (SVM) with different kernels for classification, and the accuracies obtained by NN and SVM are compared. Our proposed method is simple, uses only two frames, and yields good accuracy on the BU-4DFE database; although far more complex algorithms on BU-4DFE exist in the literature, our simple method gives comparable results. In future work it can be applied to real-time implementation and kinesics.
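To make the pipeline concrete, here is a minimal sketch of how the apex-frame selection and the distance-difference feature vector could be assembled. The helper names, the max-displacement heuristic for choosing the apex, and the use of scikit-learn's SVC are our assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def pairwise_distances(points):
    # Euclidean distances between every pair of 3D landmarks in one frame.
    # points: (n_points, 3) array of selected BU-4DFE feature points.
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

def find_apex_frame(frames, neutral):
    # Assumed heuristic: the apex is the frame whose landmarks have moved
    # farthest (in total) from the neutral frame.
    displacements = [np.linalg.norm(f - neutral) for f in frames]
    return frames[int(np.argmax(displacements))]

def feature_vector(neutral, apex):
    # Feature vector: differences of corresponding pairwise distances
    # between the apex and neutral frames.
    return pairwise_distances(apex) - pairwise_distances(neutral)

# Hypothetical usage: one feature vector per (subject, emotion) sequence.
# X_train, y_train, X_test, y_test would hold these vectors and labels.
# clf = SVC(kernel="rbf")   # the paper compares several kernels
# clf.fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)
```

Because only the neutral and apex frames enter the feature vector, the per-sequence cost is dominated by locating the apex, which is consistent with the abstract's claim that the method is simple enough for real-time use.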