Recognizing Emotions From an Ensemble of Features
U Tariq, Kai-Hsiang Lin, Zhen Li, Xi Zhou, Zhaowen Wang, Vuong Le, T S Huang, Xutao Lv, T X Han
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, pp. 1017-1026, Aug. 2012 (Epub May 3, 2012)
DOI: 10.1109/TSMCB.2012.2194701
This paper details the authors' efforts to push the baseline of emotion recognition performance on the Geneva Multimodal Emotion Portrayals (GEMEP) Facial Expression Recognition and Analysis database. Both subject-dependent and subject-independent emotion recognition scenarios are addressed. The approach involves face detection, followed by key-point identification, feature generation, and finally classification. An ensemble of features consisting of hierarchical Gaussianization, the scale-invariant feature transform (SIFT), and some coarse motion features has been used. In the classification stage, we used support vector machines. The classification task is divided into person-specific and person-independent emotion recognition, using face recognition with either manual labels or automatic algorithms. In terms of classification rate, with manual identification of subjects, we achieve 100% on the person-specific task, 66% on the person-independent task, and 80% overall.
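The classification stage described above — concatenating several per-image feature blocks into one ensemble descriptor and training a support vector machine on the result — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature extractors are replaced by random placeholder blocks (stand-ins for the hierarchical-Gaussianization, SIFT, and motion features), the class count of 5 mirrors the GEMEP-FERA emotion set, and scikit-learn's `SVC` is used as a generic SVM.

```python
# Sketch of the ensemble-of-features classification stage.
# All feature extractors here are synthetic placeholders; only the
# concatenate-then-SVM structure reflects the paper's pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_feature_blocks(n_samples, dims=(64, 32, 8)):
    """Stand-in for the three feature types (hierarchical Gaussianization,
    SIFT, coarse motion): one random block per feature type."""
    return [rng.normal(size=(n_samples, d)) for d in dims]

n = 120
labels = rng.integers(0, 5, size=n)        # 5 emotion classes, as in GEMEP-FERA
blocks = extract_feature_blocks(n)
for b in blocks:                           # shift each class to make the toy data separable
    b += labels[:, None] * 1.5

X = np.hstack(blocks)                      # ensemble: concatenate the feature blocks
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = SVC(kernel="linear").fit(X_tr, y_tr) # SVM on the concatenated descriptor
acc = clf.score(X_te, y_te)
print(f"classification rate: {acc:.2f}")
```

On real data the individual feature blocks would come from the respective extractors after face detection and key-point alignment; concatenation lets a single SVM weigh all feature types jointly.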