{"title":"基于面部动态的情绪识别","authors":"Svetoslav Nedkov, D. Dimov","doi":"10.1145/2516775.2516794","DOIUrl":null,"url":null,"abstract":"The paper proposes an accessible method for emotion recognition from facial dynamics in video streams. The emotions considered are anger, disgust, fear, happiness, sadness, surprise, and the neutral expression as well. The method is based on the Facial Action Coding System (FACS) that regards individual action units (AU) as features for the recognition of emotions. On the basis of FACS we propose an a'priori juxtaposition between the well known Candide model vertexes and the landmarks selected in each individual video frame with human face. We use a Linear Discriminant Analysis (LDA) approach to define an emotion classifier. To this end our approach is facilitated by some assumptions like the need of well defined start and peak frames for each emotion under recognition. The experiments show that the method we propose can be successfully further developed for most of the real cases of face emotion recognition.","PeriodicalId":316788,"journal":{"name":"International Conference on Computer Systems and Technologies","volume":"93 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Emotion recognition by face dynamics\",\"authors\":\"Svetoslav Nedkov, D. Dimov\",\"doi\":\"10.1145/2516775.2516794\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The paper proposes an accessible method for emotion recognition from facial dynamics in video streams. The emotions considered are anger, disgust, fear, happiness, sadness, surprise, and the neutral expression as well. The method is based on the Facial Action Coding System (FACS) that regards individual action units (AU) as features for the recognition of emotions. On the basis of FACS we propose an a'priori juxtaposition between the well known Candide model vertexes and the landmarks selected in each individual video frame with human face. We use a Linear Discriminant Analysis (LDA) approach to define an emotion classifier. To this end our approach is facilitated by some assumptions like the need of well defined start and peak frames for each emotion under recognition. 
The paper proposes an accessible method for emotion recognition from facial dynamics in video streams. The emotions considered are anger, disgust, fear, happiness, sadness, and surprise, plus the neutral expression. The method builds on the Facial Action Coding System (FACS), which treats individual action units (AUs) as features for emotion recognition. On this basis we propose an a priori correspondence between the vertices of the well-known Candide model and the landmarks detected in each video frame containing a human face. An emotion classifier is then defined using Linear Discriminant Analysis (LDA). The approach relies on several simplifying assumptions, such as the availability of well-defined start (neutral) and peak frames for each emotion being recognized. Experiments indicate that the proposed method can be further developed to handle most real-world cases of facial emotion recognition.
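To make the classification stage of the abstract concrete, the sketch below shows how an LDA classifier over start-to-peak landmark displacements could look. This is not the authors' code: the feature definition (flattened (x, y) displacements between the neutral start frame and the expression peak frame), the 68-landmark layout, and the synthetic data are illustrative assumptions; only the use of LDA and of start/peak frames comes from the abstract.

```python
# Minimal sketch of the LDA classification step, assuming landmark tracking
# (Candide-vertex-to-landmark correspondence) has already produced per-frame
# landmark coordinates. Synthetic data stands in for real feature extraction.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def displacement_features(start_landmarks, peak_landmarks):
    """Flatten per-landmark (x, y) displacements between the start and peak frames."""
    return (peak_landmarks - start_landmarks).reshape(-1)

# Hypothetical dataset: 68 landmarks per frame, 700 labelled video sequences.
rng = np.random.default_rng(0)
n_samples, n_landmarks = 700, 68
start = rng.normal(size=(n_samples, n_landmarks, 2))          # neutral start frames
peak = start + rng.normal(scale=0.1, size=start.shape)        # expression peak frames
X = np.stack([displacement_features(s, p) for s, p in zip(start, peak)])
y = rng.integers(len(EMOTIONS), size=n_samples)               # emotion labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LinearDiscriminantAnalysis()   # the LDA classifier named in the abstract
clf.fit(X_train, y_train)
print("held-out accuracy on synthetic data:", clf.score(X_test, y_test))
```

With real AU-derived features in place of the random arrays, the same fit/predict interface would yield one of the seven emotion labels per start-peak frame pair.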