{"title":"基于深度特征的表情不变关联因子分析在情感识别中的应用","authors":"Sarasi Munasinghe, C. Fookes, S. Sridharan","doi":"10.1109/BTAS.2017.8272741","DOIUrl":null,"url":null,"abstract":"Video-based facial expression recognition is an open research challenge not solved by the current state-of-the-art. On the other hand, static image based emotion recognition is highly important when videos are not available and human emotions need to be determined from a single shot only. This paper proposes sequential-based and image-based tied factor analysis frameworks with a deep network that simultaneously addresses these two problems. For video-based data, we first extract deep convolutional temporal appearance features from image sequences and then these features are fed into a generative model that constructs a low-dimensional observed space for all individuals, depending on the facial expression sequences. After learning the sequential expression components of the transition matrices among the expression manifolds, we use a Gaussian probabilistic approach to design an efficient classifier for temporal facial expression recognition. Furthermore, we analyse the utility of proposed video-based methods for image-based emotion recognition learning static tied factor analysis parameters. Meanwhile, this model can be used to predict the expressive face image sequences from given neutral faces. Recognition results achieved on three public benchmark databases: CK+, JAFFE, and FER2013, clearly indicate our approach achieves effective performance over the current techniques of handling sequential and static facial expression variations.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Deep features-based expression-invariant tied factor analysis for emotion recognition\",\"authors\":\"Sarasi Munasinghe, C. Fookes, S. Sridharan\",\"doi\":\"10.1109/BTAS.2017.8272741\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Video-based facial expression recognition is an open research challenge not solved by the current state-of-the-art. On the other hand, static image based emotion recognition is highly important when videos are not available and human emotions need to be determined from a single shot only. This paper proposes sequential-based and image-based tied factor analysis frameworks with a deep network that simultaneously addresses these two problems. For video-based data, we first extract deep convolutional temporal appearance features from image sequences and then these features are fed into a generative model that constructs a low-dimensional observed space for all individuals, depending on the facial expression sequences. After learning the sequential expression components of the transition matrices among the expression manifolds, we use a Gaussian probabilistic approach to design an efficient classifier for temporal facial expression recognition. Furthermore, we analyse the utility of proposed video-based methods for image-based emotion recognition learning static tied factor analysis parameters. Meanwhile, this model can be used to predict the expressive face image sequences from given neutral faces. 
Recognition results achieved on three public benchmark databases: CK+, JAFFE, and FER2013, clearly indicate our approach achieves effective performance over the current techniques of handling sequential and static facial expression variations.\",\"PeriodicalId\":372008,\"journal\":{\"name\":\"2017 IEEE International Joint Conference on Biometrics (IJCB)\",\"volume\":\"84 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE International Joint Conference on Biometrics (IJCB)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/BTAS.2017.8272741\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Joint Conference on Biometrics (IJCB)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BTAS.2017.8272741","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep features-based expression-invariant tied factor analysis for emotion recognition
Video-based facial expression recognition is an open research challenge that remains unsolved by the current state of the art. On the other hand, static image-based emotion recognition is highly important when video is unavailable and human emotion must be determined from a single shot. This paper proposes sequence-based and image-based tied factor analysis frameworks, coupled with a deep network, that address these two problems simultaneously. For video data, we first extract deep convolutional temporal appearance features from image sequences; these features are then fed into a generative model that constructs a low-dimensional observation space for all individuals, conditioned on the facial expression sequences. After learning the sequential expression components of the transition matrices among the expression manifolds, we use a Gaussian probabilistic approach to design an efficient classifier for temporal facial expression recognition. Furthermore, we analyse the utility of the proposed video-based methods for image-based emotion recognition by learning static tied factor analysis parameters. The model can also predict expressive face image sequences from given neutral faces. Recognition results on three public benchmark databases (CK+, JAFFE, and FER2013) clearly indicate that our approach outperforms current techniques for handling sequential and static facial expression variations.
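To make the classification step concrete, the sketch below implements a Gaussian likelihood classifier over per-expression tied factor models, assuming the standard tied factor analysis formulation: an observed deep feature x is generated as x = A_e h + m_e + noise, with the loading matrix A_e and offset m_e tied across individuals for each expression e. This is a minimal sketch, not the authors' implementation; the class name, the closed-form PCA-style fit used in place of full EM updates, and the isotropic noise variance are all illustrative assumptions.

```python
# Minimal sketch: Gaussian classification over tied factor models.
# Assumes x = A_e @ h + m_e + noise per expression e (standard tied
# factor analysis); all names and the PCA-style fit are illustrative.
import numpy as np

class TiedFactorExpressionClassifier:
    def __init__(self, n_factors=32, noise_var=1e-2):
        self.n_factors = n_factors
        self.noise_var = noise_var      # isotropic noise (assumption)
        self.params = {}                # expression label -> (A_e, m_e)

    def fit(self, feats_by_expr):
        """feats_by_expr: dict mapping expression label -> (N, D) array
        of deep CNN features. Each A_e is estimated by PCA on the
        per-expression residuals, a closed-form stand-in for the EM
        updates of full tied factor analysis."""
        for e, X in feats_by_expr.items():
            m_e = X.mean(axis=0)
            R = X - m_e
            # Leading principal directions, scaled by singular values,
            # serve as the factor loading matrix A_e.
            U, s, _ = np.linalg.svd(R.T, full_matrices=False)
            k = min(self.n_factors, len(s))
            A_e = U[:, :k] * (s[:k] / np.sqrt(len(X)))
            self.params[e] = (A_e, m_e)

    def log_likelihood(self, x, e):
        """Gaussian marginal of the factor model:
        x ~ N(m_e, A_e A_e^T + sigma^2 I)."""
        A, m = self.params[e]
        D = len(m)
        cov = A @ A.T + self.noise_var * np.eye(D)
        diff = x - m
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff)
                       + D * np.log(2 * np.pi))

    def predict(self, x):
        # Assign the expression whose Gaussian model scores highest.
        return max(self.params, key=lambda e: self.log_likelihood(x, e))
```

Under this same model, the neutral-to-expressive prediction mentioned in the abstract amounts to inferring the latent identity factor h from a neutral face's features and re-projecting it through another expression's tied parameters (A_e, m_e).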