{"title":"基于判别公共向量的面部表情识别","authors":"Yuan-Kai Wang, Chun-Hao Huang","doi":"10.1109/IIH-MSP.2007.403","DOIUrl":null,"url":null,"abstract":"Extracting stable features from face images is very important for automatic recognition of facial expression. In this paper, we apply a face feature extraction approach, namely discriminative common vectors, for the recognition of the six expressions including happy, sad, angry, disgust, fear and surprise. By applying discriminative common vector, we can reduce the dimensionality of image feature and classify them in a lower dimension. Then we use HMM as our classifier to find the time series information of the feature vector projected by common vector. Experimental results on the Cohn-Kanade database demonstrate the validity and efficiency of our approach.","PeriodicalId":385132,"journal":{"name":"Third International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP 2007)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Facial Expression Recognition with Discriminative Common Vector\",\"authors\":\"Yuan-Kai Wang, Chun-Hao Huang\",\"doi\":\"10.1109/IIH-MSP.2007.403\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Extracting stable features from face images is very important for automatic recognition of facial expression. In this paper, we apply a face feature extraction approach, namely discriminative common vectors, for the recognition of the six expressions including happy, sad, angry, disgust, fear and surprise. By applying discriminative common vector, we can reduce the dimensionality of image feature and classify them in a lower dimension. Then we use HMM as our classifier to find the time series information of the feature vector projected by common vector. Experimental results on the Cohn-Kanade database demonstrate the validity and efficiency of our approach.\",\"PeriodicalId\":385132,\"journal\":{\"name\":\"Third International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP 2007)\",\"volume\":\"35 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2007-11-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Third International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP 2007)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IIH-MSP.2007.403\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Third International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP 2007)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IIH-MSP.2007.403","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Facial Expression Recognition with Discriminative Common Vector
Extracting stable features from face images is essential for automatic facial expression recognition. In this paper, we apply a face feature extraction approach, discriminative common vectors (DCV), to the recognition of six expressions: happiness, sadness, anger, disgust, fear, and surprise. Discriminative common vectors reduce the dimensionality of the image features so that classification can be performed in a lower-dimensional space. We then use a hidden Markov model (HMM) as the classifier to capture the temporal dynamics of the feature sequences projected by the common vectors. Experimental results on the Cohn-Kanade database demonstrate the validity and efficiency of our approach.
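The following is a minimal sketch, not the authors' code, of the two-stage pipeline the abstract describes: a discriminative common vector projection for dimensionality reduction, followed by one Gaussian HMM per expression scored on the projected frame sequences. The array shapes, the use of numpy and hmmlearn, and all function and variable names are assumptions made for illustration.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed HMM implementation for the temporal stage


def dcv_projection(X, y):
    """Compute a DCV projection matrix W of shape (d, C-1).

    X : (n_samples, d) image feature vectors
    y : (n_samples,)   expression class labels
    """
    classes = np.unique(y)

    # 1. Within-class difference vectors x_i - x_ref for every class;
    #    their span is the range of the within-class scatter.
    diffs, refs = [], []
    for c in classes:
        Xc = X[y == c]
        refs.append(Xc[0])
        diffs.append(Xc[1:] - Xc[0])
    B = np.vstack(diffs).T                          # d x m

    # 2. Orthonormal basis of the null space of the within-class scatter
    #    (the orthogonal complement of range(B)).
    U, s, _ = np.linalg.svd(B, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    Q = U[:, rank:]                                 # d x (d - rank)

    # 3. Common vector of each class: project any class sample onto the null space
    #    (all samples of a class share the same projection there).
    commons = np.array([Q @ (Q.T @ r) for r in refs])   # C x d

    # 4. PCA on the common vectors yields at most C-1 discriminative directions.
    centered = commons - commons.mean(axis=0)
    Uc, _, _ = np.linalg.svd(centered.T, full_matrices=False)
    return Uc[:, :len(classes) - 1]                 # W: d x (C-1)


def train_expression_hmms(sequences, labels, W, n_states=3):
    """Fit one Gaussian HMM per expression on DCV-projected frame sequences."""
    models = {}
    for expr in set(labels):
        seqs = [s @ W for s, l in zip(sequences, labels) if l == expr]
        m = GaussianHMM(n_components=n_states, covariance_type="diag")
        m.fit(np.vstack(seqs), [len(s) for s in seqs])
        models[expr] = m
    return models


def classify(sequence, W, models):
    """Assign the expression whose HMM gives the highest log-likelihood."""
    z = sequence @ W
    return max(models, key=lambda expr: models[expr].score(z))
```

In this sketch each image sequence from the Cohn-Kanade database would be represented as an array of per-frame feature vectors; the DCV matrix is learned from the training frames, and a test sequence is labeled by the expression model that scores it highest.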