{"title":"Emotion space for analysis and synthesis of facial expression","authors":"S. Morishima, H. Harashima","doi":"10.1109/ROMAN.1993.367724","DOIUrl":null,"url":null,"abstract":"This paper presents a new emotion model which gives a criteria to decide human's emotion condition from the face image. Our final goal is to realize very natural and user-friendly human-machine communication environment by giving a face to computer terminal or communication system which can also understand the user's emotion condition. So it is necessary for the emotion model to express emotional meanings of a parameterized face expression and its motion quantitatively. Our emotion model is based on 5-layered neural network which has generalization and nonlinear mapping performance. Both input and output layer has the same number of units. So identity mapping can be realized and emotion space can be constructed in the middle-layer (3rd layer). The mapping from input layer to middle layer means emotion recognition and that from middle layer to output layer corresponds to expression synthesis from the emotion value. Training is performed by typical 13 emotion patterns which are expressed by expression parameters. Subjective test of this emotion space proves the propriety of this model. The facial action coding system is selected as an efficient criteria to describe delicate face expression and motion.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"38","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ROMAN.1993.367724","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 38
Abstract
This paper presents a new emotion model that provides a criterion for estimating a person's emotional state from a face image. Our ultimate goal is to realize a natural and user-friendly human-machine communication environment by giving a face to a computer terminal or communication system that can also understand the user's emotional state. The emotion model must therefore express the emotional meaning of a parameterized facial expression and its motion quantitatively. Our emotion model is based on a five-layer neural network, which offers generalization and nonlinear mapping capability. The input and output layers have the same number of units, so an identity mapping can be realized and an emotion space constructed in the middle (third) layer. The mapping from the input layer to the middle layer corresponds to emotion recognition, and the mapping from the middle layer to the output layer corresponds to expression synthesis from an emotion value. Training is performed on 13 typical emotion patterns, each expressed by expression parameters. A subjective test of this emotion space confirms the validity of the model. The facial action coding system is adopted as an efficient means of describing subtle facial expressions and motion.
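The architecture described in the abstract is what would now be called a bottleneck autoencoder: the network is trained to reproduce its input, and the narrow third layer becomes the emotion space. Below is a minimal sketch of that idea in PyTorch, under stated assumptions: the paper does not give unit counts, so the 17-parameter input/output width, the 10-unit hidden layers, the 3-dimensional emotion space, and the random stand-in training patterns are all illustrative, not the authors' actual configuration.

```python
# Sketch of a 5-layer identity-mapping network as described in the abstract.
# Assumed (hypothetical) sizes; the paper does not specify them.
import torch
import torch.nn as nn

N_PARAMS = 17   # assumed number of facial expression parameters
N_EMOTION = 3   # assumed dimensionality of the emotion space (3rd layer)

class EmotionSpace(nn.Module):
    """Five-layer autoencoder: layers 1-3 recognize, layers 3-5 synthesize."""
    def __init__(self):
        super().__init__()
        # input (1st) -> hidden (2nd) -> emotion space (3rd)
        self.recognize = nn.Sequential(
            nn.Linear(N_PARAMS, 10), nn.Sigmoid(),
            nn.Linear(10, N_EMOTION), nn.Sigmoid(),
        )
        # emotion space (3rd) -> hidden (4th) -> output (5th)
        self.synthesize = nn.Sequential(
            nn.Linear(N_EMOTION, 10), nn.Sigmoid(),
            nn.Linear(10, N_PARAMS), nn.Sigmoid(),
        )

    def forward(self, x):
        emotion = self.recognize(x)               # emotion recognition
        return self.synthesize(emotion), emotion  # expression synthesis

# Train the identity mapping on 13 prototype expression patterns.
# Random stand-ins here; the paper uses FACS-derived parameter vectors.
model = EmotionSpace()
patterns = torch.rand(13, N_PARAMS)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(2000):
    optimizer.zero_grad()
    reconstruction, _ = model(patterns)
    loss = loss_fn(reconstruction, patterns)
    loss.backward()
    optimizer.step()
```

After training, the third-layer activations place any new parameterized expression in the learned emotion space (recognition), and decoding a point of that space back through the last two layers yields expression parameters for synthesis, which is the bidirectional use the abstract describes.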