3D emotional facial animation synthesis with factored conditional Restricted Boltzmann Machines
Yong Zhao, D. Jiang, H. Sahli
2015 International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 797-803
Published 2015-09-21 · DOI: 10.1109/ACII.2015.7344664
Citations: 2
Abstract
This paper presents a 3D emotional facial animation synthesis approach based on Factored Conditional Restricted Boltzmann Machines (FCRBM). Facial Action Parameters (FAPs) extracted from 2D face image sequences are used to train the FCRBM model parameters. Given an emotion label sequence and several initial FAP frames, the trained model generates the corresponding FAP sequence via Gibbs sampling, which is then used to construct an MPEG-4-compliant 3D facial animation. Emotion recognition experiments and subjective evaluation of the synthesized animations show that the proposed method produces natural facial animations that capture the dynamic process of emotions well. In addition, facial animations with smooth emotion transitions can be obtained by blending the emotion labels.
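The generation procedure described in the abstract — an emotion label gating factored three-way weights, with Gibbs sampling rolling out one FAP frame at a time from a few seed frames — can be sketched as below. This is a minimal illustration, not the paper's implementation: all dimensions, the random initial weights (a real model would be trained first), the order-2 autoregressive context, and the Gaussian-visible mean-field reconstruction are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 10 FAPs, 20 hidden units,
# 15 shared factors, 3 emotion labels, order-2 autoregressive context.
NV, NH, NF, NL, ORDER = 10, 20, 15, 3, 2

# Factored three-way weights: visible, hidden, and label loadings
# onto shared factors (untrained random values, for illustration only).
Wv = rng.normal(0, 0.1, (NV, NF))
Wh = rng.normal(0, 0.1, (NH, NF))
Wy = rng.normal(0, 0.1, (NL, NF))
# Autoregressive links from the past ORDER frames to visibles and hiddens.
Av = rng.normal(0, 0.1, (ORDER * NV, NV))
Ah = rng.normal(0, 0.1, (ORDER * NV, NH))
bv = np.zeros(NV)
bh = np.zeros(NH)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, hist, y, rng):
    """One Gibbs update of (h, v) with real-valued (Gaussian) visibles."""
    gate = y @ Wy                                  # label gates the factors
    # hidden probabilities given visibles, history, and label
    ph = sigmoid((v @ Wv) * gate @ Wh.T + hist @ Ah + bh)
    h = (rng.random(NH) < ph).astype(float)        # sample binary hiddens
    # mean-field reconstruction of the visibles (unit-variance Gaussian)
    v_mean = (h @ Wh) * gate @ Wv.T + hist @ Av + bv
    return v_mean, h

def generate(seed_frames, labels, n_gibbs=30, rng=rng):
    """Roll out a FAP sequence frame by frame, one label vector per frame."""
    frames = [f.copy() for f in seed_frames]
    for y in labels:
        hist = np.concatenate(frames[-ORDER:])     # conditioning context
        v = frames[-1].copy()
        for _ in range(n_gibbs):
            v, _ = gibbs_step(v, hist, y, rng)
        frames.append(v)
    return np.stack(frames[ORDER:])

# Usage: seed with two neutral frames, generate 5 frames of one emotion.
seed = [np.zeros(NV), np.zeros(NV)]
label = np.eye(NL)[0]                              # one-hot emotion label
seq = generate(seed, [label] * 5)                  # shape (5, NV)
```

Blending emotion labels, as the abstract mentions for smooth transitions, would amount to passing an interpolated (non-one-hot) label vector `y` at each frame, so the factor gates shift gradually from one emotion's weights to another's.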