{"title":"3DFacePolicy: Speech-Driven 3D Facial Animation with Diffusion Policy","authors":"Xuanmeng Sha, Liyun Zhang, Tomohiro Mashita, Yuki Uranishi","doi":"arxiv-2409.10848","DOIUrl":null,"url":null,"abstract":"Audio-driven 3D facial animation has made immersive progress both in research\nand application developments. The newest approaches focus on Transformer-based\nmethods and diffusion-based methods, however, there is still gap in the\nvividness and emotional expression between the generated animation and real\nhuman face. To tackle this limitation, we propose 3DFacePolicy, a diffusion\npolicy model for 3D facial animation prediction. This method generates variable\nand realistic human facial movements by predicting the 3D vertex trajectory on\nthe 3D facial template with diffusion policy instead of facial generation for\nevery frame. It takes audio and vertex states as observations to predict the\nvertex trajectory and imitate real human facial expressions, which keeps the\ncontinuous and natural flow of human emotions. The experiments show that our\napproach is effective in variable and dynamic facial motion synthesizing.","PeriodicalId":501284,"journal":{"name":"arXiv - EE - Audio and Speech Processing","volume":"49 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Audio and Speech Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10848","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Audio-driven 3D facial animation has made impressive progress in both research and application development. The newest approaches focus on Transformer-based and diffusion-based methods; however, there is still a gap in vividness and emotional expression between the generated animations and real human faces. To tackle this limitation, we propose 3DFacePolicy, a diffusion policy model for 3D facial animation prediction. Rather than generating the face anew for every frame, this method produces variable and realistic facial movements by predicting the trajectory of the vertices on a 3D facial template with a diffusion policy. It takes audio and vertex states as observations to predict the vertex trajectory and imitate real human facial expressions, which preserves the continuous and natural flow of human emotions. Experiments show that our approach is effective at synthesizing variable and dynamic facial motion.
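To make the idea concrete, the sketch below shows one way a diffusion-policy-style predictor over vertex trajectories could look: a denoiser that treats a short horizon of per-vertex displacements as the "action" and conditions on audio features and the current vertex state as observations, sampled with a plain DDPM-style reverse process. Everything here (the names `VertexTrajectoryDenoiser` and `sample_trajectory`, the dimensions, the MLP backbone, the noise schedule) is an assumption for illustration only, not the authors' implementation.

```python
# Minimal, illustrative sketch of diffusion-policy-style vertex trajectory
# prediction. All names, dimensions, and architectural choices are assumed
# for illustration; they are NOT taken from the 3DFacePolicy paper.
import torch
import torch.nn as nn

NUM_VERTICES = 5023      # assumed template size (FLAME-like)
AUDIO_DIM = 768          # assumed per-frame audio embedding size
HORIZON = 8              # assumed number of future frames predicted at once
DIFFUSION_STEPS = 50     # assumed number of denoising steps


class VertexTrajectoryDenoiser(nn.Module):
    """Predicts the noise added to a vertex-displacement trajectory,
    conditioned on audio features, the current vertex state, and the
    diffusion timestep."""

    def __init__(self, hidden=512):
        super().__init__()
        act_dim = HORIZON * NUM_VERTICES * 3
        obs_dim = AUDIO_DIM + NUM_VERTICES * 3
        self.net = nn.Sequential(
            nn.Linear(act_dim + obs_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, noisy_traj, audio_feat, vertex_state, t):
        # noisy_traj: (B, HORIZON*V*3), audio_feat: (B, AUDIO_DIM),
        # vertex_state: (B, V*3), t: (B, 1) normalized diffusion timestep
        x = torch.cat([noisy_traj, audio_feat, vertex_state, t], dim=-1)
        return self.net(x)


@torch.no_grad()
def sample_trajectory(model, audio_feat, vertex_state):
    """Standard DDPM reverse process run over the trajectory ("action") space."""
    B = audio_feat.shape[0]
    act_dim = HORIZON * NUM_VERTICES * 3
    betas = torch.linspace(1e-4, 0.02, DIFFUSION_STEPS)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    traj = torch.randn(B, act_dim)  # start from pure Gaussian noise
    for step in reversed(range(DIFFUSION_STEPS)):
        t = torch.full((B, 1), step / DIFFUSION_STEPS)
        eps = model(traj, audio_feat, vertex_state, t)
        a, ab = alphas[step], alpha_bars[step]
        # Posterior mean of the previous step given the predicted noise.
        traj = (traj - (1 - a) / torch.sqrt(1 - ab) * eps) / torch.sqrt(a)
        if step > 0:
            traj = traj + torch.sqrt(betas[step]) * torch.randn_like(traj)
    # Per-frame vertex displacements to apply on top of the facial template.
    return traj.view(B, HORIZON, NUM_VERTICES, 3)


if __name__ == "__main__":
    model = VertexTrajectoryDenoiser()
    audio = torch.randn(1, AUDIO_DIM)            # one audio observation window
    state = torch.randn(1, NUM_VERTICES * 3)     # flattened current vertex state
    displacements = sample_trajectory(model, audio, state)
    print(displacements.shape)  # torch.Size([1, 8, 5023, 3])
```

Predicting a short horizon of displacements per denoising pass, rather than one full face mesh per frame, is what gives the policy formulation its temporal smoothness: consecutive frames come from a single jointly denoised trajectory instead of independent per-frame samples.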