{"title":"Analysis and synthesis of facial expressions with hand-generated muscle actuation basis","authors":"Byoungwon Choe, Hyeongseok Ko","doi":"10.1109/CA.2001.982372","DOIUrl":null,"url":null,"abstract":"We present a performance-driven facial animation system for analyzing captured expressions to find muscle actuation and synthesizing expressions with the actuation values. A significantly different approach of our work is that we let artists sculpt the initial draft of the actuation basis: the basic facial shapes corresponding to the isolated actuation of individual muscles, instead of calculating skin surface deformation entirely, relying on mathematical models such as finite element methods. We synthesize expressions by linear combinations of the basis elements, and analyze expressions by finding the weights for the combinations. Even though the hand-generated actuation basis represents the essence of the subject's characteristic expressions, it is not accurate enough to be used in the subsequent computational procedures. We also describe an iterative algorithm to increase the accuracy of the actuation basis. The experimental results suggest that our artist-in-the-loop method produces a more predictable and controllable outcome than pure mathematical models, and thus can be a quite useful tool in animation productions.","PeriodicalId":244191,"journal":{"name":"Proceedings Computer Animation 2001. Fourteenth Conference on Computer Animation (Cat. No.01TH8596)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"58","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings Computer Animation 2001. Fourteenth Conference on Computer Animation (Cat. 
No.01TH8596)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CA.2001.982372","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 58
Abstract
We present a performance-driven facial animation system that analyzes captured expressions to recover muscle actuation values and synthesizes expressions from those values. What distinguishes our approach is that we let artists sculpt the initial draft of the actuation basis, that is, the basic facial shapes corresponding to the isolated actuation of individual muscles, rather than computing skin surface deformation entirely from mathematical models such as finite element methods. We synthesize expressions as linear combinations of the basis elements, and analyze expressions by finding the weights of those combinations. Although the hand-generated actuation basis captures the essence of the subject's characteristic expressions, it is not accurate enough for the subsequent computational procedures, so we also describe an iterative algorithm that refines the basis. Experimental results suggest that our artist-in-the-loop method produces a more predictable and controllable outcome than purely mathematical models, and can thus be a useful tool in animation production.
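The synthesis/analysis cycle the abstract describes can be sketched in a few lines: synthesis adds a weighted combination of basis displacements to a neutral face, and analysis inverts that by solving a least-squares problem for the weights. The sketch below is a minimal illustration of that idea, not the paper's actual pipeline; the matrix shapes, the random stand-in data, and the clamping of weights to [0, 1] (a muscle cannot actuate negatively or beyond full contraction) are all assumptions for the example.

```python
import numpy as np

# Hypothetical stand-in data: a "face" of 12 stacked coordinates and an
# actuation basis of 3 muscles. In the paper these would come from the
# artist-sculpted basic facial shapes, not from random numbers.
rng = np.random.default_rng(0)
n_coords, n_muscles = 12, 3
neutral = rng.standard_normal(n_coords)        # rest-pose geometry
B = rng.standard_normal((n_coords, n_muscles)) # column i = displacement of muscle i

def synthesize(weights):
    """Expression = neutral shape + linear combination of basis displacements."""
    return neutral + B @ weights

def analyze(captured):
    """Recover actuation weights from a captured expression by least squares,
    then clamp to the physically plausible range [0, 1] (an assumption)."""
    w, *_ = np.linalg.lstsq(B, captured - neutral, rcond=None)
    return np.clip(w, 0.0, 1.0)

# Round trip: weights used for synthesis are recovered by analysis.
true_w = np.array([0.2, 0.7, 0.5])
recovered = analyze(synthesize(true_w))
```

Because the example's captured expression lies exactly in the span of the basis, the least-squares solve recovers the weights exactly; real captured data would leave a residual, which is precisely why the paper refines the hand-sculpted basis iteratively.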