Reading Between the Dots: Combining 3D Markers and FACS Classification for High-Quality Blendshape Facial Animation

Shridhar Ravikumar, Colin Davidson, Dmitry Kit, N. Campbell, L. Benedetti, D. Cosker
{"title":"Reading Between the Dots: Combining 3D Markers and FACS Classification for High-Quality Blendshape Facial Animation","authors":"Shridhar Ravikumar, Colin Davidson, Dmitry Kit, N. Campbell, L. Benedetti, D. Cosker","doi":"10.20380/GI2016.18","DOIUrl":null,"url":null,"abstract":"Marker based performance capture is one of the most widely used approaches for facial tracking owing to its robustness. In practice, marker based systems do not capture the performance with complete fidelity and often require subsequent manual adjustment to incorporate missing visual details. This problem persists even when using larger number of markers. Tracking a large number of markers can also quickly become intractable due to issues such as occlusion, swapping and merging of markers. We present a new approach for fitting blendshape models to motion-capture data that improves quality, by exploiting information from sparse make-up patches in the video between the markers, while using fewer markers. Our method uses a classification based approach that detects FACS Action Units and their intensities to assist the solver in predicting optimal blendshape weights while taking perceptual quality into consideration. Our classifier is independent of the performer; once trained, it can be applied to multiple performers. Given performances captured using a Head Mounted Camera (HMC), which provides 3D facial marker based tracking and corresponding video, we fit accurate, production quality blendshape models to this data resulting in high-quality animations.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"8 1","pages":"143-151"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. Graphics Interface (Conference)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.20380/GI2016.18","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Marker-based performance capture is one of the most widely used approaches for facial tracking owing to its robustness. In practice, marker-based systems do not capture the performance with complete fidelity and often require subsequent manual adjustment to incorporate missing visual details. This problem persists even when a larger number of markers is used. Tracking a large number of markers can also quickly become intractable due to issues such as occlusion, swapping, and merging of markers. We present a new approach for fitting blendshape models to motion-capture data that improves quality by exploiting information from sparse make-up patches in the video between the markers, while using fewer markers. Our method uses a classification-based approach that detects FACS Action Units and their intensities to assist the solver in predicting optimal blendshape weights while taking perceptual quality into consideration. Our classifier is independent of the performer; once trained, it can be applied to multiple performers. Given performances captured using a Head Mounted Camera (HMC), which provides 3D facial marker-based tracking and corresponding video, we fit accurate, production-quality blendshape models to this data, resulting in high-quality animations.
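
The abstract frames the core problem as predicting blendshape weights from sparse 3D markers, with a FACS Action Unit classifier guiding the solver toward perceptually better solutions. Below is a minimal sketch of that kind of solve, written as bounded least squares in which the classifier's predicted intensities act as a prior on the weights; the function name, data layout, and the quadratic prior term are illustrative assumptions, not the paper's actual formulation.

    import numpy as np
    from scipy.optimize import lsq_linear

    # Illustrative sketch only: solves for blendshape weights from sparse
    # markers, regularized toward a FACS AU classifier's predicted
    # intensities. All names and the prior term are assumptions.

    def solve_blendshape_weights(marker_pos, marker_sel, neutral, deltas,
                                 au_prior, lam=0.1):
        """Fit blendshape weights w in [0, 1] to sparse 3D markers.

        marker_pos : (m, 3) observed 3D marker positions
        marker_sel : (m,)   indices of mesh vertices tracked by markers
        neutral    : (n, 3) neutral-pose mesh vertices
        deltas     : (k, n, 3) per-blendshape vertex offsets
        au_prior   : (k,)   target weights suggested by the AU classifier
        lam        : strength of the classifier prior
        """
        k = deltas.shape[0]
        # Data term: markers constrain only the selected vertices.
        A_data = deltas[:, marker_sel, :].reshape(k, -1).T   # (3m, k)
        b_data = (marker_pos - neutral[marker_sel]).ravel()  # (3m,)
        # Prior term: pull weights toward the classifier's prediction
        # in the regions the sparse markers under-constrain.
        A_prior = np.sqrt(lam) * np.eye(k)
        b_prior = np.sqrt(lam) * au_prior
        A = np.vstack([A_data, A_prior])
        b = np.concatenate([b_data, b_prior])
        # Bounded least squares keeps weights in the valid [0, 1] range.
        return lsq_linear(A, b, bounds=(0.0, 1.0)).x

In this sketch the lambda-weighted prior is what lets the classifier "read between the dots": where the markers under-constrain a facial region (for example subtle lip or cheek shapes), the solution falls back toward the predicted AU intensities rather than an arbitrary weight combination that happens to fit the markers.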