A Process for the Semi-Automated Generation of Life-Sized, Interactive 3D Character Models for Holographic Projection

Xinyu Huang, J. Twycross, Fridolin Wild
{"title":"用于全息投影的半自动化生成真人大小的交互式3D角色模型的过程","authors":"Xinyu Huang, J. Twycross, Fridolin Wild","doi":"10.1109/IC3D48390.2019.8975993","DOIUrl":null,"url":null,"abstract":"By mixing digital data into the real world, Augmented Reality (AR) can deliver potent immersive and interactive experience to its users. In many application contexts, this requires the capability to deploy animated, high fidelity 3D character models. In this paper, we propose a novel approach to efficiently transform – using 3D scanning – an actor to a photorealistic, animated character. This generated 3D assistant must be able to move to perform recorded motion capture data, and it must be able to generate dialogue with lip sync to naturally interact with the users. The approach we propose for creating these virtual AR assistants utilizes photogrammetric scanning, motion capture, and free viewpoint video for their integration in Unity. We deploy the Occipital Structure sensor to acquire static high-resolution textured surfaces, and a Vicon motion capture system to track series of movements. The proposed capturing process consists of the steps scanning, reconstruction with Wrap 3 and Maya, editing texture maps to reduce artefacts with Photoshop, and rigging with Maya and Motion Builder to render the models fit for animation and lip-sync using LipSyncPro. We test the approach in Unity by scanning two human models with 23 captured animations each. Our findings indicate that the major factors affecting the result quality are environment setup, lighting, and processing constraints.","PeriodicalId":344706,"journal":{"name":"2019 International Conference on 3D Immersion (IC3D)","volume":"271 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A Process for the Semi-Automated Generation of Life-Sized, Interactive 3D Character Models for Holographic Projection\",\"authors\":\"Xinyu Huang, J. Twycross, Fridolin Wild\",\"doi\":\"10.1109/IC3D48390.2019.8975993\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"By mixing digital data into the real world, Augmented Reality (AR) can deliver potent immersive and interactive experience to its users. In many application contexts, this requires the capability to deploy animated, high fidelity 3D character models. In this paper, we propose a novel approach to efficiently transform – using 3D scanning – an actor to a photorealistic, animated character. This generated 3D assistant must be able to move to perform recorded motion capture data, and it must be able to generate dialogue with lip sync to naturally interact with the users. The approach we propose for creating these virtual AR assistants utilizes photogrammetric scanning, motion capture, and free viewpoint video for their integration in Unity. We deploy the Occipital Structure sensor to acquire static high-resolution textured surfaces, and a Vicon motion capture system to track series of movements. The proposed capturing process consists of the steps scanning, reconstruction with Wrap 3 and Maya, editing texture maps to reduce artefacts with Photoshop, and rigging with Maya and Motion Builder to render the models fit for animation and lip-sync using LipSyncPro. We test the approach in Unity by scanning two human models with 23 captured animations each. 
Our findings indicate that the major factors affecting the result quality are environment setup, lighting, and processing constraints.\",\"PeriodicalId\":344706,\"journal\":{\"name\":\"2019 International Conference on 3D Immersion (IC3D)\",\"volume\":\"271 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 International Conference on 3D Immersion (IC3D)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IC3D48390.2019.8975993\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on 3D Immersion (IC3D)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IC3D48390.2019.8975993","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

By mixing digital data into the real world, Augmented Reality (AR) can deliver a potent immersive and interactive experience to its users. In many application contexts, this requires the capability to deploy animated, high-fidelity 3D character models. In this paper, we propose a novel approach to efficiently transform an actor into a photorealistic, animated character using 3D scanning. The generated 3D assistant must be able to move by performing recorded motion capture data, and it must be able to deliver dialogue with lip sync in order to interact naturally with users. The approach we propose for creating these virtual AR assistants utilizes photogrammetric scanning, motion capture, and free viewpoint video, integrating them in Unity. We deploy the Occipital Structure sensor to acquire static high-resolution textured surfaces, and a Vicon motion capture system to track a series of movements. The proposed capturing process consists of the following steps: scanning; reconstruction with Wrap 3 and Maya; editing texture maps in Photoshop to reduce artefacts; and rigging with Maya and Motion Builder to render the models fit for animation and lip sync using LipSyncPro. We test the approach in Unity by scanning two human models, each with 23 captured animations. Our findings indicate that the major factors affecting result quality are environment setup, lighting, and processing constraints.
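The pipeline described above ends with the rigged, textured character being driven inside Unity, where the imported motion-capture clips and recorded dialogue are played back at runtime. As a rough illustration of that final integration step, the C# sketch below shows one plausible way to trigger a captured animation and a voice line on such a character. The class name, field names, and the simple Speak() hook are illustrative assumptions rather than the authors' actual project code (the paper uses LipSync Pro for the lip-sync component itself); only standard Unity APIs (Animator, AudioSource) are used.

using UnityEngine;

// Minimal sketch, assuming the scanned character has been imported into Unity
// with an Animator whose states correspond to the captured mocap clips.
[RequireComponent(typeof(Animator))]
[RequireComponent(typeof(AudioSource))]
public class HologramAssistant : MonoBehaviour
{
    [SerializeField] private string[] mocapStates;  // names of imported animation states, e.g. the 23 captured clips
    [SerializeField] private AudioClip greeting;    // a pre-recorded dialogue line

    private Animator animator;
    private AudioSource voice;

    private void Awake()
    {
        animator = GetComponent<Animator>();
        voice = GetComponent<AudioSource>();
    }

    // Cross-fade into one of the imported motion-capture animation states.
    public void PlayMotion(int index)
    {
        if (mocapStates == null || index < 0 || index >= mocapStates.Length)
            return;
        animator.CrossFade(mocapStates[index], 0.25f);
    }

    // Play a recorded voice line. In the paper's setup, the mouth animation
    // would come from a LipSync Pro component on the same rig, driven
    // alongside this audio; its API is not reproduced here.
    public void Speak()
    {
        voice.clip = greeting;
        voice.Play();
    }
}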