Mesh representation matters: investigating the influence of different mesh features on perceptual and spatial fidelity of deep 3D morphable models

Virtual Reality & Intelligent Hardware (Q1, Computer Science) · Pub Date: 2024-10-01 · DOI: 10.1016/j.vrih.2024.08.006

Abstract

Background

Deep 3D morphable models (deep 3DMMs) play an essential role in computer vision. They are used in facial synthesis, compression, reconstruction and animation, avatar creation, virtual try-on, facial recognition systems and medical imaging. These applications require high spatial and perceptual quality of the synthesised meshes. Despite their significance, these models have not been systematically compared across different mesh representations, nor evaluated jointly with point-wise distance and perceptual metrics.

Methods

We compare the influence of different mesh representation features, across various deep 3DMMs, on the spatial and perceptual fidelity of the reconstructed meshes. This paper supports the hypothesis that building deep 3DMMs from global mesh representations leads to lower spatial reconstruction error, measured with L1- and L2-norm metrics, but underperforms on perceptual metrics. In contrast, differential mesh representations, which describe differential surface properties, yield lower perceptual FMPD and DAME scores but higher spatial fidelity error. The influence of mesh feature normalisation and standardisation is also compared and analysed from both perceptual and spatial fidelity perspectives.
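The contrast above can be made concrete with a minimal sketch (not the paper's code, and all function names are illustrative): a "global" representation stores raw vertex positions, while a simple "differential" one stores uniform-Laplacian delta coordinates (each vertex minus the mean of its one-ring neighbours); the spatial metrics are per-vertex L1/L2 distances, and standardisation rescales features to zero mean and unit variance.

```python
import numpy as np

def delta_coordinates(verts, neighbors):
    """Uniform-Laplacian (delta) coordinates: each vertex minus the mean
    of its one-ring neighbours -- a simple differential representation."""
    delta = np.empty_like(verts)
    for i, ring in enumerate(neighbors):
        delta[i] = verts[i] - verts[ring].mean(axis=0)
    return delta

def spatial_errors(original, reconstructed):
    """Mean per-vertex L1 and L2 distances between two meshes with
    identical topology -- the point-wise spatial fidelity metrics."""
    diff = original - reconstructed
    l1 = np.abs(diff).sum(axis=1).mean()
    l2 = np.linalg.norm(diff, axis=1).mean()
    return l1, l2

def standardise(features):
    """Zero-mean, unit-variance standardisation of per-vertex features
    (one alternative to min-max normalisation)."""
    mu, sigma = features.mean(axis=0), features.std(axis=0) + 1e-8
    return (features - mu) / sigma
```

Perceptual metrics such as FMPD and DAME additionally weight errors by local surface properties (e.g. curvature), which is why they can disagree with the plain L1/L2 distances computed here.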

Results

The results presented in this paper provide guidance for selecting mesh representations to build deep 3DMMs according to spatial and perceptual quality objectives, and propose combinations of mesh representations and deep 3DMMs that improve either the perceptual or the spatial fidelity of existing methods.