GroomCap: High-Fidelity Prior-Free Hair Capture

Yuxiao Zhou, Menglei Chai, Daoye Wang, Sebastian Winberg, Erroll Wood, Kripasindhu Sarkar, Markus Gross, Thabo Beeler
arXiv - CS - Graphics · published 2024-09-01 · doi: arxiv-2409.00831

Abstract

Despite recent advances in multi-view hair reconstruction, achieving strand-level precision remains a significant challenge due to inherent limitations in existing capture pipelines. We introduce GroomCap, a novel multi-view hair capture method that reconstructs faithful and high-fidelity hair geometry without relying on external data priors. To address the limitations of conventional reconstruction algorithms, we propose a neural implicit representation for hair volume that encodes high-resolution 3D orientation and occupancy from input views. This implicit hair volume is trained with a new volumetric 3D orientation rendering algorithm, coupled with 2D orientation distribution supervision, to effectively prevent the loss of structural information caused by undesired orientation blending. We further propose a Gaussian-based hair optimization strategy to refine the traced hair strands with a novel chained Gaussian representation, utilizing direct photometric supervision from images. Our results demonstrate that GroomCap is able to capture high-quality hair geometries that are not only more precise and detailed than existing methods but also versatile enough for a range of applications.
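To make the "chained Gaussian representation" concrete, the sketch below converts a hair-strand polyline into a chain of anisotropic Gaussians: each segment becomes one Gaussian centered at the segment midpoint, with its principal axis aligned to the segment direction and elongated along the strand. This is an illustrative reconstruction of the general idea, assuming a simple midpoint/axis parameterization; the function name, the `radius` parameter, and the scale convention are all hypothetical and not taken from the paper.

```python
import numpy as np

def strand_to_chained_gaussians(points, radius=0.001):
    """Convert a strand polyline (N, 3) into a chain of anisotropic Gaussians.

    Illustrative sketch only: each segment between consecutive vertices yields
    one Gaussian whose center is the segment midpoint and whose principal axis
    follows the segment direction. Returns (centers, axes, scales).
    """
    points = np.asarray(points, dtype=np.float64)       # (N, 3) strand vertices
    segs = points[1:] - points[:-1]                     # (N-1, 3) segment vectors
    lengths = np.linalg.norm(segs, axis=1)              # per-segment lengths
    centers = 0.5 * (points[1:] + points[:-1])          # Gaussian centers (midpoints)
    axes = segs / np.maximum(lengths[:, None], 1e-12)   # unit principal axes
    # Anisotropic scales: elongated along the strand, thin across it,
    # so neighboring Gaussians overlap into a continuous strand.
    scales = np.stack([0.5 * lengths,
                       np.full_like(lengths, radius),
                       np.full_like(lengths, radius)], axis=1)
    return centers, axes, scales

# A straight 3-vertex strand along +z yields two chained Gaussians.
centers, axes, scales = strand_to_chained_gaussians([[0, 0, 0], [0, 0, 1], [0, 0, 2]])
```

In an optimization loop like the one the abstract describes, the Gaussian centers and axes would then be refined against photometric losses while keeping the chain structure intact, so the refined Gaussians can be read back as strand geometry.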