Zhongsi Tang, Jiahao Geng, Yanlin Weng, Youyi Zheng, Kun Zhou
Single-View 3D Hair Modeling with Clumping Optimization
IEEE Transactions on Visualization and Computer Graphics, published 2025-03-20
DOI: 10.1109/TVCG.2025.3552919
Citations: 0
Abstract
Deep learning advancements have enabled the generation of visually plausible hair geometry from a single image, but the results still do not meet the realism required for further applications (e.g., high-quality hair rendering and simulation). One of the essential elements missing from previous single-view hair reconstruction methods is the clumping effect of hair, which is influenced by scalp secretions and oils and is a key ingredient for high-quality hair rendering and simulation. Inspired by common practices in industrial production, which simulate realistic hair clumping by allowing artists to adjust clumping parameters, we aim to integrate these clumping effects into single-view hair reconstruction. We introduce a hierarchical hair representation that incorporates a clumping modifier into the guide-hair and skinning-based hair expressions. This representation uses guide strands and skinning weights to express the basic geometric structure of the hair, while the clumping modifier expresses more detailed and realistic clumping effects. Based on this representation, we design a fully differentiable framework integrating a neural measurement of clumping and a line-based rasterization renderer to iteratively solve for guide strand positions and clumping parameters. Our method demonstrates superior performance both qualitatively and quantitatively compared to state-of-the-art techniques.
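To make the representation concrete, the sketch below illustrates the two ingredients the abstract names: dense strands interpolated from guide strands via skinning weights, and a clumping modifier that blends each strand toward a clump center with strength increasing along the strand (a common artist-facing parameterization). This is a minimal illustration under assumed array shapes and a simple power-law clump profile, not the paper's actual formulation; the function names, `clump_ids` assignment, and `profile_exp` parameter are all hypothetical.

```python
import numpy as np

def skin_strands(guide_strands, skin_weights):
    """Interpolate dense strands from guide strands.

    guide_strands: (G, P, 3) array of G guide curves with P points each.
    skin_weights:  (N, G) per-strand weights over the guides (rows sum to 1).
    Returns (N, P, 3) dense strands.
    """
    return np.einsum('ng,gpc->npc', skin_weights, guide_strands)

def apply_clumping(strands, clump_ids, clump_strength, profile_exp=2.0):
    """Pull each strand toward its clump-center strand.

    strands:        (N, P, 3) dense strands.
    clump_ids:      (N,) index of the clump-center strand for each strand.
    clump_strength: scalar in [0, 1]; 0 leaves strands untouched.
    profile_exp:    shapes the root-to-tip blend profile (roots stay fixed).
    """
    n_points = strands.shape[1]
    # Blend factor grows from 0 at the root to clump_strength at the tip.
    t = np.linspace(0.0, 1.0, n_points) ** profile_exp
    centers = strands[clump_ids]                      # (N, P, 3)
    blend = (clump_strength * t)[None, :, None]       # (1, P, 1)
    return (1.0 - blend) * strands + blend * centers
```

Because both steps are built from differentiable array operations (weighted sums and blends), gradients can flow from a rendered image back to the guide strand positions and clumping parameters, which is the property the paper's iterative optimization relies on.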