Existing image enhancement algorithms often fail to effectively address issues of visual disbalance, such as brightness unevenness and color distortion, in low-light images. To overcome these challenges, we propose a TransISP-based image enhancement method specifically designed for low-light images. To mitigate color distortion, we design dual encoders based on decoupled representation learning, which enable complete decoupling of the reflection and illumination components, thereby preventing mutual interference during the image enhancement process. To address brightness unevenness, we introduce CNNformer, a hybrid model combining CNN and Transformer. This model efficiently captures local details and long-distance dependencies between pixels, contributing to the enhancement of brightness features across various local regions. Additionally, we integrate traditional image signal processing algorithms to achieve efficient color correction and denoising of the reflection component. Furthermore, we employ a generative adversarial network (GAN) as the overarching framework to facilitate unsupervised learning. The experimental results show that, compared with six SOTA image enhancement algorithms, our method achieves significant improvements in evaluation metrics (e.g., on LOL, PSNR: 15.59%, SSIM: 9.77%, VIF: 9.65%), and it mitigates visual disbalance defects in low-light images captured in real-world underground coal mine scenarios.
{"title":"A TransISP Based Image Enhancement Method for Visual Disbalance in Low-light Images","authors":"Jiaqi Wu, Jing Guo, Rui Jing, Shihao Zhang, Zijian Tian, Wei Chen, Zehua Wang","doi":"10.1111/cgf.15209","DOIUrl":"https://doi.org/10.1111/cgf.15209","url":null,"abstract":"<p>Existing image enhancement algorithms often fail to effectively address issues of visual disbalance, such as brightness unevenness and color distortion, in low-light images. To overcome these challenges, we propose a TransISP-based image enhancement method specifically designed for low-light images. To mitigate color distortion, we design dual encoders based on decoupled representation learning, which enable complete decoupling of the reflection and illumination components, thereby preventing mutual interference during the image enhancement process. To address brightness unevenness, we introduce CNNformer, a hybrid model combining CNN and Transformer. This model efficiently captures local details and long-distance dependencies between pixels, contributing to the enhancement of brightness features across various local regions. Additionally, we integrate traditional image signal processing algorithms to achieve efficient color correction and denoising of the reflection component. Furthermore, we employ a generative adversarial network (GAN) as the overarching framework to facilitate unsupervised learning. The experimental results show that, compared with six SOTA image enhancement algorithms, our method obtains significant improvement in evaluation indexes (e.g., on LOL, PSNR: 15.59%, SSIM: 9.77%, VIF: 9.65%), and it can improve visual disbalance defects in low-light images captured from real-world coal mine underground scenarios.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce a novel framework for surface cutting and flattening, aiming to align the boundary of the planar parameterization with a target shape. Diverging from traditional methods focused on minimizing distortion, we also aim to achieve shape similarity between the parameterized mesh and a specific planar target, which is important in applications such as art design and texture mapping. However, since existing methods are commonly limited to ellipsoidal surfaces, solving this problem on general surfaces remains a challenge. Our framework models the general case as a joint optimization of cuts and parameterization, guided by a novel metric assessing shape similarity. To circumvent the common issue of local minima, we introduce an extra global seam updating strategy guided by the target shape. Experimental results show that our framework not only matches previous approaches on ellipsoidal surfaces but also achieves satisfactory results on more complex ones.
{"title":"Surface Cutting and Flattening to Target Shapes","authors":"Yuanhao Li, Wenzheng Wu, Ligang Liu","doi":"10.1111/cgf.15223","DOIUrl":"https://doi.org/10.1111/cgf.15223","url":null,"abstract":"<p>We introduce a novel framework for surface cutting and flattening, aiming to align the boundary of planar parameterization with a target shape. Diverging from traditional methods focused on minimizing distortion, we intend to also achieve shape similarity between the parameterized mesh and a specific planar target, which is important in some applications of art design and texture mapping. However, with existing methods commonly limited to ellipsoidal surfaces, it still remains a challenge to solve this problem on general surfaces. Our framework models the general case as a joint optimization of cuts and parameterization, guided by a novel metric assessing shape similarity. To circumvent the common issue of local minima, we introduce an extra global seam updating strategy which is guided by the target shape. Experimental results show that our framework not only aligns with previous approaches on ellipsoidal surfaces but also achieves satisfactory results on more complex ones.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unsupervised domain adaptation (UDA) is increasingly used for 3D point cloud semantic segmentation tasks due to its ability to address the issue of missing labels for new domains. However, most existing unsupervised domain adaptation methods focus only on uni-modal data and are rarely applied to multi-modal data. Therefore, we propose a cross-modal UDA method for 3D semantic segmentation on multi-modal datasets that contain 3D point clouds and 2D images. Specifically, we first propose a Dual discriminator-based Domain Adaptation (Dd-bDA) module to enhance adaptability across different domains. Second, given that the robustness of depth information to domain shifts can provide more details for semantic segmentation, we further employ a Dense depth Feature Fusion (DdFF) module to extract image features with rich depth cues. We evaluate our model in four unsupervised domain adaptation scenarios, i.e., dataset-to-dataset (A2D2 → SemanticKITTI), day-to-night, country-to-country (USA → Singapore), and synthetic-to-real (VirtualKITTI → SemanticKITTI). In all settings, our method achieves significant improvements and surpasses state-of-the-art models.
{"title":"Adversarial Unsupervised Domain Adaptation for 3D Semantic Segmentation with 2D Image Fusion of Dense Depth","authors":"Xindan Zhang, Ying Li, Huankun Sheng, Xinnian Zhang","doi":"10.1111/cgf.15250","DOIUrl":"https://doi.org/10.1111/cgf.15250","url":null,"abstract":"<p>Unsupervised domain adaptation (UDA) is increasingly used for 3D point cloud semantic segmentation tasks due to its ability to address the issue of missing labels for new domains. However, most existing unsupervised domain adaptation methods focus only on uni-modal data and are rarely applied to multi-modal data. Therefore, we propose a cross-modal UDA on multi-modal datasets that contain 3D point clouds and 2D images for 3D Semantic Segmentation. Specifically, we first propose a Dual discriminator-based Domain Adaptation (Dd-bDA) module to enhance the adaptability of different domains. Second, given that the robustness of depth information to domain shifts can provide more details for semantic segmentation, we further employ a Dense depth Feature Fusion (DdFF) module to extract image features with rich depth cues. We evaluate our model in four unsupervised domain adaptation scenarios, i.e., dataset-to-dataset (A2D2 → SemanticKITTI), Day-to-Night, country-to-country (USA → Singapore), and synthetic-to-real (VirtualKITTI → SemanticKITTI). In all settings, the experimental results achieve significant improvements and surpass state-of-the-art models.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a novel method for generating symmetric piecewise developable approximations for shapes with approximate global reflectional or rotational symmetry. Given a shape and its symmetry constraint, the algorithm consists of two crucial steps: (i) a symmetric deformation to achieve a nearly developable model and (ii) a symmetric segmentation aided by the deformed shape. The key to the deformation step is the use of symmetric implicit neural representations of the shape and the deformation field. A new mesh extraction from the implicit function is introduced to construct a strictly symmetric mesh for the subsequent segmentation. The symmetry constraint is carefully integrated into the partition to achieve the symmetric piecewise developable approximation. We demonstrate the effectiveness of our algorithm on various meshes.
{"title":"Symmetric Piecewise Developable Approximations","authors":"Ying He, Qing Fang, Zheng Zhang, Tielin Dai, Kang Wu, Ligang Liu, Xiao-Ming Fu","doi":"10.1111/cgf.15242","DOIUrl":"https://doi.org/10.1111/cgf.15242","url":null,"abstract":"<p>We propose a novel method for generating symmetric piecewise developable approximations for shapes in approximately global reflectional or rotational symmetry. Given a shape and its symmetry constraint, the algorithm contains two crucial steps: (i) a symmetric deformation to achieve a nearly developable model and (ii) a symmetric segmentation aided by the deformed shape. The key to the deformation step is the use of the symmetric implicit neural representations of the shape and the deformation field. A new mesh extraction from the implicit function is introduced to construct a strictly symmetric mesh for the subsequent segmentation. The symmetry constraint is carefully integrated into the partition to achieve the symmetric piecewise developable approximation. We demonstrate the effectiveness of our algorithm over various meshes.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The 3D Gaussian Splatting technique has significantly advanced the construction of radiance fields from multi-view images, enabling real-time rendering. While point-based rasterization effectively reduces computational demands for rendering, it often struggles to accurately reconstruct the geometry of the target object, especially under strong lighting conditions. Strong lighting can cause significant color variations on the object's surface when viewed from different directions, complicating the reconstruction process. To address this challenge, we introduce an approach that combines octree-based implicit surface representations with Gaussian Splatting. Initially, it reconstructs a signed distance field (SDF) and a radiance field through volume rendering, encoding them in a low-resolution octree. This initial SDF represents the coarse geometry of the target object. Subsequently, it introduces 3D Gaussians as additional degrees of freedom, which are guided by the initial SDF. In the third stage, the optimized Gaussians enhance the accuracy of the SDF, enabling the recovery of finer geometric details compared to the initial SDF. Finally, the refined SDF is used to further optimize the 3D Gaussians via splatting, eliminating those that contribute little to the visual appearance. Experimental results show that our method, which leverages the distribution of 3D Gaussians with SDFs, reconstructs more accurate geometry, particularly in images with specular highlights caused by strong lighting. The source code can be downloaded from https://github.com/LaoChui999/GS-Octree.
{"title":"GS-Octree: Octree-based 3D Gaussian Splatting for Robust Object-level 3D Reconstruction Under Strong Lighting","authors":"J. Li, Z. Wen, L. Zhang, J. Hu, F. Hou, Z. Zhang, Y. He","doi":"10.1111/cgf.15206","DOIUrl":"https://doi.org/10.1111/cgf.15206","url":null,"abstract":"<p>The 3D Gaussian Splatting technique has significantly advanced the construction of radiance fields from multi-view images, enabling real-time rendering. While point-based rasterization effectively reduces computational demands for rendering, it often struggles to accurately reconstruct the geometry of the target object, especially under strong lighting conditions. Strong lighting can cause significant color variations on the object's surface when viewed from different directions, complicating the reconstruction process. To address this challenge, we introduce an approach that combines octree-based implicit surface representations with Gaussian Splatting. Initially, it reconstructs a signed distance field (SDF) and a radiance field through volume rendering, encoding them in a low-resolution octree. This initial SDF represents the coarse geometry of the target object. Subsequently, it introduces 3D Gaussians as additional degrees of freedom, which are guided by the initial SDF. In the third stage, the optimized Gaussians enhance the accuracy of the SDF, enabling the recovery of finer geometric details compared to the initial SDF. Finally, the refined SDF is used to further optimize the 3D Gaussians via splatting, eliminating those that contribute little to the visual appearance. Experimental results show that our method, which leverages the distribution of 3D Gaussians with SDFs, reconstructs more accurate geometry, particularly in images with specular highlights caused by strong lighting. The source code can be downloaded from https://github.com/LaoChui999/GS-Octree.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a novel ray reordering technique designed to accelerate ray tracing by encoding and sorting rays prior to traversal. Our method, called “hierarchy cut code”, encodes rays based on cuts of the hierarchical acceleration structure, rather than relying solely on spatial coordinates. This approach adapts more effectively to the acceleration structure, resulting in a more reliable and efficient encoding. Furthermore, our research identifies “bounding drift” as a major obstacle that prevents existing reordering methods from benefiting from longer sorting keys. Our hierarchy cut code overcomes this issue, providing improved ray tracing performance. Experimental results demonstrate the effectiveness of our approach, showing secondary ray tracing up to 1.81 times faster than existing methods. These results highlight the potential for further improving the acceleration effect of reordering techniques and warrant further exploration in this area.
{"title":"Faster Ray Tracing through Hierarchy Cut Code","authors":"WeiLai Xiang, FengQi Liu, Zaonan Tan, Dan Li, PengZhan Xu, MeiZhi Liu, QiLong Kou","doi":"10.1111/cgf.15226","DOIUrl":"https://doi.org/10.1111/cgf.15226","url":null,"abstract":"<p>We propose a novel ray reordering technique designed to accelerate the ray tracing process by encoding and sorting rays prior to traversal. Our method, called “hierarchy cut code”, involves encoding rays based on the cuts of the hierarchical acceleration structure, rather than relying solely on spatial coordinates. This approach allows for a more effective adaptation to the acceleration structure, resulting in a more reliable and efficient encoding outcome. Furthermore, our research identifies “bounding drift” as a major obstacle in achieving better acceleration effects using longer sorting keys in existing reordering methods. Fortunately, our hierarchy cut code successfully overcomes this issue, providing improved performance in ray tracing. Experimental results demonstrate the effectiveness of our approach, showing up to a 1.81 times faster secondary ray tracing compared to existing methods. These promising results highlight the potential for further enhancement in the acceleration effect of reordering techniques, warranting further exploration and research in this exciting field.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lifespan face age transformation aims to generate facial images that accurately depict an individual's appearance at different age stages. This task is highly challenging because it requires plausible changes in facial features while preserving identity characteristics. Existing methods tend to synthesize unsatisfactory results, such as entangled facial attributes and low identity preservation, especially when dealing with large age gaps. Furthermore, over-manipulating the style vector may cause it to deviate from the latent space and damage image quality. To address these issues, this paper introduces a novel nonlinear regression model, Disentangled Lifespan face Aging (DL-Aging), to achieve high-quality age transformation. Specifically, we propose an age modulation encoder to extract age-related multi-scale facial features as key and value, and use the reconstructed style vector of the image as the query. Multi-head cross-attention in the W+ space is used to iteratively update the query for aging image reconstruction. This nonlinear transformation enables the model to learn a more disentangled mode of transformation, which is crucial for alleviating facial attribute entanglement. Additionally, we introduce a W+ space age regularization term to prevent excessive manipulation of the style vector and ensure it remains within the W+ space during transformation, thereby improving generation quality and aging accuracy. Extensive qualitative and quantitative experiments demonstrate that the proposed DL-Aging outperforms state-of-the-art methods in aging accuracy, image quality, attribute disentanglement, and identity preservation, especially for large age gaps.
{"title":"Disentangled Lifespan Synthesis via Transformer-Based Nonlinear Regression","authors":"Mingyuan Li, Yingchun Guo","doi":"10.1111/cgf.15229","DOIUrl":"https://doi.org/10.1111/cgf.15229","url":null,"abstract":"<p>Lifespan face age transformation aims to generate facial images that accurately depict an individual's appearance at different age stages. This task is highly challenging due to the need for reasonable changes in facial features while preserving identity characteristics. Existing methods tend to synthesize unsatisfactory results, such as entangled facial attributes and low identity preservation, especially when dealing with large age gaps. Furthermore, over-manipulating the style vector may deviate it from the latent space and damage image quality. To address these issues, this paper introduces a novel nonlinear regression model-<b>D</b>isentangled <b>L</b>ifespan face <b>Aging</b> (DL-Aging) to achieve high-quality age transformation images. Specifically, we propose an age modulation encoder to extract age-related multi-scale facial features as key and value, and use the reconstructed style vector of the image as the query. The multi-head cross-attention in the W<sup>+</sup> space is utilized to update the query for aging image reconstruction iteratively. This nonlinear transformation enables the model to learn a more disentangled mode of transformation, which is crucial for alleviating facial attribute entanglement. Additionally, we introduce a W<sup>+</sup> space age regularization term to prevent excessive manipulation of the style vector and ensure it remains within the W<sup>+</sup> space during transformation, thereby improving generation quality and aging accuracy. Extensive qualitative and quantitative experiments demonstrate that the proposed DL-Aging outperforms state-of-the-art methods regarding aging accuracy, image quality, attribute disentanglement, and identity preservation, especially for large age gaps.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Existing image dehazing methods have made remarkable progress. However, they generally perform poorly on images with dense haze, and often produce unsatisfactory results with detail degradation or color distortion. In this paper, we propose a density-aware diffusion model (DADM) for image dehazing. Guided by the haze density, DADM can handle images with dense haze and complex environments. Specifically, we introduce a density-aware dehazing network (DADNet) in the reverse diffusion process, which helps DADM gradually recover a clear haze-free image from a hazy input. To improve the performance of the network, we design a cross-feature density extraction module (CDEModule) to extract the haze density of the image and a density-guided feature fusion block (DFFBlock) to learn effective contextual features. Furthermore, we introduce an indirect sampling strategy in the test sampling process, which not only suppresses the accumulation of errors but also ensures the stability of the results. Extensive experiments on popular benchmarks validate the superior performance of the proposed method. The code is released at https://github.com/benchacha/DADM.
{"title":"Density-Aware Diffusion Model for Efficient Image Dehazing","authors":"Ling Zhang, Wenxu Bai, Chunxia Xiao","doi":"10.1111/cgf.15221","DOIUrl":"https://doi.org/10.1111/cgf.15221","url":null,"abstract":"<p>Existing image dehazing methods have made remarkable progress. However, they generally perform poorly on images with dense haze, and often suffer from unsatisfactory results with detail degradation or color distortion. In this paper, we propose a density-aware diffusion model (DADM) for image dehazing. Guided by the haze density, our DADM can handle images with dense haze and complex environments. Specifically, we introduce a density-aware dehazing network (DADNet) in the reverse diffusion process, which can help DADM gradually recover a clear haze-free image from a haze image. To improve the performance of the network, we design a cross-feature density extraction module (CDEModule) to extract the haze density for the image and a density-guided feature fusion block (DFFBlock) to learn the effective contextual features. Furthermore, we introduce an indirect sampling strategy in the test sampling process, which not only suppresses the accumulation of errors but also ensures the stability of the results. Extensive experiments on popular benchmarks validate the superior performance of the proposed method. The code is released in https://github.com/benchacha/DADM.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image triangulation methods, which decompose an image into a series of triangles, are fundamental in artistic creation and image processing. This paper introduces a novel framework that integrates cubic Bézier curves into image triangulation, enabling the precise reconstruction of curved image features. Our developed framework constructs a well-structured curved triangle mesh, effectively preventing overlaps between curves. A refined energy function, grounded in differentiable rendering, establishes a direct link between mesh geometry and rendering effects and is instrumental in guiding the curved mesh generation. Additionally, we derive an explicit gradient formula with respect to mesh parameters, facilitating the adaptive and efficient optimization of these parameters to fully leverage the capabilities of cubic Bézier curves. Through experimental and comparative analyses with state-of-the-art methods, our approach demonstrates a significant enhancement in both numerical accuracy and visual quality.
{"title":"Curved Image Triangulation Based on Differentiable Rendering","authors":"Wanyi Wang, Zhonggui Chen, Lincong Fang, Juan Cao","doi":"10.1111/cgf.15232","DOIUrl":"https://doi.org/10.1111/cgf.15232","url":null,"abstract":"<p>Image triangulation methods, which decompose an image into a series of triangles, are fundamental in artistic creation and image processing. This paper introduces a novel framework that integrates cubic Bézier curves into image triangulation, enabling the precise reconstruction of curved image features. Our developed framework constructs a well-structured curved triangle mesh, effectively preventing overlaps between curves. A refined energy function, grounded in differentiable rendering, establishes a direct link between mesh geometry and rendering effects and is instrumental in guiding the curved mesh generation. Additionally, we derive an explicit gradient formula with respect to mesh parameters, facilitating the adaptive and efficient optimization of these parameters to fully leverage the capabilities of cubic Bézier curves. Through experimental and comparative analyses with state-of-the-art methods, our approach demonstrates a significant enhancement in both numerical accuracy and visual quality.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural surface reconstruction methods have demonstrated their ability to recover 3D surfaces from multiple images. However, current approaches struggle to rapidly achieve high-fidelity surface reconstructions. In this work, we propose TaNSR, which inherits the speed advantages of multi-resolution hash encodings and extends their representation capabilities. To reduce training time, we propose an efficient numerical gradient computation method that significantly reduces additional memory access overhead. To further improve reconstruction quality and expedite training, we propose a feature aggregation strategy in volume rendering. Building on this, we introduce an adaptively weighted aggregation function to ensure the network can accurately reconstruct object surfaces and recover more geometric details. Experiments on multiple datasets indicate that TaNSR significantly reduces training time while achieving better reconstruction accuracy compared to state-of-the-art neural implicit methods.
{"title":"TaNSR:Efficient 3D Reconstruction with Tetrahedral Difference and Feature Aggregation","authors":"Zhaohan Lv, Xingcan Bao, Yong Tang, Jing Zhao","doi":"10.1111/cgf.15207","DOIUrl":"https://doi.org/10.1111/cgf.15207","url":null,"abstract":"<p>Neural surface reconstruction methods have demonstrated their ability to recover 3D surfaces from multiple images. However, current approaches struggle to rapidly achieve high-fidelity surface reconstructions. In this work, we propose TaNSR, which inherits the speed advantages of multi-resolution hash encodings and extends its representation capabilities. To reduce training time, we propose an efficient numerical gradient computation method that significantly reduces additional memory access overhead. To further improve reconstruction quality and expedite training, we propose a feature aggregation strategy in volume rendering. Building on this, we introduce an adaptively weighted aggregation function to ensure the network can accurately reconstruct the surface of objects and recover more geometric details. Experiments on multiple datasets indicate that TaNSR significantly reduces training time while achieving better reconstruction accuracy compared to state-of-the-art nerual implicit methods.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}