
Computer Graphics Forum: Latest Publications

Importance Sampling of the Micrograin Visible NDF
IF 2.9 | CAS Zone 4 (Computer Science) | Q2 Computer Science, Software Engineering | Pub Date: 2025-07-24 | DOI: 10.1111/cgf.70174
S. Lucas, R. Pacanowski, P. Barla

Importance sampling of visible normal distribution functions (vNDF) is a required ingredient for the efficient rendering of microfacet-based materials. In this paper, we explain how to sample the vNDF for the micrograin material model [LRPB23], which has been recently improved to handle height-normal correlations through a new Geometric Attenuation Factor (GAF) [LRPB24], leading to a stronger impact on appearance compared to the earlier Smith approximation. To this end, we make two contributions: we derive analytic expressions for the marginal and conditional cumulative distribution functions (CDFs) of the vNDF; we provide efficient methods for inverting these CDFs based respectively on a 2D lookup table and on the triangle-cut method [Hei20].
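The two-step factorization named above (invert a marginal CDF, then a conditional CDF) can be illustrated generically. The sketch below, assuming NumPy, tabulates an arbitrary positive 2D density and samples it by inverting the two CDFs with uniform variates; the paper's analytic micrograin CDFs and the triangle-cut inversion are not reproduced here, so `density` is only a stand-in.

```python
import numpy as np

# Illustrative sketch: generic marginal/conditional inverse-CDF sampling of a
# tabulated 2D density. This mirrors the factorization used for vNDF sampling
# but does NOT implement the micrograin model's analytic CDFs.

def build_cdfs(density):
    """Return the marginal CDF over rows and conditional CDFs over columns."""
    marginal = density.sum(axis=1)
    cdf_marginal = np.cumsum(marginal) / marginal.sum()
    conditional = density / density.sum(axis=1, keepdims=True)
    cdf_conditional = np.cumsum(conditional, axis=1)
    return cdf_marginal, cdf_conditional

def sample(cdf_marginal, cdf_conditional, u1, u2):
    """Invert the marginal, then the conditional CDF, with uniforms in [0,1)."""
    i = min(int(np.searchsorted(cdf_marginal, u1)), len(cdf_marginal) - 1)
    j = min(int(np.searchsorted(cdf_conditional[i], u2)),
            cdf_conditional.shape[1] - 1)
    return i, j

rng = np.random.default_rng(0)
density = rng.random((64, 64)) + 1e-6      # arbitrary positive stand-in density
cdf_m, cdf_c = build_cdfs(density)
samples = [sample(cdf_m, cdf_c, *rng.random(2)) for _ in range(10000)]
```

In the paper, the conditional inversion is instead done with a 2D lookup table or the triangle-cut method; the numerical `searchsorted` here is only the simplest possible stand-in.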

Citations: 0
Neural field multi-view shape-from-polarisation
IF 2.9 | CAS Zone 4 (Computer Science) | Q2 Computer Science, Software Engineering | Pub Date: 2025-07-24 | DOI: 10.1111/cgf.70177
R. Wanaset, G. C. Guarnera, W. A. P. Smith

We tackle the problem of multi-view shape-from-polarisation using a neural implicit surface representation and volume rendering of a polarised neural radiance field (P-NeRF). The P-NeRF predicts the parameters of a mixed diffuse/specular polarisation model. This directly relates polarisation behaviour to the surface normal without explicitly modelling illumination or BRDF. Via the implicit surface representation, this allows polarisation to directly inform the estimated geometry. This improves shape estimation and also allows separation of diffuse and specular radiance. For polarimetric images from division-of-focal-plane sensors, we fit directly to the raw data without first demosaicing. This avoids fitting to demosaicing artefacts and we propose losses and saturation masking specifically to handle HDR measurements. Our method achieves state-of-the-art performance on the PANDORA benchmark. We apply our method in a lightstage setting, providing single-shot face capture.

Citations: 0
Controllable Biophysical Human Faces
IF 2.9 | CAS Zone 4 (Computer Science) | Q2 Computer Science, Software Engineering | Pub Date: 2025-07-24 | DOI: 10.1111/cgf.70170
Minghao Liu, Stephane Grabli, Sébastien Speierer, Nikolaos Sarafianos, Lukas Bode, Matt Chiang, Christophe Hery, James Davis, Carlos Aliaga

We present a novel generative model that synthesizes photorealistic, biophysically plausible faces by capturing the intricate relationships between facial geometry and biophysical attributes. Our approach models facial appearance in a biophysically grounded manner, allowing for the editing of both high-level attributes such as age and gender, as well as low-level biophysical properties such as melanin level and blood content. This enables continuous modeling of physical skin properties that correlate changes in skin properties with shape changes. We showcase the capabilities of our framework beyond its role as a generative model through two practical applications: editing the texture maps of 3D faces that have already been captured, and serving as a strong prior for face reconstruction when combined with differentiable rendering. Our model allows for the creation of physically-based relightable, editable faces with consistent topology and uv layout that can be integrated into traditional computer graphics pipelines.

Citations: 0
Multiview Geometric Regularization of Gaussian Splatting for Accurate Radiance Fields
IF 2.9 | CAS Zone 4 (Computer Science) | Q2 Computer Science, Software Engineering | Pub Date: 2025-07-24 | DOI: 10.1111/cgf.70179
Jungeon Kim, Geonsoo Park, Seungyong Lee

Recent methods, such as 2D Gaussian Splatting and Gaussian Opacity Fields, have aimed to address the geometric inaccuracies of 3D Gaussian Splatting while retaining its superior rendering quality. However, these approaches still struggle to reconstruct smooth and reliable geometry, particularly in scenes with significant color variation across viewpoints, due to their per-point appearance modeling and single-view optimization constraints. In this paper, we propose an effective multiview geometric regularization strategy that integrates multiview stereo (MVS) depth, RGB, and normal constraints into Gaussian Splatting initialization and optimization. Our key insight is the complementary relationship between MVS-derived depth points and Gaussian Splatting-optimized positions: MVS robustly estimates geometry in regions of high color variation through local patch-based matching and epipolar constraints, whereas Gaussian Splatting provides more reliable and less noisy depth estimates near object boundaries and regions with lower color variation. To leverage this insight, we introduce a median depth-based multiview relative depth loss with uncertainty estimation, effectively integrating MVS depth information into Gaussian Splatting optimization. We also propose an MVS-guided Gaussian Splatting initialization to avoid Gaussians falling into suboptimal positions. Extensive experiments validate that our approach successfully combines these strengths, enhancing both geometric accuracy and rendering quality across diverse indoor and outdoor scenes.
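The "median depth-based multiview relative depth loss" can be sketched in a generic form: normalize each depth map by its own median so the comparison is scale-free, and weight residuals by a per-pixel uncertainty. This is a minimal illustration of the ingredients the abstract names, assuming NumPy; the function name and the exact weighting are illustrative, not the paper's formulation.

```python
import numpy as np

# Hedged sketch: median-normalized relative depth loss between a rendered
# depth map and an MVS depth map, down-weighted by per-pixel uncertainty.
# Dividing by each map's median removes the global scale ambiguity between
# the two depth sources before they are compared.

def median_relative_depth_loss(d_render, d_mvs, weight):
    """Mean weighted absolute difference of median-normalized depths."""
    r = d_render / np.median(d_render)
    m = d_mvs / np.median(d_mvs)
    return float(np.mean(weight * np.abs(r - m)))

rng = np.random.default_rng(1)
d_mvs = rng.uniform(1.0, 5.0, size=(32, 32))
d_render = 2.5 * d_mvs                       # same geometry up to global scale
loss = median_relative_depth_loss(d_render, d_mvs, np.ones_like(d_mvs))
```

Because both maps are divided by their own medians, a rendered depth that matches the MVS depth up to a global scale produces a near-zero loss, which is the property such a relative loss is designed to have.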

Citations: 0
Differentiable Search Based Halftoning
IF 2.9 | CAS Zone 4 (Computer Science) | Q2 Computer Science, Software Engineering | Pub Date: 2025-07-24 | DOI: 10.1111/cgf.70173
E. Luci, K. T. Wijaya, V. Babaei

Halftoning is fundamental to image reproduction on devices with a limited set of output levels, such as printers. Halftoning algorithms reproduce continuous-tone images by distributing dots with a fixed tone but variable size or spacing. Search-based approaches optimize for a dot distribution that minimizes a given visual loss function w.r.t. an input image. This class of methods is not only the most intuitive and versatile but can also yield the highest quality results depending on the merit of the employed loss function. However, their combinatorial nature makes them computationally inefficient. We introduce the first differentiable search-based halftoning algorithm. Our proposed method can be natively used to perform multi-color, multi-level halftoning. Our main insight lies in introducing a relaxation in the discrete choice of dot assignment during the backward pass of the optimization. We achieve this by associating a fictitious distance from the image plane to each dot, embedding the problem in three dimensions. We also introduce a novel loss component that operates in the frequency domain and provides a better visual loss when combined with existing image similarity metrics. We validate our approach by demonstrating that it outperforms stochastic optimization methods in both speed and objective value, while also scaling significantly better to large images. The code is available at https://gitlab.mpi-klsb.mpg.de/aidam-public/differentiable-halftoning
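The core relaxation idea — a hard discrete dot decision in the forward pass, differentiated through a smooth surrogate in the backward pass — can be sketched with a fictitious signed distance z per dot. The names, the sigmoid surrogate, and the sharpness parameter below are illustrative assumptions in the spirit of straight-through estimators, not the paper's implementation.

```python
import numpy as np

# Hedged sketch: each candidate dot carries a fictitious signed distance z
# from the image plane. The forward pass makes a hard on/off decision, while
# gradients flow through a sigmoid surrogate of z, so the discrete assignment
# becomes optimizable by gradient descent.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_hard(z):
    """Hard dot assignment: a dot is printed iff it lies 'above' the plane."""
    return (z > 0.0).astype(float)

def backward_soft(z, upstream_grad, sharpness=4.0):
    """Backward pass: derivative of the sigmoid surrogate replaces the
    (zero almost everywhere) derivative of the hard step."""
    s = sigmoid(sharpness * z)
    return upstream_grad * sharpness * s * (1.0 - s)
```

Raising `sharpness` over the course of optimization would tighten the surrogate toward the hard step; whether and how the paper anneals its relaxation is not shown here.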

Citations: 0
High-Fidelity Texture Transfer Using Multi-Scale Depth-Aware Diffusion
IF 2.9 | CAS Zone 4 (Computer Science) | Q2 Computer Science, Software Engineering | Pub Date: 2025-07-24 | DOI: 10.1111/cgf.70172
Rongzhen Lin, Zichong Chen, Xiaoyong Hao, Yang Zhou, Hui Huang

Textures are a key component of 3D assets. Transferring textures from one shape to another, without user interaction or additional semantic guidance, is a classical yet challenging problem. It can enhance the diversity of existing shape collections, augmenting their application scope. This paper proposes an innovative 3D texture transfer framework that leverages the generative power of pre-trained diffusion models. While diffusion models have achieved significant success in 2D image generation, their application to 3D domains faces great challenges in preserving coherence across different viewpoints. Addressing this issue, we designed a multi-scale generation framework to optimize the UV maps coarse-to-fine. To ensure multi-view consistency, we use depth info as geometric guidance; meanwhile, a novel consistency loss is proposed to further constrain the color coherence and reduce artifacts. Experimental results demonstrate that our multi-scale framework not only produces high-quality texture transfer results but also excels in handling complex shapes while preserving correct semantic correspondences. Compared to existing techniques, our method achieves improvements in both consistency and texture clarity, as well as time efficiency.

Citations: 0
Real-Time Image-based Lighting of Glints
IF 2.9 | CAS Zone 4 (Computer Science) | Q2 Computer Science, Software Engineering | Pub Date: 2025-07-24 | DOI: 10.1111/cgf.70175
Tom Kneiphof, Reinhard Klein

Image-based lighting is a widely used technique to reproduce shading under real-world lighting conditions, especially in real-time rendering applications. A particularly challenging scenario involves materials exhibiting a sparkling or glittering appearance, caused by discrete microfacets scattered across their surface. In this paper, we propose an efficient approximation for image-based lighting of glints, enabling fully dynamic material properties and environment maps. Our novel approach is grounded in real-time glint rendering under area light illumination and employs standard environment map filtering techniques. Crucially, our environment map filtering process is sufficiently fast to be executed on a per-frame basis. Our method assumes that the environment map is partitioned into few homogeneous regions of constant radiance. By filtering the corresponding indicator functions with the normal distribution function, we obtain the probabilities for individual microfacets to reflect light from each region. During shading, these probabilities are utilized to hierarchically sample a multinomial distribution, facilitated by our novel dual-gated Gaussian approximation of binomial distributions. We validate that our real-time approximation is close to ground-truth renderings for a range of material properties and lighting conditions, and demonstrate robust and stable performance, with little overhead over rendering glints from a single directional light. Compared to rendering smooth materials without glints, our approach requires twice as much memory to store the prefiltered environment map.
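The counting problem at the heart of this — how many of n microfacets reflect light from a region hit with probability p — is a binomial draw, which a Gaussian approximates well when the variance is large. The sketch below, assuming NumPy, switches between an exact binomial sample and a clamped, rounded Gaussian; the threshold value and switching rule are illustrative, and the paper's dual-gated refinement of the Gaussian is not reproduced.

```python
import numpy as np

# Hedged sketch: sample the number of reflecting microfacets among n, each
# reflecting a given environment region with probability p. Small-variance
# cases use the exact binomial; large-variance cases use the standard
# Gaussian approximation N(np, np(1-p)), rounded and clamped to [0, n].

def sample_glint_count(n, p, rng, gauss_threshold=64.0):
    variance = n * p * (1.0 - p)
    if variance < gauss_threshold:
        return int(rng.binomial(n, p))            # exact for small counts
    mu = n * p
    sigma = np.sqrt(variance)
    x = rng.normal(mu, sigma)                     # Gaussian approximation
    return int(np.clip(np.round(x), 0, n))

rng = np.random.default_rng(2)
counts = [sample_glint_count(100000, 0.3, rng) for _ in range(2000)]
```

The Gaussian branch is what makes per-pixel, per-frame evaluation cheap: it costs one normal variate regardless of n, whereas exact binomial sampling grows more expensive as counts rise.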

Citations: 0
Continuous-Line Image Stylization Based on Hilbert Curve
IF 2.9 | CAS Zone 4 (Computer Science) | Q2 Computer Science, Software Engineering | Pub Date: 2025-07-24 | DOI: 10.1111/cgf.70169
Zhifang Tong, Bolei Zuov, Xiaoxia Yang, Shengjun Liu, Xinru Liu

Horizontal and vertical lines hold significant aesthetic and psychological importance, providing a sense of order, stability, and security. This paper presents an image stylization method that quickly generates non-self-intersecting and regular continuous lines based on the Hilbert curve, a well-known space-filling curve consisting of only horizontal and vertical segments. We first calculate the grayscale threshold based on gray quantization for the original image and recursively subdivide the cells according to the density in each cell. To avoid generating new feature curves due to limited gray quantization, a recursive subdivision with probability is designed to smooth the density. Then, we utilize the rule of Hilbert curve to generate continuous lines connecting all the cells. Between different degrees of Hilbert curves, bridge curves composed of horizontal and vertical lines are constructed, which are also intersection-free, instead of a straight line linking them directly. There are two parameters provided for feasibly adjusting variate effects. The image stylization framework could be generalized to other space-filling curves like the Peano curve. Compared to existing methods, our approach can generate pleasing results quickly and is fully automated. Many results show our method is robust and effective.
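The building block behind such continuous-line layouts is the index-to-coordinate mapping of the Hilbert curve, which visits every cell of a 2^k x 2^k grid exactly once while moving only horizontally or vertically. The sketch below is the classic iterative mapping, shown for context; the paper's density-driven recursive subdivision and bridge-curve construction between curve orders are not reproduced.

```python
# Hedged sketch: the standard iterative conversion of a distance d along a
# 2^order x 2^order Hilbert curve into (x, y) grid coordinates. Consecutive
# indices map to 4-connected cells, which is what yields a single continuous,
# non-self-intersecting line of horizontal and vertical segments.

def hilbert_d2xy(order, d):
    """Convert curve distance d to (x, y) on a 2^order x 2^order grid."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# an order-3 curve traverses all 64 cells of an 8x8 grid
path = [hilbert_d2xy(3, d) for d in range(64)]
```

Subdividing a cell to a deeper curve order where the image is darker, as the paper does, locally densifies this path without breaking its continuity.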

Citations: 0
Wavelet Representation and Sampling of Complex Luminaires
IF 2.9 | CAS Zone 4 (Computer Science) | Q2 Computer Science, Software Engineering | Pub Date: 2025-07-24 | DOI: 10.1111/cgf.70163
A. Atanasov, V. Koylazov

We contribute a technique for rendering the illumination of complex luminaires based on wavelet-compressed light fields while the direct appearance of the luminaire is handled with previous techniques. During a brief photon tracing phase, we pre-compute the radiance field of the luminaire. Then, we employ a compression scheme which is designed to facilitate fast per-ray run-time reconstructions of the field and importance sampling. To treat aliasing, we propose a two-component filtering solution: a 4D Gaussian filter during the pre-computation stage and a 4D stochastic Gaussian filter during rendering. We have developed an importance sampling strategy based on providing an initial guess from low-resolution and low-memory viewpoint samplers that is subsequently refined by a hierarchical process over the wavelet frequency bands. Our technique is straightforward to integrate in rendering systems and has all the features that make it practical for production renderers — MIS compatibility, brief pre-computation, low memory requirements, and efficient field evaluation and importance sampling.
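The compress-then-reconstruct idea underlying a wavelet-compressed light field can be shown in its simplest 1D form: a Haar transform of a sampled radiance slice, with small detail coefficients dropped, then an exact inverse. This is only a toy illustration assuming NumPy; the paper operates on a 4D field with a purpose-built compression scheme and hierarchical importance sampling.

```python
import numpy as np

# Hedged sketch: 1D Haar wavelet analysis/synthesis, plus a crude
# "compression" that zeroes small detail coefficients. Real light-field
# compression for luminaires is 4D and tuned for per-ray reconstruction.

def haar_forward(signal):
    """Decompose a power-of-two-length signal into detail bands + final average."""
    coeffs = []
    s = np.asarray(signal, dtype=float)
    while len(s) > 1:
        avg = (s[0::2] + s[1::2]) / 2.0
        diff = (s[0::2] - s[1::2]) / 2.0
        coeffs.append(diff)
        s = avg
    coeffs.append(s)                      # coarsest average, length 1
    return coeffs

def haar_inverse(coeffs):
    """Exact inverse of haar_forward."""
    s = coeffs[-1]
    for diff in reversed(coeffs[:-1]):
        out = np.empty(2 * len(s))
        out[0::2] = s + diff
        out[1::2] = s - diff
        s = out
    return s

x = np.sin(np.linspace(0.0, 4.0 * np.pi, 64))   # stand-in radiance slice
coeffs = haar_forward(x)
x_rec = haar_inverse(coeffs)
# drop small detail coefficients to "compress", then reconstruct lossily
compressed = [np.where(np.abs(c) > 0.05, c, 0.0) for c in coeffs[:-1]] + [coeffs[-1]]
x_lossy = haar_inverse(compressed)
```

Because each sample's reconstruction touches one coefficient per band, zeroed coefficients bound the pointwise error by the sum of the dropped magnitudes along that path, which is what makes thresholding a controlled form of compression.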

{"title":"Wavelet Representation and Sampling of Complex Luminaires","authors":"A. Atanasov,&nbsp;V. Koylazov","doi":"10.1111/cgf.70163","DOIUrl":"https://doi.org/10.1111/cgf.70163","url":null,"abstract":"<p>We contribute a technique for rendering the illumination of complex luminaires based on wavelet-compressed light fields while the direct appearance of the luminaire is handled with previous techniques. During a brief photon tracing phase, we pre-compute the radiance field of the luminaire. Then, we employ a compression scheme which is designed to facilitate fast per-ray run-time reconstructions of the field and importance sampling. To treat aliasing, we propose a two-component filtering solution: a 4D Gaussian filter during the pre-computation stage and a 4D stochastic Gaussian filter during rendering. We have developed an importance sampling strategy based on providing an initial guess from low-resolution and low-memory viewpoint samplers that is subsequently refined by a hierarchical process over the wavelet frequency bands. Our technique is straightforward to integrate in rendering systems and has all the features that make it practical for production renderers — MIS compatibility, brief pre-computation, low memory requirements, and efficient field evaluation and importance sampling.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 4","pages":""},"PeriodicalIF":2.9,"publicationDate":"2025-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144768035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
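The abstract outlines the pipeline (photon-traced radiance field, wavelet compression, per-ray reconstruction) but not the paper's 4D basis or thresholds. The generic compress-by-thresholding idea can be illustrated with one level of a 2D orthonormal Haar transform; all names and the `keep` parameter are our assumptions, not the paper's:

```python
import numpy as np

def haar2d(a: np.ndarray) -> np.ndarray:
    """One level of the 2D orthonormal Haar transform (rows, then columns)."""
    h = (a[0::2] + a[1::2]) / np.sqrt(2)   # row averages
    g = (a[0::2] - a[1::2]) / np.sqrt(2)   # row details
    a = np.vstack([h, g])
    h = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    g = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    return np.hstack([h, g])

def ihaar2d(c: np.ndarray) -> np.ndarray:
    """Invert haar2d (columns, then rows)."""
    n = c.shape[1] // 2
    h, g = c[:, :n], c[:, n:]
    a = np.empty_like(c)
    a[:, 0::2] = (h + g) / np.sqrt(2)
    a[:, 1::2] = (h - g) / np.sqrt(2)
    m = a.shape[0] // 2
    h, g = a[:m], a[m:]
    out = np.empty_like(a)
    out[0::2] = (h + g) / np.sqrt(2)
    out[1::2] = (h - g) / np.sqrt(2)
    return out

def compress(field: np.ndarray, keep: float = 0.1) -> np.ndarray:
    """Zero all but the largest-magnitude fraction `keep` of coefficients."""
    c = haar2d(field)
    thresh = np.quantile(np.abs(c), 1.0 - keep)
    c[np.abs(c) < thresh] = 0.0
    return c
```

Because the transform is orthonormal, `ihaar2d(haar2d(a))` reproduces `a` exactly, and zeroing small coefficients discards the least energy first; the paper applies the analogous idea to a 4D light field, with importance sampling refined hierarchically over the wavelet frequency bands.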
A wave-optics BSDF for correlated scatterers
IF 2.9 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-07-24 DOI: 10.1111/cgf.70167
Ruomai Yang, Juhyeon Kim, Adithya Pediredla, Wojciech Jarosz

We present a wave-optics-based BSDF for simulating the corona effect observed when viewing strong light sources through materials such as certain fabrics or glass surfaces with condensation. These visual phenomena arise from the interference of diffraction patterns caused by correlated, disordered arrangements of droplets or pores. Our method leverages the pair correlation function (PCF) to decouple the spatial relationships between scatterers from the diffraction behavior of individual scatterers. This two-level decomposition allows us to derive a physically based BSDF that provides explicit control over both scatterer shape and spatial correlation. We also introduce a practical importance sampling strategy for integrating our BSDF within a Monte Carlo renderer. Our simulation results and real-world comparisons demonstrate that the method can reliably reproduce the characteristics of the corona effects in various real-world diffractive materials.

{"title":"A wave-optics BSDF for correlated scatterers","authors":"Ruomai Yang,&nbsp;Juhyeon Kim,&nbsp;Adithya Pediredla,&nbsp;Wojciech Jarosz","doi":"10.1111/cgf.70167","DOIUrl":"https://doi.org/10.1111/cgf.70167","url":null,"abstract":"<p>We present a wave-optics-based BSDF for simulating the corona effect observed when viewing strong light sources through materials such as certain fabrics or glass surfaces with condensation. These visual phenomena arise from the interference of diffraction patterns caused by correlated, disordered arrangements of droplets or pores. Our method leverages the pair correlation function (PCF) to decouple the spatial relationships between scatterers from the diffraction behavior of individual scatterers. This two-level decomposition allows us to derive a physically based BSDF that provides explicit control over both scatterer shape and spatial correlation. We also introduce a practical importance sampling strategy for integrating our BSDF within a Monte Carlo renderer. Our simulation results and real-world comparisons demonstrate that the method can reliably reproduce the characteristics of the corona effects in various real-world diffractive materials.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 4","pages":""},"PeriodicalIF":2.9,"publicationDate":"2025-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144767950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
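The pair correlation function (PCF) that the decomposition above relies on can be estimated from a 2D scatterer layout with a standard histogram estimator. A sketch assuming a periodic square domain follows; the function and parameter names are ours, and this shows only the estimator, not the paper's spectral use of it:

```python
import numpy as np

def pair_correlation(points, box: float, r_edges: np.ndarray) -> np.ndarray:
    """Estimate the radial pair correlation function g(r) of a 2D point
    set in a periodic square box of side `box`. g(r) ~ 1 for a Poisson
    (uncorrelated) layout; g(r) < 1 where scatterers repel each other."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # pairwise separation vectors, minimum-image convention
    d = pts[:, None, :] - pts[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]  # each pair once
    counts, _ = np.histogram(r, bins=r_edges)
    # expected pair count per annulus for an ideal (uncorrelated) gas:
    # n(n-1)/2 pairs, each landing in the annulus with prob. area / box^2
    shell_area = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
    expected = 0.5 * n * (n - 1) * shell_area / box ** 2
    return counts / expected
```

Droplet or pore layouts with short-range repulsion show a depleted g(r) near the origin; it is exactly this kind of spatial correlation between scatterers that the paper's BSDF decouples from the diffraction behavior of an individual scatterer.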