
Latest Publications in Computer Graphics Forum

Learning to Rasterize Differentiably
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15145
C. Wu, H. Mailee, Z. Montazeri, T. Ritschel

Differentiable rasterization changes the standard formulation of primitive rasterization by enabling gradient flow from a pixel to its underlying triangles: distribution functions applied in different stages of rendering create a "soft" version of the original rasterizer. However, choosing the softening function that ensures the best performance and convergence to a desired goal requires trial and error. Previous work has analyzed and compared several combinations of softening operations. In this work, we go a step further and, instead of making a combinatorial choice of softening operations, parameterize the continuous space of common softening operations. We meta-learn tunable softness functions over a set of inverse rendering tasks (2D and 3D shape, pose, and occlusion) so that near-optimal softness transfers to new and unseen differentiable rendering tasks.
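A minimal sketch of the kind of softening being parameterized, assuming a sigmoid over the signed distance from a pixel center to the triangle boundary with a scalar softness parameter (the paper's actual parameterization covers a continuous family of such operations and is not reproduced here):

```python
import numpy as np

def soft_coverage(signed_distance, softness):
    """Smooth pixel coverage: a sigmoid of the signed distance from the
    pixel center to the triangle boundary (positive = inside).
    As softness -> 0 this approaches the hard 0/1 rasterizer."""
    return 1.0 / (1.0 + np.exp(-signed_distance / max(softness, 1e-8)))

def soft_coverage_grad(signed_distance, softness):
    """d(coverage)/d(distance): nonzero near edges, which is what lets
    image-space losses propagate gradients back to triangle vertices."""
    s = soft_coverage(signed_distance, softness)
    return s * (1.0 - s) / max(softness, 1e-8)
```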

Citations: 0
MatUp: Repurposing Image Upsamplers for SVBRDFs
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15151
A. Gauthier, B. Kerbl, J. Levallois, R. Faury, J. M. Thiery, T. Boubekeur

We propose MatUp, an upsampling filter for material super-resolution. Our method takes a low-resolution SVBRDF as input and upscales its maps so that their renderings under various lighting conditions match upsampled renderings inferred in the radiance domain by pre-trained RGB upsamplers. We formulate our local filter as a compact multilayer perceptron (MLP), which acts on a small window of the input SVBRDF and is optimized using a data-fitting loss defined over upsampled radiance at various locations. This optimization is performed entirely at the scale of a single, independent material. In doing so, MatUp leverages the reconstruction capabilities that pre-trained RGB models acquire over large collections of natural images and provides regularization over self-similar structures. In particular, our lightweight neural filter avoids retraining complex architectures from scratch or accessing any large collection of low/high-resolution material pairs, which do not actually exist at the scale RGB upsamplers are trained with. As a result, MatUp provides fine and coherent details in the upscaled material maps, as shown in our extensive evaluation.
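A rough sketch of the per-material optimization the abstract describes, with a toy Lambertian renderer standing in for the full differentiable SVBRDF rendering, and with `target_radiance` assumed to come from a frozen, pre-trained RGB upsampler applied to low-resolution renderings (all names and layer sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

def render(maps, light_dir):
    """Toy Lambertian stand-in for the differentiable renderer: albedo
    (channels 0-2) times the clamped cosine with a per-texel normal
    (channels 3-5). The actual method renders a full SVBRDF."""
    albedo, normal = maps[..., :3], maps[..., 3:6]
    n = normal / normal.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    cos = (n * light_dir).sum(-1, keepdim=True).clamp_min(0.0)
    return albedo * cos

class MatUpFilter(nn.Module):
    """Compact per-material MLP acting on a small window of the
    low-resolution SVBRDF; layer sizes are illustrative."""
    def __init__(self, window=3, channels=6, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window * window * channels, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, channels),
        )

    def forward(self, windows):      # (N, window*window*channels)
        return self.net(windows)     # (N, channels) upscaled texels

def fit_step(filt, opt, windows, target_radiance, light_dir):
    """One data-fitting step: renderings of the upscaled maps should
    match the radiance-domain upsampling of the low-res rendering."""
    opt.zero_grad()
    up_maps = filt(windows)
    loss = (render(up_maps, light_dir) - target_radiance).pow(2).mean()
    loss.backward()
    opt.step()
    return loss.item()
```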

Citations: 0
Lossless Basis Expansion for Gradient-Domain Rendering
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15153
Q. Fang, T. Hachisuka

Gradient-domain rendering uses difference estimates with shift mapping to reduce variance in Monte Carlo rendering. Such difference estimates are effective under the assumption that the pixels being differenced have similar integrands. This assumption is often violated because spatially varying BSDFs with material maps are common, potentially resulting in a very different integrand per pixel. We introduce an extension of gradient-domain rendering, based on basis expansion, that effectively supports such per-pixel variation in BSDFs. Basis expansion for BSDFs has been used extensively in other rendering problems, where the goal is to approximate a given BSDF by a weighted sum of predefined basis functions. We instead utilize a lossless basis expansion, representing a BSDF without any approximation by adding back the difference remaining after the original basis expansion. This lossless expansion allows us to cancel more terms via shift mapping, resulting in low-variance difference estimates even with per-pixel BSDF variation. We also extend the Poisson reconstruction process to support this basis expansion. Regular gradient-domain rendering can be expressed as a special case of our extension in which the basis is simply the BSDF per pixel (i.e., no basis expansion). We provide proof-of-concept experiments and showcase the effectiveness of our method for scenes with highly varying material maps. Our results show noticeable improvement over regular gradient-domain rendering under both L¹ and L² reconstructions. The resulting formulation via basis expansion essentially serves as a new way to reuse paths among pixels in the presence of per-pixel variation.
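The losslessness itself is exact by construction: the residual is defined as whatever the weighted basis sum misses. A minimal sketch of that split (the variance benefit comes from how shift mapping cancels the shared basis terms between neighboring pixels, which is not shown here):

```python
def lossless_expansion(f, bases, weights, x):
    """Split f exactly as  f(x) = sum_k w_k * b_k(x) + r(x),  with the
    residual defined as  r(x) := f(x) - sum_k w_k * b_k(x).  Nothing is
    approximated; the split only changes which terms a shift mapping
    can cancel between a base path and its offset path."""
    approx = sum(w * b(x) for w, b in zip(weights, bases))
    residual = f(x) - approx
    return approx, residual

# Sanity check with arbitrary toy functions: the two parts always
# reassemble to the original value.
f = lambda x: 0.7 * x + 0.1
bases = [lambda x: x, lambda x: 1.0]
approx, residual = lossless_expansion(f, bases, [0.5, 0.2], 0.3)
assert abs((approx + residual) - f(0.3)) < 1e-12
```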

Citations: 0
VMF Diffuse: A unified rough diffuse BRDF
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15149
Eugene d'Eon, Andrea Weidlich

We present a practical analytic BRDF that approximates scattering from a generalized microfacet volume with a von Mises-Fisher NDF. Our BRDF blends seamlessly from smooth Lambertian, through moderately rough height fields with Beckmann-like statistics, into the highly rough/porous behaviours that have been lacking from prior models. At maximum roughness, our model reduces to the recent Lambert-sphere BRDF. We validate our model by comparing it to simulations of scattering from geometries with randomly placed Lambertian spheres, and show an improvement over a rough Beckmann BRDF at very high roughness.
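For reference, the von Mises-Fisher distribution at the core of the model has a simple closed form on the sphere; a sketch of its density and the standard sampling routine in the numerically stable exp(κ(x−1)) parameterization (how the paper maps roughness to κ and builds the BRDF around it is not reproduced):

```python
import numpy as np

def vmf_pdf(cos_theta, kappa):
    """vMF density w.r.t. solid angle about the mean direction:
    kappa / (2*pi*(1 - exp(-2*kappa))) * exp(kappa*(cos_theta - 1)),
    an overflow-free rewrite of kappa*exp(kappa*x) / (4*pi*sinh(kappa))."""
    return kappa / (2.0 * np.pi * (1.0 - np.exp(-2.0 * kappa))) \
        * np.exp(kappa * (cos_theta - 1.0))

def sample_vmf(kappa, u1, u2):
    """Draw a direction about +Z from two uniform numbers in [0, 1)."""
    cos_theta = 1.0 + np.log(u1 + (1.0 - u1) * np.exp(-2.0 * kappa)) / kappa
    sin_theta = np.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    phi = 2.0 * np.pi * u2
    return np.array([sin_theta * np.cos(phi),
                     sin_theta * np.sin(phi),
                     cos_theta])
```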

Citations: 0
A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15147
Y. Poirier-Ginter, A. Gauthier, J. Phillip, J.-F. Lalonde, G. Drettakis

Relighting radiance fields is severely underconstrained for multi-view data, which is most often captured under a single illumination condition; it is especially hard for full scenes containing multiple objects. We introduce a method to create relightable radiance fields from such single-illumination data by exploiting priors extracted from 2D image diffusion models. We first fine-tune a 2D diffusion model on a multi-illumination dataset conditioned on light direction, allowing us to augment a single-illumination capture into a realistic, though possibly inconsistent, multi-illumination dataset with directly defined light directions. We use this augmented data to create a relightable radiance field represented by 3D Gaussian splats. To allow direct control of light direction for low-frequency lighting, we represent appearance with a multi-layer perceptron parameterized on light direction. To enforce multi-view consistency and overcome inaccuracies, we optimize a per-image auxiliary feature vector. We show results on synthetic and real multi-view data under single illumination, demonstrating that our method successfully exploits 2D diffusion-model priors to allow realistic 3D relighting of complete scenes.
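A sketch of what a light-direction-parameterized appearance head for 3D Gaussian splats could look like, with the per-image auxiliary vector concatenated in to absorb inconsistencies in the augmented data; dimensions and activations are assumptions, not the paper's:

```python
import torch
import torch.nn as nn

class RelightableAppearance(nn.Module):
    """Per-splat color as a function of a learned per-Gaussian feature,
    the light direction, and a per-image auxiliary feature vector."""
    def __init__(self, feat_dim=32, aux_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + 3 + aux_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),   # RGB in [0, 1]
        )

    def forward(self, gaussian_feat, light_dir, aux):
        # gaussian_feat: (N, feat_dim), light_dir: (N, 3), aux: (N, aux_dim)
        return self.net(torch.cat([gaussian_feat, light_dir, aux], dim=-1))
```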

Citations: 0
Realistic Facial Age Transformation with 3D Uplifting
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15146
X. Li, G. C. Guarnera, A. Lin, A. Ghosh

While current facial re-ageing methods can produce realistic results, they focus purely on 2D age transformation. In this work, we present an approach that transforms a person's age in both facial appearance and shape across different ages while preserving identity. We employ an α-(de)blending diffusion network with an age-to-α transformation to generate coarse structural changes such as wrinkles. Additionally, we edit biophysical skin properties, including melanin and hemoglobin, to simulate skin-color changes, producing realistic re-ageing results from ages 10 to 80. We also propose a geometric neural network that alters the coarse-scale facial geometry according to age, followed by a lightweight and efficient network that adds appropriate skin displacement on top of the coarse geometry. Both qualitative and quantitative comparisons show that our method outperforms current state-of-the-art approaches.

Citations: 0
Bridge Sampling for Connections via Multiple Scattering Events
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15160
Vincent Schüßler, Johannes Hanika, Carsten Dachsbacher

Explicit sampling of, and connecting to, light sources is often essential for reducing variance in Monte Carlo rendering. In dense, forward-scattering participating media its benefit declines, as significant transport happens over longer multiple-scattering paths around the straight connection to the light. Sampling these paths is challenging, as their contribution is shaped by the product of reciprocal squared-distance terms and the phase functions. Previous work demonstrates that sampling several of these terms jointly is crucial. However, these methods are tied to low-order scattering or struggle with highly peaked phase functions.

We present a method for sampling a bridge: a subpath of arbitrary vertex count connecting two vertices. Its probability density is proportional to all phase functions at inner vertices and reciprocal squared distance terms. To achieve this, we importance sample the phase functions first, and subsequently all distances at once. For the latter, we sample an independent, preliminary distance for each edge of the bridge, and afterwards scale the bridge such that it matches the connection distance. The scale factor can be marginalized out analytically to obtain the probability density of the bridge. This approach leads to a simple algorithm and can construct bridges of any vertex count. For the case of one or two inserted vertices, we also show an alternative without scaling or marginalization.

For practical path sampling, we present a method to sample the number of bridge vertices, whose distribution depends on the connection distance, the phase function, and the collision coefficient. While our importance sampling treats media as homogeneous, we demonstrate its effectiveness on heterogeneous media.
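A heavily simplified sketch of the scaling construction for the edge lengths alone, assuming exponential preliminary distances and a straight chain so the end-to-end span is just the sum of edge lengths (the actual method walks pre-sampled phase-function directions and marginalizes the scale factor analytically to obtain the bridge pdf):

```python
import numpy as np

def sample_bridge_lengths(num_edges, connection_dist, rng):
    """Draw an independent preliminary length per bridge edge, then
    scale all of them uniformly so the bridge spans exactly the
    distance between the two vertices being connected."""
    prelim = rng.exponential(scale=1.0, size=num_edges)
    prelim_span = prelim.sum()           # straight-chain simplification
    s = connection_dist / prelim_span    # uniform scale factor
    return prelim * s, s

rng = np.random.default_rng(0)
lengths, scale = sample_bridge_lengths(num_edges=4, connection_dist=2.5, rng=rng)
assert abs(lengths.sum() - 2.5) < 1e-12
```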

Citations: 0
Stereo-consistent Screen Space Reflection
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15159
X. Wu, Y. Xu, L. Wang

Screen Space Reflection (SSR) reliably achieves highly efficient reflective effects, significantly enhancing users' sense of realism in real-time applications. However, when applied directly to stereo rendering, popular SSR algorithms introduce inconsistencies due to the differing information available to the left and right eyes. This inconsistency, though not consciously perceived, results in visual discomfort. This paper analyzes and demonstrates how screen-space geometries, fade boundaries, and reflection samples introduce inconsistent cues. Exploiting the complementary nature of the two views' screen information, we introduce a stereo-aware SSR method to alleviate the visual discomfort caused by screen-space disparities. By contrasting our stereo-aware SSR with conventional SSR and ray-traced results, we showcase its effectiveness in mitigating the inconsistencies stemming from screen-space differences while introducing affordable performance overhead for real-time rendering.

Citations: 0
Neural SSS: Lightweight Object Appearance Representation
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15158
T. TG, D. M. Tran, H. W. Jensen, R. Ramamoorthi, J. R. Frisvad

We present a method for capturing the BSSRDF (bidirectional scattering-surface reflectance distribution function) of arbitrary geometry with a neural network. We demonstrate how a compact neural network can represent the full 8-dimensional light transport within an object, including heterogeneous scattering. We develop an efficient rendering method using importance sampling that is able to render complex translucent objects under arbitrary lighting. Our method can also leverage the common planar half-space assumption, which allows one BSSRDF model to be used across a variety of geometries. Our results demonstrate that we can render heterogeneous translucent objects under arbitrary lighting and obtain results that match references rendered using volumetric path tracing.
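A sketch of the shape such a network might take, treating the 8-D BSSRDF as a map from two surface positions and two directions (each 2-D in a local parameterization) to a non-negative RGB transport value; the encoding, sizes, and activations are assumptions:

```python
import torch
import torch.nn as nn

class NeuralBSSRDF(nn.Module):
    """Compact stand-in for an 8-D BSSRDF: (x_i, w_i, x_o, w_o) -> RGB."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(8, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),  # non-negative transport
        )

    def forward(self, xi, wi, xo, wo):            # each (N, 2)
        return self.net(torch.cat([xi, wi, xo, wo], dim=-1))
```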

Citations: 0
Patch Decomposition for Efficient Mesh Contours Extraction
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15154
P. Tsiapkolis, P. Bénard

Object-space occluding contours of triangular meshes (a.k.a. mesh contours) are at the core of many methods in computer graphics and computational geometry. A number of hierarchical data structures have been proposed to accelerate their computation on the CPU, but they do not map well to the GPU for real-time applications such as video games. We show that a simple, flat data structure composed of patches, each bounded by a normal cone and a bounding sphere, can reach this goal, provided it is constructed to maximize the probability that a patch is culled over all viewpoints. We derive a heuristic metric to efficiently estimate this probability, and present a greedy, bottom-up algorithm that constructs patches by grouping mesh edges according to this metric. In addition, we propose an effective way of computing their bounding spheres. We demonstrate through extensive experiments that this data structure achieves performance similar to the state of the art on the CPU while being perfectly adapted to the GPU, leading to speedups of up to 5×.
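The culling such a structure enables rests on a standard conservative test: if, over the whole bounding sphere, every normal in the patch's cone faces the viewpoint, or every normal faces away, the patch cannot contain a contour edge. A sketch of that test (the paper's heuristic for estimating the cull probability over all viewpoints is not reproduced):

```python
import numpy as np

def patch_may_contain_contour(cone_axis, cone_half_angle,
                              sphere_center, sphere_radius, eye):
    """Conservative normal-cone test: return False when the patch is
    provably all front-facing or all back-facing from `eye`."""
    to_eye = eye - sphere_center
    dist = np.linalg.norm(to_eye)
    if dist <= sphere_radius:        # eye inside the bound: keep patch
        return True
    v = to_eye / dist
    # Angle between the mean view direction and the cone axis, padded by
    # the cone's half-angle and the angular radius of the sphere.
    angle = np.arccos(np.clip(np.dot(v, cone_axis), -1.0, 1.0))
    pad = cone_half_angle + np.arcsin(sphere_radius / dist)
    if angle + pad < np.pi / 2.0:    # all normals face the eye
        return False
    if angle - pad > np.pi / 2.0:    # all normals face away
        return False
    return True                      # cone straddles 90 degrees: keep
```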

Citations: 0