
Computer Graphics Forum: Latest Publications

VMF Diffuse: A unified rough diffuse BRDF
IF 2.7 · CAS Zone 4, Computer Science · Q2 Computer Science, Software Engineering · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15149
Eugene d'Eon, Andrea Weidlich

We present a practical analytic BRDF that approximates scattering from a generalized microfacet volume with a von Mises-Fisher NDF. Our BRDF blends seamlessly from smooth Lambertian, through moderately rough height fields with Beckmann-like statistics, into highly rough/porous behaviours that prior models have lacked. At maximum roughness, our model reduces to the recent Lambert-sphere BRDF. We validate our model by comparing against simulations of scattering from geometries with randomly placed Lambertian spheres, showing an improvement over a rough Beckmann BRDF at very high roughness.
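
For illustration, the von Mises-Fisher NDF the model builds on can be sampled with the standard inverse-CDF method. The sketch below assumes a mean direction of +Z and treats the concentration κ as the roughness control (low κ corresponding to the highly rough/porous end of the blend); it is not the paper's code.

```python
import math
import random

def sample_vmf_normal(kappa, rng=random):
    """Draw a microfacet normal from a von Mises-Fisher distribution on the
    sphere, centred on +Z. High kappa concentrates normals near +Z (smooth
    surface); kappa -> 0 spreads them uniformly (the rough/porous limit).
    Standard inverse-CDF sampling, not the authors' implementation."""
    xi1, xi2 = rng.random(), rng.random()
    if kappa < 1e-6:
        cos_theta = 1.0 - 2.0 * xi1  # uniform-sphere limit
    else:
        cos_theta = 1.0 + math.log(xi1 + (1.0 - xi1) * math.exp(-2.0 * kappa)) / kappa
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    phi = 2.0 * math.pi * xi2
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)
```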

Citations: 0
A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis
IF 2.7 · CAS Zone 4, Computer Science · Q2 Computer Science, Software Engineering · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15147
Y. Poirier-Ginter, A. Gauthier, J. Phillip, J.-F. Lalonde, G. Drettakis

Relighting radiance fields is severely underconstrained for multi-view data, which is most often captured under a single illumination condition; it is especially hard for full scenes containing multiple objects. We introduce a method to create relightable radiance fields from such single-illumination data by exploiting priors extracted from 2D image diffusion models. We first fine-tune a 2D diffusion model on a multi-illumination dataset conditioned on light direction, allowing us to augment a single-illumination capture into a realistic, but possibly inconsistent, multi-illumination dataset with directly defined light directions. We use this augmented data to create a relightable radiance field represented by 3D Gaussian splats. To allow direct control of light direction for low-frequency lighting, we represent appearance with a multi-layer perceptron parameterized on light direction. To enforce multi-view consistency and overcome inaccuracies, we optimize a per-image auxiliary feature vector. We show results on synthetic and real multi-view data under single illumination, demonstrating that our method successfully exploits 2D diffusion model priors to allow realistic 3D relighting of complete scenes.
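
As a sketch of the appearance model described above: a small MLP that maps a per-Gaussian feature, a light direction, and a learned per-image auxiliary vector to RGB. All names and layer sizes here are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class RelightableAppearanceMLP(nn.Module):
    """Hypothetical appearance head for relightable 3D Gaussian splats:
    maps (per-Gaussian feature, light direction, per-image auxiliary
    vector) to an RGB colour. Sizes are illustrative, not the authors'."""

    def __init__(self, feat_dim=32, aux_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + 3 + aux_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, gaussian_feat, light_dir, aux):
        # light_dir: unit vector conditioning low-frequency lighting;
        # aux: per-image vector optimized to absorb cross-view inconsistency.
        x = torch.cat([gaussian_feat, light_dir, aux], dim=-1)
        return torch.sigmoid(self.net(x))
```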

Citations: 0
Realistic Facial Age Transformation with 3D Uplifting
IF 2.7 · CAS Zone 4, Computer Science · Q2 Computer Science, Software Engineering · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15146
X. Li, G. C. Guarnera, A. Lin, A. Ghosh

While current facial re-ageing methods can produce realistic results, they focus purely on 2D age transformation. In this work, we present an approach that transforms a person's age in both facial appearance and shape across different ages while preserving their identity. We employ an α-(de)blending diffusion network with an age-to-α transformation to generate coarse structural changes, such as wrinkles. Additionally, we edit biophysical skin properties, including melanin and hemoglobin, to simulate skin color changes, producing realistic re-ageing results from ages 10 to 80 years. We also propose a geometric neural network that alters the coarse-scale facial geometry according to age, followed by a lightweight and efficient network that adds appropriate skin displacement on top of the coarse geometry. Both qualitative and quantitative comparisons show that our method outperforms current state-of-the-art approaches.
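
For context, α-(de)blending diffusion operates on straight-line blends between two states, so age control reduces to choosing α. The toy sketch below assumes a linear age-to-α mapping over the stated 10-80 year range; the paper's actual transformation may differ.

```python
def age_to_alpha(age, age_min=10.0, age_max=80.0):
    """Hypothetical linear mapping of a target age to the blending parameter
    alpha in [0, 1]; the paper covers ages 10-80, the linear form is a guess."""
    return min(max((age - age_min) / (age_max - age_min), 0.0), 1.0)

def alpha_blend(x0, x1, alpha):
    """Alpha-(de)blending diffusion is trained on such straight-line blends
    between a source state x0 and a target state x1, stepping along alpha."""
    return (1.0 - alpha) * x0 + alpha * x1
```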

Citations: 0
Bridge Sampling for Connections via Multiple Scattering Events
IF 2.7 · CAS Zone 4, Computer Science · Q2 Computer Science, Software Engineering · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15160
Vincent Schüßler, Johannes Hanika, Carsten Dachsbacher

Explicit sampling of and connecting to light sources is often essential for reducing variance in Monte Carlo rendering. In dense, forward-scattering participating media, its benefit declines, as significant transport happens over longer multiple-scattering paths around the straight connection to the light. Sampling these paths is challenging, as their contribution is shaped by the product of reciprocal squared distance terms and the phase functions. Previous work demonstrates that sampling several of these terms jointly is crucial. However, these methods are tied to low-order scattering or struggle with highly-peaked phase functions.

We present a method for sampling a bridge: a subpath of arbitrary vertex count connecting two vertices. Its probability density is proportional to all phase functions at inner vertices and reciprocal squared distance terms. To achieve this, we importance sample the phase functions first, and subsequently all distances at once. For the latter, we sample an independent, preliminary distance for each edge of the bridge, and afterwards scale the bridge such that it matches the connection distance. The scale factor can be marginalized out analytically to obtain the probability density of the bridge. This approach leads to a simple algorithm and can construct bridges of any vertex count. For the case of one or two inserted vertices, we also show an alternative without scaling or marginalization.

For practical path sampling, we present a method to sample the number of bridge vertices whose distribution depends on the connection distance, the phase function, and the collision coefficient. While our importance sampling treats media as homogeneous we demonstrate its effectiveness on heterogeneous media.
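
A sketch of the bridge construction from the second paragraph above, assuming the edge directions have already been drawn by chained phase-function sampling and `sample_distance` is any independent per-edge distance sampler; this is illustrative pseudocode in the paper's spirit, not the authors' implementation.

```python
import numpy as np

def rotation_aligning(a, b):
    """Rodrigues rotation matrix taking direction a onto direction b."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v, c = np.cross(a, b), float(np.dot(a, b))
    if c < -1.0 + 1e-9:  # antiparallel: rotate pi about any orthogonal axis
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-9:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis = axis / np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
    return np.eye(3) + K + (K @ K) / (1.0 + c)

def sample_bridge(x_a, x_b, directions, sample_distance):
    """Build a bridge from x_a to x_b: sample one preliminary length per
    edge, then rigidly rotate and uniformly scale the subpath so it spans
    the connection exactly. The scale factor is what the paper marginalizes
    out analytically to obtain the bridge's probability density."""
    dirs = np.asarray(directions, dtype=float)
    lengths = np.array([sample_distance() for _ in dirs])
    disp = (lengths[:, None] * dirs).sum(axis=0)           # preliminary endpoint offset
    target = np.asarray(x_b, dtype=float) - np.asarray(x_a, dtype=float)
    scale = np.linalg.norm(target) / np.linalg.norm(disp)  # match connection distance
    R = rotation_aligning(disp, target)
    verts, p = [np.asarray(x_a, dtype=float)], np.asarray(x_a, dtype=float)
    for w, t in zip(dirs, lengths):
        p = p + scale * t * (R @ w)
        verts.append(p)
    return verts  # verts[-1] equals x_b up to floating-point error
```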

Citations: 0
Stereo-consistent Screen Space Reflection
IF 2.7 · CAS Zone 4, Computer Science · Q2 Computer Science, Software Engineering · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15159
X. Wu, Y. Xu, L. Wang

Screen Space Reflection (SSR) can reliably achieve highly efficient reflective effects, significantly enhancing users' sense of realism in real-time applications. However, when directly applied to stereo rendering, popular SSR algorithms lead to inconsistencies due to the differing information between the left and right eyes. This inconsistency, invisible to human vision, results in visual discomfort. This paper analyzes and demonstrates how screen-space geometries, fade boundaries, and reflection samples introduce inconsistent cues. Considering the complementary nature of screen information, we introduce a stereo-aware SSR method to alleviate visual discomfort caused by screen space disparities. By contrasting our stereo-aware SSR with conventional SSR and ray-traced results, we showcase the effectiveness of our approach in mitigating the inconsistencies stemming from screen space differences while introducing affordable performance overhead for real-time rendering.
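
For background on where the per-eye differences enter, below is a minimal conventional SSR march against a single eye's depth buffer (`project` and `depth` are assumed inputs). Because each eye marches against its own buffer, the two eyes can return different hits; the paper's stereo-aware method changes how such samples are chosen, which this baseline does not show.

```python
import numpy as np

def ssr_trace(p_vs, r_vs, depth, project, steps=64, step_len=0.05, thickness=0.02):
    """Minimal conventional screen-space reflection march for one eye.
    p_vs: view-space reflection origin; r_vs: reflected direction;
    depth: that eye's view-space depth buffer; project: maps a view-space
    point to (pixel_x, pixel_y, view_depth). Illustrative baseline only."""
    p = np.asarray(p_vs, dtype=float)
    for _ in range(steps):
        p = p + step_len * np.asarray(r_vs)
        px, py, z = project(p)
        if not (0 <= px < depth.shape[1] and 0 <= py < depth.shape[0]):
            return None                      # ray left the screen: fade/miss
        stored = depth[py, px]
        if z > stored and z - stored < thickness:
            return (px, py)                  # hit: reuse that pixel's colour
    return None
```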

Citations: 0
Neural SSS: Lightweight Object Appearance Representation
IF 2.7 · CAS Zone 4, Computer Science · Q2 Computer Science, Software Engineering · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15158
T. TG, D. M. Tran, H. W. Jensen, R. Ramamoorthi, J. R. Frisvad

We present a method for capturing the BSSRDF (bidirectional scattering-surface reflectance distribution function) of arbitrary geometry with a neural network. We demonstrate how a compact neural network can represent the full 8-dimensional light transport within an object including heterogeneous scattering. We develop an efficient rendering method using importance sampling that is able to render complex translucent objects under arbitrary lighting. Our method can also leverage the common planar half-space assumption, which allows it to represent one BSSRDF model that can be used across a variety of geometries. Our results demonstrate that we can render heterogeneous translucent objects under arbitrary lighting and obtain results that match the reference rendered using volumetric path tracing.
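
A minimal stand-in for the compact network described above: the full 8-dimensional transport (2-D incident position, 2-D incident direction, 2-D exit position, 2-D exit direction in some local parameterization) maps to an RGB transport value. Dimensions, activations, and depth are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class NeuralBSSRDF(nn.Module):
    """Illustrative 8-D-in, RGB-out network for subsurface light transport.
    Softplus keeps the predicted transport non-negative; the real model,
    its encoding, and its importance-sampling scheme are in the paper."""

    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(8, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),
        )

    def forward(self, x8):
        # x8: batch of 8-D (position_in, direction_in, position_out,
        # direction_out) queries in a local surface parameterization.
        return self.net(x8)
```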

Citations: 0
Patch Decomposition for Efficient Mesh Contours Extraction
IF 2.7 · CAS Zone 4, Computer Science · Q2 Computer Science, Software Engineering · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15154
P. Tsiapkolis, P. Bénard

Object-space occluding contours of triangular meshes (a.k.a. mesh contours) are at the core of many methods in computer graphics and computational geometry. A number of hierarchical data structures have been proposed to accelerate their computation on the CPU, but they do not map well to the GPU for real-time applications such as video games. We show that a simple, flat data structure composed of patches bounded by a normal cone and a bounding sphere can reach this goal, provided it is constructed to maximize the probability that a patch is culled over all viewpoints. We derive a heuristic metric to efficiently estimate this probability, and present a greedy, bottom-up algorithm that constructs patches by grouping mesh edges according to this metric. In addition, we propose an effective way of computing their bounding spheres. We demonstrate through extensive experiments that this data structure achieves performance similar to the state of the art on the CPU while also being perfectly suited to the GPU, leading to speedups of up to 5×.
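
The culling test such a patch enables can be sketched as follows: a patch whose normal cone is entirely front-facing or entirely back-facing over the bounding sphere's angular extent cannot contain an occluding contour. The thresholds below follow the usual cone argument and are assumptions for illustration, not the paper's exact formulas.

```python
import math
import numpy as np

def patch_may_contain_contour(eye, cone_axis, cone_half_angle, center, radius):
    """Conservative test for a patch bounded by a normal cone
    (cone_axis, cone_half_angle) and a bounding sphere (center, radius).
    Returns False when the patch can be culled from contour extraction."""
    axis = np.asarray(cone_axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    to_eye = np.asarray(eye, dtype=float) - np.asarray(center, dtype=float)
    dist = np.linalg.norm(to_eye)
    if dist <= radius:
        return True  # eye inside the bounding sphere: keep conservatively
    view_spread = math.asin(min(1.0, radius / dist))   # angular radius of sphere
    alpha = math.acos(float(np.clip(np.dot(axis, to_eye / dist), -1.0, 1.0)))
    spread = cone_half_angle + view_spread
    if alpha + spread < math.pi / 2.0:
        return False  # every normal faces the eye: fully front-facing, cull
    if alpha - spread > math.pi / 2.0:
        return False  # every normal faces away: fully back-facing, cull
    return True       # mixed facing possible: patch may hold a contour
```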

Citations: 0
Non-Orthogonal Reduction for Rendering Fluorescent Materials in Non-Spectral Engines
IF 2.7 · CAS Zone 4, Computer Science · Q2 Computer Science, Software Engineering · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15150
A. Fichet, L. Belcour, P. Barla

We propose a method to accurately handle fluorescence in a non-spectral (e.g., tristimulus) rendering engine, showcasing color-shifting and increased luminance effects. Core to our method is a principled reduction technique that encodes the reradiation into a low-dimensional matrix working in the space of the renderer's Color Matching Functions (CMFs). Our process is independent of a specific CMF set and allows for the addition of a non-visible ultraviolet band during light transport. Our representation visually matches full spectral light transport for measured fluorescent materials even for challenging illuminants.
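
One way to picture the reduction (an assumed least-squares sketch, not the paper's principled construction): lift a tristimulus input to spectra with a pseudo-inverse of the CMFs, apply the full spectral reradiation matrix, and reproject, yielding a 3×3 operator in CMF space.

```python
import numpy as np

def reduce_reradiation(R, cmfs):
    """Project a full spectral reradiation (re-emission) matrix R
    (n_out x n_in wavelength bins) into the 3-D space spanned by the
    renderer's colour matching functions `cmfs` (3 x n). Pseudo-inverse
    fit for illustration; the paper's reduction differs in detail."""
    C = np.asarray(cmfs, dtype=float)   # 3 x n
    C_pinv = np.linalg.pinv(C)          # n x 3, least-squares right inverse
    # Tristimulus-in -> tristimulus-out: lift to spectra, apply R, reproject.
    return C @ np.asarray(R, dtype=float) @ C_pinv   # 3 x 3 reduced matrix
```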

Citations: 0
Scaling Painting Style Transfer
IF 2.7 · CAS Zone 4, Computer Science · Q2 Computer Science, Software Engineering · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15155
Bruno Galerne, Lara Raad, José Lezama, Jean-Michel Morel

Neural style transfer (NST) is a deep learning technique that produces an unprecedentedly rich style transfer from a style image to a content image. It is particularly impressive when it comes to transferring style from a painting to an image. NST was originally achieved by solving an optimization problem to match the global statistics of the style image while preserving the local geometric features of the content image. The two main drawbacks of this original approach are that it is computationally expensive and that the resolution of the output images is limited by high GPU memory requirements. Many solutions have been proposed to both accelerate NST and produce images of larger size. However, our investigation shows that these accelerated methods all compromise the quality of the produced images in the context of painting style transfer. Indeed, transferring the style of a painting is a complex task involving features at different scales, from the color palette and compositional style to the fine brushstrokes and texture of the canvas. This paper provides a solution to the original global optimization for ultra-high resolution (UHR) images, enabling multiscale NST at unprecedented image sizes. This is achieved by spatially localizing the computation of each forward and backward pass through the VGG network. Extensive qualitative and quantitative comparisons, as well as a perceptual study, show that our method produces style transfer of unmatched quality for such high-resolution painting styles. Through careful comparison, we show that state-of-the-art fast methods are still prone to artifacts, suggesting that fast painting style transfer remains an open problem.
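
The memory-side idea can be illustrated by accumulating global style statistics tile by tile, so the UHR image never passes through the network whole. The sketch below computes a Gram matrix this way; the paper's spatially localized forward/backward passes additionally handle receptive-field effects at tile borders, which this illustration omits.

```python
import torch

def gram_over_tiles(features_fn, image, tile=512):
    """Accumulate a global Gram (style) matrix from features computed tile
    by tile. features_fn: any feature extractor (e.g. a VGG slice) taking
    and returning 4-D tensors; image: 1 x 3 x H x W. Illustrative sketch."""
    _, _, H, W = image.shape
    gram, count = None, 0
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            f = features_fn(image[:, :, y:y + tile, x:x + tile])  # 1 x C x h x w
            f = f.flatten(2).squeeze(0)                           # C x (h*w)
            g = f @ f.t()                                         # C x C partial Gram
            gram = g if gram is None else gram + g
            count += f.shape[1]
    return gram / count  # feature-count-normalized global Gram matrix
```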

Citations: 0
Neural Histogram-Based Glint Rendering of Surfaces With Spatially Varying Roughness
IF 2.7 · CAS Zone 4, Computer Science · Q2 Computer Science, Software Engineering · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15157
I. Shah, L. E. Gamboa, A. Gruson, P. J. Narayanan

The complex, glinty appearance of detailed normal-mapped surfaces at different scales requires expensive per-pixel Normal Distribution Function computations. Moreover, large light sources further compound this integration and increase the noise in the Monte Carlo renderer. Specialized rendering techniques that explicitly express the underlying normal distribution have been developed to improve performance for glinty surfaces controlled by a fixed material roughness. We present a new method that supports spatially varying roughness based on a neural histogram that computes per-pixel NDFs with arbitrary positions and sizes. Our representation is both memory and compute efficient. Additionally, we fully integrate direct illumination for all light directions in constant time. Our approach decouples roughness and normal distribution, allowing the live editing of the spatially varying roughness of complex normal-mapped objects. We demonstrate that our approach improves on previous work by achieving smaller footprints while offering GPU-friendly computation and compact representation.
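
A toy version of the quantity the network predicts: a binned NDF (a histogram of projected normal directions) over all microgeometry normals inside one pixel footprint. The paper replaces such explicit binning with a neural representation queryable at arbitrary footprint positions and sizes; the binning scheme below is an assumption for illustration.

```python
import numpy as np

def footprint_ndf_histogram(normals, n_bins=16):
    """Bin the xy-projection of unit normals falling inside one pixel
    footprint onto a regular grid over [-1, 1]^2, then normalize.
    normals: array of shape (k, 3). Illustrative only."""
    n = np.asarray(normals, dtype=float)
    ij = np.clip(((n[:, :2] + 1.0) * 0.5 * n_bins).astype(int), 0, n_bins - 1)
    hist = np.zeros((n_bins, n_bins))
    np.add.at(hist, (ij[:, 1], ij[:, 0]), 1.0)  # accumulate per-bin counts
    return hist / max(1, len(n))
```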

Citations: 0