Differentiable rasterization changes the standard formulation of primitive rasterization, enabling gradient flow from a pixel to its underlying triangles, by using distribution functions in different stages of rendering to create a “soft” version of the original rasterizer. However, choosing the softening function that ensures the best performance and convergence to a desired goal requires trial and error. Previous work has analyzed and compared several combinations of softening operations. In this work, we take it a step further and, instead of making a combinatorial choice of softening operations, parameterize the continuous space of common softening operations. We study meta-learning tunable softness functions over a set of inverse rendering tasks (2D and 3D shape, pose, and occlusion) so that they generalize to new, unseen differentiable rendering tasks with optimal softness.
{"title":"Learning to Rasterize Differentiably","authors":"C. Wu, H. Mailee, Z. Montazeri, T. Ritschel","doi":"10.1111/cgf.15145","DOIUrl":"10.1111/cgf.15145","url":null,"abstract":"<p>Differentiable rasterization changes the standard formulation of primitive rasterization — by enabling gradient flow from a pixel to its underlying triangles — using distribution functions in different stages of rendering, creating a “soft” version of the original rasterizer. However, choosing the optimal softening function that ensures the best performance and convergence to a desired goal requires trial and error. Previous work has analyzed and compared several combinations of softening. In this work, we take it a step further and, instead of making a combinatorial choice of softening operations, parameterize the continuous space of common softening operations. We study meta-learning tunable softness functions over a set of inverse rendering tasks (2D and 3D shape, pose and occlusion) so it generalizes to new and unseen differentiable rendering tasks with optimal softness.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141779334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Gauthier, B. Kerbl, J. Levallois, R. Faury, J. M. Thiery, T. Boubekeur
We propose MatUp, an upsampling filter for material super-resolution. Our method takes as input a low-resolution SVBRDF and upscales its maps so that their rendering under various lighting conditions fits upsampled renderings inferred in the radiance domain with pre-trained RGB upsamplers. We formulate our local filter as a compact multilayer perceptron (MLP), which acts on a small window of the input SVBRDF and is optimized using a data-fitting loss defined over upsampled radiance at various locations. This optimization is performed entirely at the scale of a single, independent material. In doing so, MatUp leverages the reconstruction capabilities acquired over large collections of natural images by pre-trained RGB models and provides regularization over self-similar structures. In particular, our lightweight neural filter avoids retraining complex architectures from scratch or accessing any large collection of low-/high-resolution material pairs, which do not actually exist at the scale at which RGB upsamplers are trained. As a result, MatUp provides fine and coherent details in the upscaled material maps, as shown in our extensive evaluation.
{"title":"MatUp: Repurposing Image Upsamplers for SVBRDFs","authors":"A. Gauthier, B. Kerbl, J. Levallois, R. Faury, J. M. Thiery, T. Boubekeur","doi":"10.1111/cgf.15151","DOIUrl":"10.1111/cgf.15151","url":null,"abstract":"<p>We propose M<span>at</span>U<span>p</span>, an upsampling filter for material super-resolution. Our method takes as input a low-resolution SVBRDF and upscales its maps so that their rendering under various lighting conditions fits upsampled renderings inferred in the radiance domain with pre-trained RGB upsamplers. We formulate our local filter as a compact Multilayer Perceptron (MLP), which acts on a small window of the input SVBRDF and is optimized using a data-fitting loss defined over upsampled radiance at various locations. This optimization is entirely performed at the scale of a single, independent material. Doing so, M<span>at</span>U<span>p</span> leverages the reconstruction capabilities acquired over large collections of natural images by pre-trained RGB models and provides regularization over self-similar structures. In particular, our light-weight neural filter avoids retraining complex architectures from scratch or accessing any large collection of low/high resolution material pairs – which do not actually exist at the scale RGB upsamplers are trained with. As a result, M<span>at</span>U<span>p</span> provides fine and coherent details in the upscaled material maps, as shown in the extensive evaluation we provide.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141785739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gradient-domain rendering utilizes difference estimates with shift mapping to reduce variance in Monte Carlo rendering. Such difference estimates are effective under the assumption that the pixels involved have similar integrands. This assumption is often violated because it is common to have spatially varying BSDFs with material maps, which potentially result in a very different integrand per pixel. We introduce an extension of gradient-domain rendering that effectively supports such per-pixel variation in BSDFs based on basis expansion. Basis expansion for BSDFs has been used extensively in other problems in rendering, where the goal is to approximate a given BSDF by a weighted sum of predefined basis functions. We instead utilize lossless basis expansion, representing a BSDF without any approximation by adding the remaining difference back to the basis expansion. This lossless basis expansion allows us to cancel more terms via shift mapping, resulting in low-variance difference estimates even with per-pixel BSDF variation. We also extend the Poisson reconstruction process to support this basis expansion. Regular gradient-domain rendering can be expressed as a special case of our extension, where the basis is simply the BSDF per pixel (i.e., no basis expansion). We provide proof-of-concept experiments and showcase the effectiveness of our method for scenes with highly varying material maps. Our results show noticeable improvement over regular gradient-domain rendering under both L1 and L2 reconstructions. The resulting formulation via basis expansion essentially serves as a new way of path reuse among pixels in the presence of per-pixel variation.
{"title":"Lossless Basis Expansion for Gradient-Domain Rendering","authors":"Q. Fang, T. Hachisuka","doi":"10.1111/cgf.15153","DOIUrl":"10.1111/cgf.15153","url":null,"abstract":"<p>Gradient-domain rendering utilizes difference estimates with shift mapping to reduce variance in Monte Carlo rendering. Such difference estimates are effective under the assumption that pixels for difference estimates have similar integrands. This assumption is often violated because it is common to have spatially varying BSDFs with material maps, which potentially result in a very different integrand per pixel. We introduce an extension of gradient-domain rendering that effectively supports such per-pixel variation in BSDFs based on basis expansion. Basis expansion for BSDFs has been used extensively in other problems in rendering, where the goal is to approximate a given BSDF by a weighted sum of predefined basis functions. We instead utilize lossless basis expansion, representing a BSDF without any approximation by adding the remaining difference in the original basis expansion. This lossless basis expansion allows us to cancel more terms via shift mapping, resulting in low variance difference estimates even with per-pixel BSDF variation. We also extend the Poisson reconstruction process to support this basis expansion. Regular gradient-domain rendering can be expressed as a special case of our extension, where the basis is simply the BSDF per pixel (i.e., no basis expansion). We provide proof-of-concept experiments and showcase the effectiveness of our method for scenes with highly varying material maps. Our results show noticeable improvement over regular gradient-domain rendering under both L<sup>1</sup> and L<sup>2</sup> reconstructions. The resulting formulation via basis expansion essentially serves as a new way of path reuse among pixels in the presence of per-pixel variation.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141785868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a practical analytic BRDF that approximates scattering from a generalized microfacet volume with a von Mises-Fisher NDF. Our BRDF seamlessly blends from smooth Lambertian, through moderately rough height fields with Beckmann-like statistics, and into highly rough/porous behaviours that have been lacking from prior models. At maximum roughness, our model reduces to the recent Lambert-sphere BRDF. We validate our model by comparing against simulations of scattering from geometries with randomly placed Lambertian spheres and show an improvement relative to a rough Beckmann BRDF with very high roughness.
{"title":"VMF Diffuse: A unified rough diffuse BRDF","authors":"Eugene d'Eon, Andrea Weidlich","doi":"10.1111/cgf.15149","DOIUrl":"10.1111/cgf.15149","url":null,"abstract":"<p>We present a practical analytic BRDF that approximates scattering from a generalized microfacet volume with a von Mises-Fischer NDF. Our BRDF seamlessly blends from smooth Lambertian, through moderately rough height fields with Beckmann-like statistics and into highly rough/porous behaviours that have been lacking from prior models. At maximum roughness, our model reduces to the recent Lambert-sphere BRDF. We validate our model by comparing to simulations of scattering from geometries with randomly-placed Lambertian spheres and show an improvement relative to a rough Beckmann BRDF with very high roughness.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141779332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Y. Poirier-Ginter, A. Gauthier, J. Phillip, J.-F. Lalonde, G. Drettakis
Relighting radiance fields is severely underconstrained for multi-view data, which is most often captured under a single illumination condition; it is especially hard for full scenes containing multiple objects. We introduce a method to create relightable radiance fields from such single-illumination data by exploiting priors extracted from 2D image diffusion models. We first fine-tune a 2D diffusion model on a multi-illumination dataset conditioned on light direction, allowing us to augment a single-illumination capture into a realistic, but possibly inconsistent, multi-illumination dataset with directly defined light directions. We use this augmented data to create a relightable radiance field represented by 3D Gaussian splats. To allow direct control of light direction for low-frequency lighting, we represent appearance with a multi-layer perceptron parameterized on light direction. To enforce multi-view consistency and overcome inaccuracies, we optimize a per-image auxiliary feature vector. We show results on synthetic and real multi-view data under single illumination, demonstrating that our method successfully exploits 2D diffusion model priors to enable realistic 3D relighting of complete scenes.
{"title":"A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis","authors":"Y. Poirier-Ginter, A. Gauthier, J. Phillip, J.-F. Lalonde, G. Drettakis","doi":"10.1111/cgf.15147","DOIUrl":"10.1111/cgf.15147","url":null,"abstract":"<p>Relighting radiance fields is severely underconstrained for multi-view data, which is most often captured under a single illumination condition; It is especially hard for full scenes containing multiple objects. We introduce a method to create relightable radiance fields using such single-illumination data by exploiting priors extracted from 2D image diffusion models. We first fine-tune a 2D diffusion model on a multi-illumination dataset conditioned by light direction, allowing us to augment a single-illumination capture into a realistic – but possibly inconsistent – multi-illumination dataset from directly defined light directions. We use this augmented data to create a relightable radiance field represented by 3D Gaussian splats. To allow direct control of light direction for low-frequency lighting, we represent appearance with a multi-layer perceptron parameterized on light direction. To enforce multi-view consistency and overcome inaccuracies we optimize a per-image auxiliary feature vector. We show results on synthetic and real multi-view data under single illumination, demonstrating that our method successfully exploits 2D diffusion model priors to allow realistic 3D relighting for complete scenes.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141779327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}