VMF Diffuse: A unified rough diffuse BRDF
Eugene d'Eon, Andrea Weidlich
Computer Graphics Forum 43(4), 2024. DOI: 10.1111/cgf.15149

We present a practical analytic BRDF that approximates scattering from a generalized microfacet volume with a von Mises–Fisher NDF. Our BRDF blends seamlessly from a smooth Lambertian surface, through moderately rough height fields with Beckmann-like statistics, to highly rough/porous behaviours that prior models have lacked. At maximum roughness, our model reduces to the recent Lambert-sphere BRDF. We validate our model against simulations of scattering from geometries of randomly placed Lambertian spheres and show an improvement over a rough Beckmann BRDF at very high roughness.
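The von Mises–Fisher distribution named in the abstract above has a simple closed form on the unit sphere, with density proportional to exp(κ μ·ω). The sketch below (plain NumPy; the helper names `vmf_pdf` and `sample_vmf` are illustrative, not from the paper) evaluates that density and draws samples by inverting the marginal distribution of cos θ about the mean direction μ. It illustrates the distribution itself only, not the paper's BRDF.

```python
import numpy as np

def vmf_pdf(w, mu, kappa):
    """Density of vMF(mu, kappa) on S^2: kappa / (4*pi*sinh(kappa)) * exp(kappa * mu.w).
    Uses sinh directly, so it is suitable for moderate kappa only."""
    c = kappa / (4.0 * np.pi * np.sinh(kappa))
    return c * np.exp(kappa * np.dot(w, mu))

def sample_vmf(mu, kappa, n, rng):
    """Draw n unit directions from vMF(mu, kappa).
    cos(theta) is sampled by inverting its marginal CDF in closed form,
    the azimuth uniformly, then the local frame is rotated so +z -> mu."""
    u = rng.random(n)
    cos_t = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    sin_t = np.sqrt(np.clip(1.0 - cos_t**2, 0.0, None))
    phi = 2.0 * np.pi * rng.random(n)
    # orthonormal basis (t, b, mu) around the mean direction
    a = np.array([1.0, 0.0, 0.0]) if abs(mu[2]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = np.cross(a, mu)
    t /= np.linalg.norm(t)
    b = np.cross(mu, t)
    x = sin_t * np.cos(phi)
    y = sin_t * np.sin(phi)
    return x[:, None] * t + y[:, None] * b + cos_t[:, None] * mu
```

As κ → 0 the samples approach a uniform spherical distribution, and large κ concentrates them around μ, which matches the smooth-to-rough blending role the NDF plays in the abstract.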
A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis
Y. Poirier-Ginter, A. Gauthier, J. Philip, J.-F. Lalonde, G. Drettakis
Computer Graphics Forum 43(4), 2024. DOI: 10.1111/cgf.15147

Relighting radiance fields is severely underconstrained for multi-view data, which is most often captured under a single illumination condition; it is especially hard for full scenes containing multiple objects. We introduce a method to create relightable radiance fields from such single-illumination data by exploiting priors extracted from 2D image diffusion models. We first fine-tune a 2D diffusion model on a multi-illumination dataset conditioned on light direction, allowing us to augment a single-illumination capture into a realistic – but possibly inconsistent – multi-illumination dataset with directly specified light directions. We use this augmented data to create a relightable radiance field represented by 3D Gaussian splats. To allow direct control of light direction for low-frequency lighting, we represent appearance with a multi-layer perceptron parameterized on light direction. To enforce multi-view consistency and overcome inaccuracies, we optimize a per-image auxiliary feature vector. We show results on synthetic and real multi-view data under single illumination, demonstrating that our method successfully exploits 2D diffusion-model priors to enable realistic 3D relighting of complete scenes.
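The appearance model described above – an MLP conditioned on light direction, with an optimizable per-image auxiliary vector to absorb inconsistencies in the diffusion-generated data – can be sketched minimally as follows. All layer sizes, the initialization, and the helper names are placeholder assumptions, not the authors' architecture (which is tied to the 3D Gaussian-splat representation); the point is only the input layout: light direction concatenated with a learned per-image latent.

```python
import numpy as np

rng = np.random.default_rng(1)

def init_mlp(sizes, rng):
    """He-initialized weights and zero biases for a fully connected net."""
    return [(rng.normal(0.0, np.sqrt(2.0 / m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Forward pass with ReLU hidden layers and a linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

# Hypothetical dimensions: 3-D light direction + 8-D per-image auxiliary
# feature vector -> RGB. The auxiliary vector would be optimized per
# training image alongside the network weights.
aux = rng.normal(0.0, 0.01, 8)
params = init_mlp([3 + 8, 32, 32, 3], rng)
light_dir = np.array([0.0, 0.0, 1.0])
rgb = mlp(params, np.concatenate([light_dir, aux]))
```

At test time, under this reading, the light direction is user-controlled while the auxiliary vector is fixed (or zeroed), which is what makes the per-image latent a correction term rather than part of the lighting control.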