Yohan Poirier-Ginter, Alban Gauthier, Julien Phillip, Jean-Francois Lalonde, George Drettakis
{"title":"利用多重照明合成进行辐射场再照明的扩散方法","authors":"Yohan Poirier-Ginter, Alban Gauthier, Julien Phillip, Jean-Francois Lalonde, George Drettakis","doi":"arxiv-2409.08947","DOIUrl":null,"url":null,"abstract":"Relighting radiance fields is severely underconstrained for multi-view data,\nwhich is most often captured under a single illumination condition; It is\nespecially hard for full scenes containing multiple objects. We introduce a\nmethod to create relightable radiance fields using such single-illumination\ndata by exploiting priors extracted from 2D image diffusion models. We first\nfine-tune a 2D diffusion model on a multi-illumination dataset conditioned by\nlight direction, allowing us to augment a single-illumination capture into a\nrealistic -- but possibly inconsistent -- multi-illumination dataset from\ndirectly defined light directions. We use this augmented data to create a\nrelightable radiance field represented by 3D Gaussian splats. To allow direct\ncontrol of light direction for low-frequency lighting, we represent appearance\nwith a multi-layer perceptron parameterized on light direction. To enforce\nmulti-view consistency and overcome inaccuracies we optimize a per-image\nauxiliary feature vector. 
We show results on synthetic and real multi-view data\nunder single illumination, demonstrating that our method successfully exploits\n2D diffusion model priors to allow realistic 3D relighting for complete scenes.\nProject site\nhttps://repo-sam.inria.fr/fungraph/generative-radiance-field-relighting/","PeriodicalId":501174,"journal":{"name":"arXiv - CS - Graphics","volume":"105 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis\",\"authors\":\"Yohan Poirier-Ginter, Alban Gauthier, Julien Phillip, Jean-Francois Lalonde, George Drettakis\",\"doi\":\"arxiv-2409.08947\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Relighting radiance fields is severely underconstrained for multi-view data,\\nwhich is most often captured under a single illumination condition; It is\\nespecially hard for full scenes containing multiple objects. We introduce a\\nmethod to create relightable radiance fields using such single-illumination\\ndata by exploiting priors extracted from 2D image diffusion models. We first\\nfine-tune a 2D diffusion model on a multi-illumination dataset conditioned by\\nlight direction, allowing us to augment a single-illumination capture into a\\nrealistic -- but possibly inconsistent -- multi-illumination dataset from\\ndirectly defined light directions. We use this augmented data to create a\\nrelightable radiance field represented by 3D Gaussian splats. To allow direct\\ncontrol of light direction for low-frequency lighting, we represent appearance\\nwith a multi-layer perceptron parameterized on light direction. To enforce\\nmulti-view consistency and overcome inaccuracies we optimize a per-image\\nauxiliary feature vector. 
We show results on synthetic and real multi-view data\\nunder single illumination, demonstrating that our method successfully exploits\\n2D diffusion model priors to allow realistic 3D relighting for complete scenes.\\nProject site\\nhttps://repo-sam.inria.fr/fungraph/generative-radiance-field-relighting/\",\"PeriodicalId\":501174,\"journal\":{\"name\":\"arXiv - CS - Graphics\",\"volume\":\"105 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Graphics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08947\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08947","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis
Relighting radiance fields is severely underconstrained for multi-view data,
which is most often captured under a single illumination condition; it is
especially hard for full scenes containing multiple objects. We introduce a
method to create relightable radiance fields using such single-illumination
data by exploiting priors extracted from 2D image diffusion models. We first
fine-tune a 2D diffusion model on a multi-illumination dataset conditioned by
light direction, allowing us to augment a single-illumination capture into a
realistic -- but possibly inconsistent -- multi-illumination dataset from
directly defined light directions. We use this augmented data to create a
relightable radiance field represented by 3D Gaussian splats. To allow direct
control of light direction for low-frequency lighting, we represent appearance
with a multi-layer perceptron parameterized on light direction. To enforce
multi-view consistency and overcome inaccuracies, we optimize a per-image
auxiliary feature vector. We show results on synthetic and real multi-view data
under single illumination, demonstrating that our method successfully exploits
2D diffusion model priors to allow realistic 3D relighting for complete scenes.
Project site: https://repo-sam.inria.fr/fungraph/generative-radiance-field-relighting/
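The appearance model described in the abstract — a multi-layer perceptron conditioned on light direction, with a per-image auxiliary feature vector to absorb inconsistencies in the diffusion-generated training images — can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: the layer sizes, input dimensions, and the exact way the per-image latent is injected are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def relu(x):
    return np.maximum(x, 0.0)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class AppearanceMLP:
    """Toy appearance MLP: maps a per-Gaussian feature, a light
    direction, and a per-image auxiliary latent to an RGB color.
    All dimensions are illustrative assumptions."""

    def __init__(self, feat_dim=32, latent_dim=8, hidden=64):
        in_dim = feat_dim + 3 + latent_dim  # feature + light dir + latent
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 3))
        self.b2 = np.zeros(3)

    def __call__(self, feat, light_dir, latent):
        # Normalize the light direction so conditioning is scale-free.
        light_dir = light_dir / np.linalg.norm(light_dir)
        x = np.concatenate([feat, light_dir, latent])
        h = relu(x @ self.w1 + self.b1)
        # Sigmoid keeps the predicted color in [0, 1].
        return sigmoid(h @ self.w2 + self.b2)


mlp = AppearanceMLP()
feat = rng.normal(size=32)          # stand-in for a Gaussian's feature
light = np.array([0.0, 0.0, 1.0])   # directly specified light direction
latent = np.zeros(8)                # per-image auxiliary vector (optimized in training)
rgb = mlp(feat, light, latent)
```

During training, the per-image latent would be optimized jointly with the MLP so that residual inconsistencies across the synthesized multi-illumination images are explained by the latent rather than corrupting the light-direction conditioning; at render time, relighting amounts to evaluating the MLP with a new light direction.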