A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis

Computer Graphics Forum · Impact Factor 2.7 · JCR Q2 (Computer Science, Software Engineering) · CAS Tier 4 (Computer Science) · Published: 2024-07-24 · DOI: 10.1111/cgf.15147
Y. Poirier-Ginter, A. Gauthier, J. Philip, J.-F. Lalonde, G. Drettakis
{"title":"A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis","authors":"Y. Poirier-Ginter,&nbsp;A. Gauthier,&nbsp;J. Phillip,&nbsp;J.-F. Lalonde,&nbsp;G. Drettakis","doi":"10.1111/cgf.15147","DOIUrl":null,"url":null,"abstract":"<p>Relighting radiance fields is severely underconstrained for multi-view data, which is most often captured under a single illumination condition; It is especially hard for full scenes containing multiple objects. We introduce a method to create relightable radiance fields using such single-illumination data by exploiting priors extracted from 2D image diffusion models. We first fine-tune a 2D diffusion model on a multi-illumination dataset conditioned by light direction, allowing us to augment a single-illumination capture into a realistic – but possibly inconsistent – multi-illumination dataset from directly defined light directions. We use this augmented data to create a relightable radiance field represented by 3D Gaussian splats. To allow direct control of light direction for low-frequency lighting, we represent appearance with a multi-layer perceptron parameterized on light direction. To enforce multi-view consistency and overcome inaccuracies we optimize a per-image auxiliary feature vector. We show results on synthetic and real multi-view data under single illumination, demonstrating that our method successfully exploits 2D diffusion model priors to allow realistic 3D relighting for complete scenes.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 4","pages":""},"PeriodicalIF":2.7000,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Graphics Forum","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/cgf.15147","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

Relighting radiance fields is severely underconstrained for multi-view data, which is most often captured under a single illumination condition; it is especially hard for full scenes containing multiple objects. We introduce a method to create relightable radiance fields from such single-illumination data by exploiting priors extracted from 2D image diffusion models. We first fine-tune a 2D diffusion model on a multi-illumination dataset conditioned on light direction, allowing us to augment a single-illumination capture into a realistic, but possibly inconsistent, multi-illumination dataset with directly defined light directions. We use this augmented data to create a relightable radiance field represented by 3D Gaussian splats. To allow direct control of light direction for low-frequency lighting, we represent appearance with a multi-layer perceptron parameterized on light direction. To enforce multi-view consistency and overcome inaccuracies, we optimize a per-image auxiliary feature vector. We show results on synthetic and real multi-view data captured under single illumination, demonstrating that our method successfully exploits 2D diffusion model priors to enable realistic 3D relighting of complete scenes.
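To make the appearance model described above more concrete, the following is a minimal PyTorch sketch (not the authors' code) of an MLP that maps per-Gaussian appearance features plus a light direction to RGB, together with a learned per-image auxiliary embedding intended to absorb inconsistencies across the diffusion-augmented views. All names, layer sizes, and feature dimensions are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class RelightableAppearanceMLP(nn.Module):
    """Hypothetical sketch of a direction-conditioned appearance MLP for
    3D Gaussian splats, with a per-image auxiliary vector optimized jointly.
    Architecture details are assumptions, not the paper's exact design."""

    def __init__(self, num_images, feat_dim=32, aux_dim=8, hidden=64):
        super().__init__()
        # One learnable auxiliary embedding per training image.
        self.aux = nn.Embedding(num_images, aux_dim)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3 + aux_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 3),
            nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, gaussian_feat, light_dir, image_idx):
        # gaussian_feat: (N, feat_dim) per-Gaussian appearance features
        # light_dir:     (3,) unit vector giving the desired light direction
        # image_idx:     0-dim long tensor indexing the training image
        n = gaussian_feat.shape[0]
        light = light_dir.expand(n, 3)
        aux = self.aux(image_idx).expand(n, -1)
        return self.mlp(torch.cat([gaussian_feat, light, aux], dim=-1))


# Usage: predict colors of 10k Gaussians for a novel light direction.
model = RelightableAppearanceMLP(num_images=120)
feats = torch.randn(10_000, 32)
rgb = model(feats, torch.tensor([0.0, 0.5, 0.866]), torch.tensor(0))
```

At test time one would drop or average the auxiliary embedding (since it exists only to explain per-image inconsistencies in the augmented training data) and vary `light_dir` directly to relight the scene.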

Source journal: Computer Graphics Forum (Engineering & Technology – Computer Science: Software Engineering)
CiteScore: 5.80
Self-citation rate: 12.00%
Articles per year: 175
Review time: 3-6 weeks
About the journal: Computer Graphics Forum is the official journal of Eurographics, published in cooperation with Wiley-Blackwell, and is a unique, international source of information for computer graphics professionals interested in graphics developments worldwide. It is now one of the leading journals for researchers, developers and users of computer graphics in both commercial and academic environments. The journal reports on the latest developments in the field throughout the world and covers all aspects of the theory, practice and application of computer graphics.