Recolouring deep images

Rob Pieké, Yanli Zhao, F. Arrizabalaga
DOI: 10.1145/3233085.3233095 (https://doi.org/10.1145/3233085.3233095)
Published in: Proceedings of the 8th Annual Digital Production Symposium, 2018-08-11

Abstract

This work describes in-progress research into methods for manipulating and/or correcting the colours of samples in deep images. Motivations for wanting this include, but are not limited to: a preference to minimise data footprints by rendering only deep alpha images, better colour manipulation tools in Nuke for 2D (i.e., not deep) images, and post-render denoising. The most naïve way to (re)colour deep images with 2D RGB images is via Nuke's DeepRecolor, which effectively projects the RGB colour of a 2D pixel onto each sample of the corresponding deep pixel: rgb_deep(x, y, z) = rgb_2d(x, y). This approach has many limitations: it introduces halos when applying depth-of-field as a post-process (see Figure 2 below), and edge artefacts where bright background objects can "spill" into the edges of foreground objects when other objects are composited between them (see Figure 1 above). The work by [Egstad et al. 2015] on OpenDCX is perhaps the most advanced we've seen presented in this area, but it still seems to lack broad adoption. Further, we continued to identify other issues/workflows, and thus decided to pursue our own blue-sky thinking about the overall problem space. Much of what we describe may be conceptually easy to solve by changing upstream departments' workflows (e.g., "just get lighting to split that out into a separate pass", etc.), but the practical challenges associated with these types of suggestions are often prohibitive as deadlines start looming.
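The projection rgb_deep(x, y, z) = rgb_2d(x, y) can be sketched in a few lines. The following is a hypothetical illustration (not Nuke's actual API; the function name, sample layout, and the choice to weight by per-sample alpha are our assumptions): a deep pixel is a front-to-back list of (z, alpha) samples, and the flat premultiplied colour is un-premultiplied by the pixel's accumulated alpha, then re-premultiplied by each sample's own alpha, so the recoloured samples composite back to the original flat pixel.

```python
# Hypothetical sketch of a DeepRecolor-style projection. Names and data
# layout are illustrative assumptions, not Nuke's API.

def deep_recolor(deep_alpha_samples, rgb2d, total_alpha):
    """Project a flat 2D colour onto every sample of a deep pixel.

    deep_alpha_samples: list of (z, alpha) tuples, ordered front to back.
    rgb2d: premultiplied (r, g, b) of the flat 2D pixel.
    total_alpha: accumulated alpha of the deep pixel (from front-to-back
                 'over' compositing of the samples).
    Returns a list of (z, alpha, premultiplied_rgb) samples.
    """
    if total_alpha <= 0.0:
        # Fully transparent pixel: nothing to project.
        return [(z, a, (0.0, 0.0, 0.0)) for z, a in deep_alpha_samples]
    # Un-premultiply the flat colour, then re-premultiply per sample so the
    # samples re-composite ('over', front to back) to the original colour.
    unpremult = tuple(c / total_alpha for c in rgb2d)
    return [(z, a, tuple(c * a for c in unpremult))
            for z, a in deep_alpha_samples]
```

Note that this is exactly where the limitations above come from: every sample inherits the same flat colour regardless of depth, so any depth-varying operation downstream (post-process depth-of-field, compositing objects in between) exposes the fact that the per-sample colours are not independent.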