{"title":"Recolouring deep images","authors":"Rob Pieké, Yanli Zhao, F. Arrizabalaga","doi":"10.1145/3233085.3233095","DOIUrl":null,"url":null,"abstract":"This work describes in-progress research to investigate methods for manipulating and/or correcting the colours of samples in deep images. Motivations for wanting this include, but are not limited to: a preference to minimise data footprints by only rendering deep alpha images, better colour manipulation tools in Nuke for 2D (i.e., not-deep) images, and post-render denoising. The most naïve way to (re)colour deep images with 2D RGB images is via Nuke's DeepRecolor. This effectively projects the RGB colour of a 2D pixel onto each sample of the corresponding deep pixel - rgbdeep(x, y, z) = rgb2d(x, y). This approach has many limitations: introducing halos when applying depth-of-field as a post-process (see Figure 2 below), and edge artefacts where bright background objects can \"spill\" into the edges of foreground objects when other objects are composited between them (see Figure 1 above). The work by [Egstad et al. 2015] on OpenDCX is perhaps the most advanced we've seen presented in this area, but it still seems to lack broad adoption. Further, we continued to identify other issues/workflows, and thus decided to pursue our own blue-sky thinking about the overall problem space. Much of what we describe may be conceptually easy to solve by changing upstream departments' workflows (e.g., \"just get lighting to split that out into a separate pass\", etc), but the practical challenges associated with these types of suggestions are often prohibitive as deadlines start looming.","PeriodicalId":378765,"journal":{"name":"Proceedings of the 8th Annual Digital Production Symposium","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 8th Annual Digital Production Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3233085.3233095","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This work describes in-progress research into methods for manipulating and/or correcting the colours of samples in deep images. Motivations for this include, but are not limited to: a preference to minimise data footprints by rendering only deep alpha images, better colour-manipulation tools in Nuke for 2D (i.e., not-deep) images, and post-render denoising. The most naïve way to (re)colour deep images with 2D RGB images is via Nuke's DeepRecolor, which effectively projects the RGB colour of a 2D pixel onto each sample of the corresponding deep pixel: rgb_deep(x, y, z) = rgb_2d(x, y). This approach has many limitations: it introduces halos when depth-of-field is applied as a post-process (see Figure 2 below), and edge artefacts where bright background objects can "spill" into the edges of foreground objects when other objects are composited between them (see Figure 1 above). The work by [Egstad et al. 2015] on OpenDCX is perhaps the most advanced we've seen presented in this area, but it still seems to lack broad adoption. Further, we continued to identify other issues and workflows, and thus decided to pursue our own blue-sky thinking about the overall problem space. Much of what we describe may be conceptually easy to solve by changing upstream departments' workflows (e.g., "just get lighting to split that out into a separate pass"), but the practical challenges associated with such suggestions are often prohibitive as deadlines start looming.
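To make the projection concrete, below is a minimal Python sketch of the naive DeepRecolor-style behaviour described above. This is not Nuke's actual implementation; the data layout (dicts mapping pixel coordinates to sample lists with 'z' and 'alpha' keys) and the premultiplication convention are assumptions made purely for illustration.

```python
from typing import Dict, List, Tuple

Pixel = Tuple[int, int]
Sample = dict  # hypothetical layout: keys 'z', 'alpha', and (after recolouring) 'rgb'

def recolour_deep(deep: Dict[Pixel, List[Sample]],
                  flat: Dict[Pixel, Tuple[float, float, float]]) -> None:
    """Naive projection: rgb_deep(x, y, z) = rgb_2d(x, y).

    Every sample of a deep pixel receives the same colour as the
    corresponding 2D pixel, premultiplied by the sample's own alpha.
    """
    for xy, samples in deep.items():
        r, g, b = flat.get(xy, (0.0, 0.0, 0.0))  # unpremultiplied 2D colour
        for s in samples:
            a = s['alpha']
            # The same colour is assigned regardless of depth z -- this
            # depth-independence is what produces the halo and spill
            # artefacts discussed above.
            s['rgb'] = (r * a, g * a, b * a)
```

Because every sample at a given (x, y) receives the same colour, any later operation that separates samples in depth, such as post-process depth-of-field or compositing another object between them, exposes the mismatch and yields the halo and edge-spill artefacts the abstract describes.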