A scheme for storing object ID manifests in openEXR images. P. Hillman. DOI: 10.1145/3233085.3233086

There are various approaches to storing numeric IDs as extra channels within CG-rendered images. Using these channels, individual objects can be selected and modified separately. To associate an object with a text string, a table or manifest is required that maps numeric IDs to text strings. This allows readable identification of ID-based selections, as well as the ability to make a selection using a text search. A scheme for storing this ID manifest is proposed that is independent of the approach used to store the IDs within the image. The total size of the raw strings within an ID manifest may be very large but often contains much repeated information. A novel compression scheme is therefore employed that significantly reduces the size of the manifest.
{"title":"A scheme for storing object ID manifests in openEXR images","authors":"P. Hillman","doi":"10.1145/3233085.3233086","DOIUrl":"https://doi.org/10.1145/3233085.3233086","url":null,"abstract":"There are various approaches to storing numeric IDs as extra channels within CG rendered images. Using these channels, individual objects can be selected and separately modified. To associate an object with a text string a table or manifest is required mapping numeric IDs to text strings. This allows readable identification of ID-based selections, as well as the ability to make a selection using a text search. A scheme for storage of this ID Manifest is proposed which is independent of the approach used to store the IDs within the image. The total size of the raw strings within an ID Manifest may be very large but often contains much repeated information. A novel compression scheme is therefore employed which significantly reduces the size of the manifest.","PeriodicalId":378765,"journal":{"name":"Proceedings of the 8th Annual Digital Production Symposium","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122132003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LibEE 2: enabling fast edits and evaluation. Stuart Bryson, E. Papp. DOI: 10.1145/3233085.3233089

The Premo animation platform [Gong et al. 2014] developed by DreamWorks utilized LibEE v1 [Watt et al. 2012] for high-performance graph evaluation. The animator experience required fast evaluation but did not require fast editing of the graph; LibEE v1, therefore, was never designed to support efficient edits. This talk presents an overview of how we developed LibEE v2 to enable fast editing of character rigs while maintaining or improving upon the speed of evaluation. Overall, LibEE v2 achieves a 100x speedup of authoring operations compared with LibEE v1.
{"title":"LibEE 2: enabling fast edits and evaluation","authors":"Stuart Bryson, E. Papp","doi":"10.1145/3233085.3233089","DOIUrl":"https://doi.org/10.1145/3233085.3233089","url":null,"abstract":"The Premo animation platform [Gong et al. 2014] developed by DreamWorks utilized LibEE v1 [Watt et al. 2012] for high performance graph evaluation. The animator experience required fast evaluation, but did not require fast editing of the graph. LibEE v1, therefore, was never designed to support efficient edits. This talk presents an overview of how we developed LibEE v2 to enable fast editing of character rigs while still maintaining or improving upon the speed of evaluation. Overall, LibEE v2 achieves a 100x speedup of authoring operations compared with LibEE v1.","PeriodicalId":378765,"journal":{"name":"Proceedings of the 8th Annual Digital Production Symposium","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122168989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Into the voyd: teleportation of light transport in Incredibles 2. Patrick Coleman, D. Peachey, T. Nettleship, Ryusuke Villemin, T. Jones. DOI: 10.1145/3233085.3233092

In Incredibles 2, a character named Voyd has the ability to create portals that connect two locations in space. A particular challenge for this film is the presence of portals in a number of fast-paced action sequences, with multiple characters and objects passing through them, causing multiple views of the scene to be visible in a single shot. To enable this effect while allowing production artists to focus on creative work, we have developed a system for rendering portals by solving for light transport inside a path tracer, as well as a suite of interactive tools for creating shots and animating characters and objects as they interact with and pass through portals. In addition, we have designed an effects animation pipeline for the art-directable creation of the boundary elements that make the portals read clearly and distinctively on screen.
{"title":"Into the voyd: teleportation of light transport in incredibles 2","authors":"Patrick Coleman, D. Peachey, T. Nettleship, Ryusuke Villemin, T. Jones","doi":"10.1145/3233085.3233092","DOIUrl":"https://doi.org/10.1145/3233085.3233092","url":null,"abstract":"In Incredibles 2, a character named Voyd has the ability to create portals that connect two locations in space. A particular challenge for this film is the presence of portals in a number of fast-paced action sequences with multiple characters and objects passing through them, causing multiple views of the scene to be visible in a single shot. To enable the production of this effect while allowing production artists to focus on creative work, we've developed a system that allows for the rendering of portals while solving for light transport inside a path tracer, as well as a suite of interactive tools for creating shots and animating characters and objects as they interact with and pass through portals. In addition, we've designed an effects animation pipeline that allows for the art-directible creation of boundary elements that allow artists to clearly show the presence of visually distinctive portals in a number of fast-paced action sequences.","PeriodicalId":378765,"journal":{"name":"Proceedings of the 8th Annual Digital Production Symposium","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127120219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud-based pipeline distribution for effective and secure remote workflows. Manne Öhrström, J. Tomlinson, Rudy Cortes, Satish Goda. DOI: 10.1145/3233085.3233096

We present a secure, cloud-based distribution system for just-in-time artist workflows built on the Shotgun Toolkit platform. We cover the original motivations behind this work, the challenges faced, and the lessons learned as the technology has come to unlock new patterns for managing where and how artists contribute on production. A case study of the technology in production at Pearl Studio is included, showing how the company uses the system to meet its distributed organizational needs and why adoption has been beneficial for its technological and business goals. We show how the system began as a means of downloading and caching individual pipeline components via an app store before organically evolving into a distribution mechanism for a studio's entire pipeline. We include real-world examples of these patterns in use by Toolkit clients and illustrate how this technology can be applied to cloud-based collaboration in a variety of ways.
{"title":"Cloud-based pipeline distribution for effective and secure remote workflows","authors":"Manne Öhrström, J. Tomlinson, Rudy Cortes, Satish Goda","doi":"10.1145/3233085.3233096","DOIUrl":"https://doi.org/10.1145/3233085.3233096","url":null,"abstract":"We present a secure, cloud based distribution system for just-in-time artist workflows built on the Shotgun Toolkit platform. We cover the original motivations behind this work, the challenges faced, and the lessons learned as the technology has come to unlock new patterns for managing where and how artists contribute on production. A case study of the technology and its use on production by Pearl Studio is included, showing how the company uses the system to meet their distributed organizational needs and why adoption has been beneficial for their technological and business goals. We show how the system began as a means of downloading and caching individual pipeline components via an app store, before organically evolving into a distribution mechanism for a studio's entire pipeline. We include real-world examples of these patterns that are in use by Toolkit clients and illustrate how this technology can be applied to cloud-based collaboration in a variety of ways.","PeriodicalId":378765,"journal":{"name":"Proceedings of the 8th Annual Digital Production Symposium","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123166716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Layering changes in a procedural grooming pipeline. Curtis Andrus. DOI: 10.1145/3233085.3233094

Due to MPC's departmental and multi-site nature, artists increasingly need to make changes to grooms at different points in the pipeline. We describe a new system built into Furtility (MPC's in-house grooming tool) to support these requests, providing a powerful tool for non-destructively layering changes on top of a base groom description.
{"title":"Layering changes in a procedural grooming pipeline","authors":"Curtis Andrus","doi":"10.1145/3233085.3233094","DOIUrl":"https://doi.org/10.1145/3233085.3233094","url":null,"abstract":"Due to MPC's departmental and multi-site nature, there is an increasing need from artists to make changes to grooms at different points in the pipeline. We describe a new system we've built into Furtility (MPC's in-house grooming tool) to support these requests, providing a powerful tool for non-destructively layering changes on top of a base groom description.","PeriodicalId":378765,"journal":{"name":"Proceedings of the 8th Annual Digital Production Symposium","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126476481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Just get on with it: a managed approach to AOV manipulation. Colin Alway, Patrick Nagle, G. Keech. DOI: 10.1145/3233085.3233091

It is 5 pm and you have just finished addressing the notes on your comp of the giant alien robot ship explosion, and you are looking forward to the end of the work day. Then, out of nowhere, your supervisor comes in and says, "The client called... they want a big change to the look of the CG... and they want it now." Depending on how the compositor managed their script, addressing the changes can range from mild discomfort to excruciating pain. Many of these problems come from the way that AOVs (Arbitrary Output Variables) are managed when grading CG. We present a system that uses a node-based graph in a novel way to manipulate AOVs, maintaining mathematical continuity so that grades can be shared between CG renders. This approach tames the complexity of changing the grading on a CG render, allowing the artist to focus on the task at hand rather than juggling complex channel arithmetic.
{"title":"Just get on with it: a managed approach to AOV manipulation","authors":"Colin Alway, Patrick Nagle, G. Keech","doi":"10.1145/3233085.3233091","DOIUrl":"https://doi.org/10.1145/3233085.3233091","url":null,"abstract":"It is 5 pm and you have just finished addressing the notes on your comp of the giant alien robot ship explosion and are looking forward to the end of the work day. Then out of nowhere your supervisor comes in and says \"The client called....they want a big change to the look of the CG....and they want it now\". Depending how the compositor managed their script, addressing the changes can range from mild discomfort to excruciating pain. Many of these problems come from the way that AOVs (Arbitrary Output Variables) are managed when grading CG. We present a system using a node based graph in a novel way to manipulate AOVs to maintain mathematical continuity which can be shared between CG renders. This approach normalizes the complexity of changing the grading on a CG render allowing the artist to focus on the task at hand and not juggling of complex channel arithmetics.","PeriodicalId":378765,"journal":{"name":"Proceedings of the 8th Annual Digital Production Symposium","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121392705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recolouring deep images. Rob Pieké, Yanli Zhao, F. Arrizabalaga. DOI: 10.1145/3233085.3233095

This work describes in-progress research into methods for manipulating and/or correcting the colours of samples in deep images. Motivations include, but are not limited to: a preference for minimising data footprints by rendering only deep alpha images, the better colour-manipulation tools Nuke offers for 2D (i.e., not-deep) images, and post-render denoising. The most naïve way to (re)colour deep images with 2D RGB images is via Nuke's DeepRecolor, which effectively projects the RGB colour of a 2D pixel onto each sample of the corresponding deep pixel: rgb_deep(x, y, z) = rgb_2d(x, y). This approach has many limitations: it introduces halos when depth-of-field is applied as a post-process (see Figure 2), and edge artefacts where bright background objects can "spill" into the edges of foreground objects when other objects are composited between them (see Figure 1). The work by [Egstad et al. 2015] on OpenDCX is perhaps the most advanced we've seen presented in this area, but it still seems to lack broad adoption. Further, we continued to identify other issues and workflows, and thus decided to pursue our own blue-sky thinking about the overall problem space. Much of what we describe may be conceptually easy to solve by changing upstream departments' workflows (e.g., "just get lighting to split that out into a separate pass"), but the practical challenges associated with such suggestions are often prohibitive as deadlines start looming.
{"title":"Recolouring deep images","authors":"Rob Pieké, Yanli Zhao, F. Arrizabalaga","doi":"10.1145/3233085.3233095","DOIUrl":"https://doi.org/10.1145/3233085.3233095","url":null,"abstract":"This work describes in-progress research to investigate methods for manipulating and/or correcting the colours of samples in deep images. Motivations for wanting this include, but are not limited to: a preference to minimise data footprints by only rendering deep alpha images, better colour manipulation tools in Nuke for 2D (i.e., not-deep) images, and post-render denoising. The most naïve way to (re)colour deep images with 2D RGB images is via Nuke's DeepRecolor. This effectively projects the RGB colour of a 2D pixel onto each sample of the corresponding deep pixel - rgbdeep(x, y, z) = rgb2d(x, y). This approach has many limitations: introducing halos when applying depth-of-field as a post-process (see Figure 2 below), and edge artefacts where bright background objects can \"spill\" into the edges of foreground objects when other objects are composited between them (see Figure 1 above). The work by [Egstad et al. 2015] on OpenDCX is perhaps the most advanced we've seen presented in this area, but it still seems to lack broad adoption. Further, we continued to identify other issues/workflows, and thus decided to pursue our own blue-sky thinking about the overall problem space. Much of what we describe may be conceptually easy to solve by changing upstream departments' workflows (e.g., \"just get lighting to split that out into a separate pass\", etc), but the practical challenges associated with these types of suggestions are often prohibitive as deadlines start looming.","PeriodicalId":378765,"journal":{"name":"Proceedings of the 8th Annual Digital Production Symposium","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123875538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstracting rigging concepts for a future proof framework design. J. R. Nieto, Charlie Banks, Ryan Chan. DOI: 10.1145/3233085.3233088

Several years ago, DNEG set out to build Loom, a new rigging framework, in a bid to improve the performance of our Maya animation rigs. This talk is an update on its development in light of the discontinuation of its original evaluation back-end, Fabric Engine. In particular, we describe the design choices that enabled us to achieve a DCC-agnostic rigging framework, allowing us to focus on the development of pure rigging concepts. We also describe how this setback prompted us to extend the framework to properly handle the deformation side of rigs, targeting memory efficiency, GPU/CPU memory interaction, and high-end performance optimizations.
{"title":"Abstracting rigging concepts for a future proof framework design","authors":"J. R. Nieto, Charlie Banks, Ryan Chan","doi":"10.1145/3233085.3233088","DOIUrl":"https://doi.org/10.1145/3233085.3233088","url":null,"abstract":"Several years ago DNEG set out to build Loom, a new rigging framework in a bid to improve the performance of our Maya animation rigs. This talk is an update on its development in the light of the discontinuation of its original evaluation back-end, Fabric Engine. In particular, we describe the design choices which enabled us to achieve a DCC agnostic rigging framework, allowing us to focus on development of pure rigging concepts. Also how this setback prompted us to extend the framework to properly deal with the deformation side of rigs, targeting memory efficiency, GPU/CPU memory interaction and high-end performance optimizations.","PeriodicalId":378765,"journal":{"name":"Proceedings of the 8th Annual Digital Production Symposium","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124632400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Merging procedural and non-procedural hair grooming. Gene Wei-Chin Lin, Elena Driskill, D. Milling, Giorgio Lafratta, Douglas Roble. DOI: 10.1145/3233085.3233098

Procedural workflows, usually in the form of node-based systems, are widely used for creating hair and fur on characters in the visual effects industry. While they can create hairstyles with great variety, procedural systems often need to be combined with external, non-procedural tools to achieve sophisticated, art-directed shapes. We present a new hair-generating workflow that merges the procedural and non-procedural components of grooming and animating hair. This new workflow is now a vital component of our character effects pipeline, expediting the handling of fast-paced projects.
{"title":"Merging procedural and non-procedural hair grooming","authors":"Gene Wei-Chin Lin, Elena Driskill, D. Milling, Giorgio Lafratta, Douglas Roble","doi":"10.1145/3233085.3233098","DOIUrl":"https://doi.org/10.1145/3233085.3233098","url":null,"abstract":"Procedural workflows are widely used for creating hair and fur on characters in the visual effects industry, usually in the form of a node-based system. While they are able to create hairstyles with great variety, procedural systems often need to be combined with other external, non-procedural tools to achieve sophisticated, art-directed shapes. We present a new hair-generating workflow, which merges the procedural and non-procedural components for grooming and animating hairs. This new workflow is now a vital component of our character effects pipeline, expediting the process of handling fast-paced projects.","PeriodicalId":378765,"journal":{"name":"Proceedings of the 8th Annual Digital Production Symposium","volume":"137 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120980573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A JIT expression language for fast manipulation of VDB points and volumes. Nick Avramoussis, Richard E. Jones, Francisco Gochez, T. Keeler, Matthew Warner. DOI: 10.1145/3233085.3233087

Almost all modern digital content creation (DCC) applications used throughout visual effects (VFX) pipelines provide a scripting or programming interface. This gives users the freedom to create and manipulate assets in bespoke ways, providing a powerful and customizable tool for working within the software. It is particularly useful for working with geometry, a process central to modelling, effects, and animation tasks. However, most widely available interfaces of this kind are either confined to their host application or ill-suited to computationally demanding operations. We have created an efficient programming interface built around the open-source geometry format OpenVDB, allowing fast geometry manipulation while offering the portability required for use anywhere in the VFX pipeline.
{"title":"A JIT expression language for fast manipulation of VDB points and volumes","authors":"Nick Avramoussis, Richard E. Jones, Francisco Gochez, T. Keeler, Matthew Warner","doi":"10.1145/3233085.3233087","DOIUrl":"https://doi.org/10.1145/3233085.3233087","url":null,"abstract":"Almost all modern digital content creation (DCC) applications used throughout visual effects (VFX) pipelines provide a scripting or programming interface. This useful feature gives users the freedom to create and manipulate assets in bespoke ways, providing a powerful and customizable tool for working within the software. It is particularly useful for working with geometry, a process heavily involved in modelling, effects and animation tasks. However, most widely available examples of these are either confined to their host application or ill-suited to computationally demanding operations. We have created an efficient programming interface built around the open-source geometry format, OpenVDB, to allow fast geometry manipulation whilst offering the required portability for use anywhere in the VFX pipeline.","PeriodicalId":378765,"journal":{"name":"Proceedings of the 8th Annual Digital Production Symposium","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117280996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}