
ACM SIGGRAPH 2017 Talks: Latest Publications

Large scale VFX pipelines
Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085021
M. Chambers, J. Israel, A. Wright
To ensure peak utilization of hardware resources, as well as handle the increasingly dynamic demands placed on its render farm infrastructure, WETA Digital developed custom queuing, scheduling, job description and submission systems, which work in concert to maximize the available cores across a large range of non-uniform task types. The render farm is one of the most important, high-traffic components of a modern VFX pipeline. Beyond the hardware itself, a render farm requires careful management and maintenance to ensure it is operating at peak efficiency. In WETA's case this hardware consists of a mix of over 80,000 CPU cores and a number of GPU resources, and as it has grown it has introduced many interesting scalability challenges. In this talk we aim to present our end-to-end solutions in the render farm space, from the structure of the resource and the inherent problems introduced at this scale, through the development of Plow, our management, queuing and monitoring software. Finally, we will detail the deployment process and the production benefits realized. Within each section we intend to present the scalability issues encountered, and detail our strategy, process and results in solving these problems. The ever-increasing complexity and computational demands of modern VFX drive WETA's need to innovate in all areas, not only surfacing, rendering and simulation but also core pipeline infrastructure.
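To make the scheduling problem concrete, here is a minimal sketch of the kind of priority-driven dispatch loop a render-farm manager like Plow might run over non-uniform jobs. Everything here is an assumption for illustration: the Job fields, the greedy policy, and the dispatch helper are hypothetical and do not reflect Plow's actual design or API.

```python
# Hypothetical sketch of greedy, priority-ordered core allocation.
# Not Plow's real API; all names are illustrative.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                               # lower value dispatches first
    cores_required: int = field(compare=False)  # non-uniform task sizes
    name: str = field(compare=False, default="")

def dispatch(jobs, free_cores):
    """Greedily assign queued jobs to free cores, highest priority first."""
    queue = list(jobs)
    heapq.heapify(queue)                        # min-heap on priority
    running, deferred = [], []
    while queue and free_cores > 0:
        job = heapq.heappop(queue)
        if job.cores_required <= free_cores:
            free_cores -= job.cores_required
            running.append(job.name)
        else:
            deferred.append(job)                # not enough cores; retry next cycle
    return running, deferred + queue

jobs = [Job(0, 64, "sim_water"), Job(1, 8, "comp_sh010"), Job(2, 128, "render_seq")]
print(dispatch(jobs, 96))   # sim_water and comp_sh010 fit; render_seq waits
```

A real farm scheduler must also handle preemption, fairness across shows, and GPU versus CPU resource classes, all of which this toy loop ignores.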
Citations: 0
FurCollide: fast, robust, and controllable fur collisions with meshes
Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085051
Arunachalam Somasundaram
We present FurCollide, a fast, robust, and artist-friendly tool for collision detection and collision resolution of fur curves against meshes. The tool helps artists interact with and control tens of thousands of curves with ease while providing high-fidelity realistic and/or artistic collision results. The tool is in use at DreamWorks Animation and has been applied to a wide variety of fur and/or grass collision situations across various films.
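As a rough illustration of what curve-versus-mesh collision resolution involves, the sketch below pushes penetrating curve points out along a signed-distance gradient. The SDF mesh representation, the resolve_curve helper, and the offset heuristic are assumptions for illustration only; the abstract does not describe FurCollide's algorithm at this level.

```python
# Hypothetical push-out resolution of curve points against a mesh,
# with the mesh approximated by a signed-distance function (SDF).
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(p - center) - radius

def resolve_curve(points, sdf, offset=0.01, eps=1e-4):
    """Push each penetrating curve point outside the mesh along the SDF gradient."""
    resolved = points.copy()
    axes = np.eye(3) * eps
    for i, p in enumerate(resolved):
        d = sdf(p)
        if d < offset:                          # penetrating or too close
            # central-difference gradient approximates the surface normal
            grad = np.array([sdf(p + a) - sdf(p - a) for a in axes]) / (2 * eps)
            n = grad / (np.linalg.norm(grad) + 1e-12)
            resolved[i] = p + n * (offset - d)  # move onto the offset surface
    return resolved

curve = np.array([[0.0, 1.2, 0.0], [0.0, 0.9, 0.0], [0.0, 0.6, 0.0]])
print(resolve_curve(curve, sphere_sdf))         # inner points land at y = 1.01
```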
Citations: 6
Precomputed multiple scattering for light simulation in participating medium
Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085037
Beibei Wang, Nicolas Holzschuch
Illumination simulation involving participating media is computationally intensive. The overall appearance of the material depends on simulating a large number of scattering events inside it; combined, the contributions of these scattering events form a smooth illumination. Computing them using ray-tracing or photon-mapping algorithms is expensive: convergence time is high, and pictures before convergence are of low quality (see Figure 1). In this paper, we precompute the result of multiple scattering events, assuming an infinite medium, and store it in two 4D tables. These precomputed tables can be used with many rendering algorithms, such as Virtual Ray Lights (VRL), Unified Points, Beams and Paths (UPBP) or Manifold Exploration Metropolis Light Transport (MEMLT), greatly reducing the convergence time. The original algorithm takes care of low-order scattering (single and double scattering), while our precomputations handle multiple scattering (more than two scattering events).
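The precompute-then-interpolate pattern underlying the method can be sketched as follows. The assumptions are loud ones: a toy 2D table over (albedo, optical distance) stands in for the paper's two 4D tables, and fake_multiscatter is a cheap stand-in for the expensive scattering solution actually being baked.

```python
# Toy precompute-then-look-up sketch; not the paper's parameterization.
import numpy as np

albedos = np.linspace(0.1, 0.99, 32)
distances = np.linspace(0.0, 10.0, 64)

def fake_multiscatter(albedo, dist):
    # stand-in for an expensive Monte Carlo multiple-scattering estimate
    return albedo ** 2 * np.exp(-dist * (1.0 - albedo))

# Precompute once, offline:
table = fake_multiscatter(albedos[:, None], distances[None, :])

# At render time, a bilinear table lookup replaces the expensive estimate:
def lookup(albedo, dist):
    i = np.clip(np.searchsorted(albedos, albedo) - 1, 0, len(albedos) - 2)
    j = np.clip(np.searchsorted(distances, dist) - 1, 0, len(distances) - 2)
    ta = (albedo - albedos[i]) / (albedos[i + 1] - albedos[i])
    td = (dist - distances[j]) / (distances[j + 1] - distances[j])
    return ((1 - ta) * (1 - td) * table[i, j] + ta * (1 - td) * table[i + 1, j]
            + (1 - ta) * td * table[i, j + 1] + ta * td * table[i + 1, j + 1])

print(lookup(0.8, 2.5), fake_multiscatter(0.8, 2.5))  # values should be close
```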
Citations: 5
Evolution of AR in Pokémon go
Pub Date : 2017-07-30 DOI: 10.1145/3084363.3107958
C. Kramer
Citations: 1
The making of Google earth VR
Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085094
Dominik P. Käser, Evan Parker, A. Glazier, Mike Podwal, Matthew Seegmiller, Chun-Po Wang, Per Karlsson, N. Ashkenazi, Joanna Kim, Andre Le, Matthias Bühlmann, Joshua Moshier
One of the great promises of virtual reality is that it can allow people to visit places in the world that they might otherwise be unable to. Since the recent renaissance of virtual reality, content creators have employed various techniques, such as 360-degree cameras and photogrammetry, to make this promise come true. At Google, we spent more than 10 years capturing every part of the world as part of the Google Earth project. The result is a rich 3D mesh that contains trillions of triangles [Kontkanen and Parker 2014] and as such is predestined to be a good data source for VR content. In [Kaeser and Buehlmann 2016] we discussed some of our early experiments with bringing Google Earth to virtual reality, but without a focus on developing a product. Following these experiments, we worked extensively to create a well-rounded product, Google Earth VR, which we launched to the world in November 2016. Google Earth VR quickly became one of the most actively used VR applications on the market and has since won several awards. This talk discusses the journey of the Google Earth VR project from its early prototypes to its final launched stage.
Citations: 26
Field trip to Mars
Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085074
A. Rowan-Robinson
"Field Trip to Mars" is the first-ever headset-free group virtual reality vehicle experience. Taking the literal shape of a classic yellow school bus, the vehicle is home to an immersive virtual experience that transports school children to the surface of the Red Planet.
Citations: 0
Optimizing VR for all users through adaptive focus displays
Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085029
Nitish Padmanaban, Robert Konrad, Emily A. Cooper, Gordon Wetzstein
Personal computing devices have evolved steadily, from desktops to mobile devices, and now to emerging trends in wearable computing. Wearables are expected to be integral to consumer electronics, with the primary mode of interaction often being a near-eye display. However, current-generation near-eye displays are unable to provide fully natural focus cues for all users, which often leads to discomfort. This core limitation is due to the optics of the systems themselves, with current displays being unable to change focus as required by natural vision. Furthermore, the form factor often makes it difficult for users to wear corrective eyewear. With two prototype near-eye displays, we address these issues using display modes that adapt to the user via computational optics. These prototypes make use of focus-tunable lenses, mechanically actuated displays, and gaze tracking technology to correct common refractive errors per user, and provide natural focus cues by dynamically updating scene depth based on where a user looks. Recent advances in computational optics hint at a future in which some users experience better vision in the virtual world than in the real one.
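One concrete building block of such a system is turning gaze-tracking output into a lens setting: estimate the fixation depth from the vergence of the two eyes' gaze rays, then compute the diopters the focus-tunable lens should provide, folding in the user's prescription. The sketch below uses standard closest-point-between-rays geometry; the interpupillary distance, function names and numbers are illustrative assumptions, not details of the prototypes.

```python
# Hypothetical gaze-vergence depth estimate driving a tunable lens.
import numpy as np

IPD = 0.064  # assumed interpupillary distance in meters

def fixation_depth(gaze_left, gaze_right):
    """Depth (m) of the closest approach between the two gaze rays."""
    o_l = np.array([-IPD / 2, 0.0, 0.0])           # left eye position
    o_r = np.array([+IPD / 2, 0.0, 0.0])           # right eye position
    d_l = gaze_left / np.linalg.norm(gaze_left)
    d_r = gaze_right / np.linalg.norm(gaze_right)
    w = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b
    t = (b * e - c * d) / denom if abs(denom) > 1e-9 else 1e3  # parallel gaze: far
    return (o_l + t * d_l)[2]                      # z component = fixation depth

def lens_power(depth_m, prescription_diopters=0.0):
    """Diopters the focus-tunable lens should provide for this fixation."""
    return 1.0 / max(depth_m, 0.1) + prescription_diopters

# Example: eyes converging on a point 0.5 m straight ahead, for a -1.5 D myope.
left = np.array([IPD / 2, 0.0, 0.5])               # from left eye toward the point
right = np.array([-IPD / 2, 0.0, 0.5])
z = fixation_depth(left, right)
print(round(z, 3), lens_power(z, prescription_diopters=-1.5))  # ~0.5 m, 0.5 D
```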
Citations: 4
A case study on raytracing-in-the-loop optimization: focal surface displays
Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085071
N. Matsuda, Alexander Fix, Douglas Lanman
Optimization-based design of optical systems can yield configurations that would be impractical to achieve with manual parameter adjustment. Nonetheless, most approaches are geared toward one-time, offline generation of static configurations to be fabricated physically. Recently, challenging computational imaging problems, such as seeing around corners or through scattering media, have utilized dynamically addressable optical elements to probe scene light transport. A new class of optimization techniques targeted at these dynamic applications has emerged, in which stochastic raytracing replaces the fixed operators applied with conventional optimization methods. By modeling optical systems as raytracing operators, more complex non-linear phenomena and larger problem sizes can be considered. We introduce a simple raytracing-in-the-loop optimization model for a head-mounted display (HMD) containing a spatial light modulator (SLM). Using this approach, we are able to compute color images to be displayed in concert with spatially varying SLM phase maps at a resolution that would otherwise be computationally infeasible. We also consider extensions of this model that may further enhance the performance of the target system.
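The raytracing-in-the-loop idea can be sketched as black-box optimization in which every loss evaluation invokes a stochastic render. In the toy below, a noisy forward model stands in for the raytracer and SPSA supplies stochastic gradients, a natural fit when the operator can only be sampled rather than differentiated analytically. The paper's actual forward operator, loss and optimizer are not specified in the abstract; this is an assumed stand-in.

```python
# Toy raytracing-in-the-loop optimization with SPSA gradient estimates.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((8, 8))              # desired output image (toy)

def raytrace(phase, n_rays=256):
    """Stand-in stochastic forward operator: noisy transform of a phase map."""
    noise = rng.normal(0.0, 1.0 / np.sqrt(n_rays), phase.shape)
    return np.clip(phase + 0.1 * np.roll(phase, 1, axis=0) + noise, 0.0, None)

def loss(phase):
    return np.mean((raytrace(phase) - target) ** 2)

# SPSA: two stochastic renders per step estimate the full gradient,
# suiting a forward model we can only sample, not differentiate.
phase = np.zeros_like(target)
step, perturb = 0.5, 0.05
for _ in range(500):
    delta = rng.choice([-1.0, 1.0], size=phase.shape)
    g = (loss(phase + perturb * delta) - loss(phase - perturb * delta)) \
        / (2 * perturb) * delta          # delta entries are +/-1, so * equals /
    phase -= step * g
print("final loss:", loss(phase))
```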
Citations: 0
Moana: geometry based disco ball lighting for tamatoa's lair
Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085058
D. Byun, Shant Ergenian, Gregory Culp
In the "Lair of Tamatoa" sequence of our latest movie Moana, we had 56 disco-ball lighting effects shots. Our effects and lighting departments collaborated closely to create the bizarre and ludicrous environment of the scene. We developed a geometry-based lighting pipeline that allowed us to design the light effects interactively.
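A geometry-based approach of this kind can be illustrated by deriving one beam per mirror facet: reflect the key-light direction about each facet normal and keep the facets that face the light. The Fibonacci-sphere facet layout and the function names below are assumptions for illustration; the abstract does not detail the production light-rig generation.

```python
# Hypothetical beam generation from disco-ball facet geometry.
import numpy as np

def facet_normals(n_facets):
    """Roughly uniform facet normals via the Fibonacci sphere."""
    i = np.arange(n_facets)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i         # golden-angle increments
    z = 1.0 - 2.0 * (i + 0.5) / n_facets
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def reflected_beams(light_dir, normals):
    """Reflect the incoming light direction about each lit facet's normal."""
    d = light_dir / np.linalg.norm(light_dir)
    dots = normals @ d
    beams = d - 2.0 * dots[:, None] * normals      # standard mirror reflection
    return beams[dots < 0.0]                       # keep facets facing the light

normals = facet_normals(500)
beams = reflected_beams(np.array([0.0, -1.0, 0.0]), normals)
print(len(beams), "beams; first:", beams[0])
```

Each returned direction could then drive a narrow spot light in the renderer, which is what would make such a rig interactively designable: moving the key light just regenerates the beam set.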
Citations: 0
Modeling vellus facial hair from asperity scattering silhouettes
Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085057
Chloe LeGendre, Loc Huynh, Shanhe Wang, P. Debevec
We present a technique for modeling the vellus hair over the face based on observations of asperity scattering along a subject's silhouette. We photograph the backlit subject in profile and three-quarter views with a high-resolution DSLR camera to observe the vellus hair on the side and front of the face, and separately acquire a 3D scan of the face geometry and texture. We render a library of backlit vellus hair patch samples with different geometric parameters such as density, orientation, and curvature, and we compute image statistics for each set of parameters. We trace the silhouette contour in each face image and straighten the backlit hair silhouettes using image resampling. We compute image statistics for each section of the facial silhouette and determine which set of hair modeling parameters best matches the statistics. We then generate a complete set of vellus hairs for the face by interpolating and extrapolating the matched parameters over the skin. We add the modeled vellus hairs to the 3D facial scan and generate renderings under novel lighting conditions, generally matching the appearance of real photographs.
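The matching step, choosing the hair-patch parameters whose rendered statistics best agree with a silhouette section, can be sketched as a nearest-neighbor search in a small feature space. The two statistics used below (mean intensity and vertical gradient energy) and the stats/match_parameters helpers are illustrative stand-ins; the paper's actual feature set is richer.

```python
# Hypothetical statistics-matching step against a rendered patch library.
import numpy as np

def stats(patch):
    """Feature vector summarizing a straightened silhouette patch."""
    return np.array([patch.mean(), np.mean(np.abs(np.diff(patch, axis=0)))])

def match_parameters(section, library):
    """library: list of (params, rendered_patch); returns best-matching params."""
    target = stats(section)
    dists = [np.linalg.norm(stats(patch) - target) for _, patch in library]
    return library[int(np.argmin(dists))][0]

rng = np.random.default_rng(1)
library = [({"density": d, "curvature": c}, rng.random((16, 16)) * d)
           for d in (0.2, 0.5, 0.9) for c in (0.1, 0.4)]
section = rng.random((16, 16)) * 0.5
print(match_parameters(section, library))          # picks the mid-density patch
```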
Citations: 3