
SIGGRAPH Asia 2019 Technical Briefs: Latest Publications

Beyond the Screen
Pub Date: 2019-11-17 DOI: 10.1145/3355088.3365140
W. Choi, Taehyung Lee, Wonchul Kang
While working on a theme park ride project, we needed to make a projection screen act as a window that shows the virtual world behind it. To create this magical effect, we developed our own image resampling pipeline called "BeyondScreen". For each screen, it generates a video clip that makes the audience in the ride feel as if they are looking into the virtual space. It produces a sense of depth by revealing hidden areas beyond the screen as the viewpoint moves. After verifying that the algorithm works well, we developed custom plug-ins for Nuke, RenderMan, and Houdini so that it can be used easily in an existing VFX pipeline.
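The brief does not give the resampling math, but the core operation (a screen acting as a viewpoint-dependent window) reduces to reprojecting virtual-world points through the screen plane. A minimal Python sketch of that reprojection; all names here are hypothetical choices, not the authors' code:

import numpy as np

def project_through_screen(eye, screen_origin, screen_u, screen_v, world_point):
    """Map a virtual-world point behind the screen to (s, t) screen
    coordinates by intersecting the eye-to-point ray with the screen plane.
    screen_u / screen_v span the screen rectangle from screen_origin."""
    normal = np.cross(screen_u, screen_v)
    direction = world_point - eye
    denom = direction.dot(normal)
    if abs(denom) < 1e-9:
        return None                      # ray is parallel to the screen
    t = (screen_origin - eye).dot(normal) / denom
    if t <= 0.0:
        return None                      # screen plane is behind the viewer
    hit = eye + t * direction
    local = hit - screen_origin
    return (local.dot(screen_u) / screen_u.dot(screen_u),
            local.dot(screen_v) / screen_v.dot(screen_v))

eye = np.array([0.0, 0.0, 2.0])
corner = np.array([-1.0, -1.0, 0.0])     # 2x2 screen in the z = 0 plane
st = project_through_screen(eye, corner,
                            np.array([2.0, 0.0, 0.0]),
                            np.array([0.0, 2.0, 0.0]),
                            np.array([0.5, 0.5, -3.0]))  # point behind screen

As the eye position changes, the same world point maps to different screen coordinates, which is what reveals the hidden areas and creates the sense of depth described above.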
Citations: 0
Flexible Ray Traversal with an Extended Programming Model
Pub Date: 2019-11-17 DOI: 10.1145/3355088.3365149
Won-Jong Lee, Gabor Liktor, K. Vaidyanathan
The availability of hardware-accelerated ray tracing in GPUs and standardized APIs has led to rapid adoption of ray tracing in games. While these APIs allow programmable surface shading and intersections, most of the ray traversal is assumed to be fixed-function. As a result, the implementation of per-instance Level-of-Detail (LOD) techniques is very limited. In this paper, we propose an extended programming model for ray tracing that includes an additional programmable stage, called the traversal shader, which enables procedural selection of acceleration structures for instances. Using this programming model, we demonstrate multiple applications, such as procedural multi-level instancing and stochastic LOD selection, that can significantly reduce the bandwidth and memory footprint of ray tracing with no perceptible loss in image quality.
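The traversal shader is defined at the API level in the paper; as a rough illustration, the sketch below mimics such a stage in Python, using stochastic LOD selection to pick an acceleration structure per instance. The Instance type and the distance-based LOD formula are assumptions, not the paper's definitions:

import math
import random
from dataclasses import dataclass, field

@dataclass
class Instance:
    center: tuple                               # world-space centroid
    blas: list = field(default_factory=list)    # acceleration structures, fine -> coarse

def traversal_shader(ray_origin, instance, base_dist=10.0, rng=random.Random(1)):
    """Sketch of a traversal-shader stage: procedurally choose which
    acceleration structure the fixed-function traversal continues into."""
    d = math.dist(ray_origin, instance.center)
    # Fractional LOD grows logarithmically with distance, clamped to range.
    lod = min(max(math.log2(max(d, 1e-6) / base_dist) + 1.0, 0.0),
              len(instance.blas) - 1)
    # Stochastic selection: dither between the two nearest levels so the
    # expectation matches the fractional LOD and transitions don't pop.
    lo = int(lod)
    hi = min(lo + 1, len(instance.blas) - 1)
    level = hi if rng.random() < lod - lo else lo
    return instance.blas[level]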
Citations: 5
Outdoor Sound Propagation in Inhomogeneous Atmosphere via Precomputation
Pub Date: 2019-11-17 DOI: 10.1145/3355088.3365168
Jin Liu, Shiguang Liu
Most sound propagation simulation methods are dedicated to room scenes, and only a few of them can be used for outdoor scenes. Meanwhile, although ray tracing is used for simulation, it cannot accurately reproduce some acoustic effects. In acoustics, some wave-based methods are accurate but suffer from low computational efficiency. We present a novel wave-based precomputation method that enables accurate and fast simulation of sound propagation in an inhomogeneous atmosphere. An extended FDTD-PE method is used to calculate the sound pressure in a 3D scene. The space is divided into two parts: the source region, in which the FDTD method is employed, and the far-field region, in which the PE method is employed. A coupling methodology is applied at the junction between the two regions. The sound pressure data is further compressed to obtain the impulse response (IR) of the source region and the sound attenuation function of the far-field region. Finally, we validated our method through various experiments, and the results indicate that it can accurately simulate sound propagation with considerably higher speed and lower storage cost.
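The brief does not reproduce the update equations; as a hedged stand-in for the 3D FDTD solver used in the source region, here is a standard 1D staggered-grid acoustic FDTD loop (the PE far-field solver and the coupling at the region boundary are beyond this sketch):

import numpy as np

c, rho = 343.0, 1.2                 # sound speed (m/s), air density (kg/m^3)
n, dx = 400, 0.05                   # grid cells, spacing (m)
dt = dx / (2.0 * c)                 # time step satisfying the CFL condition
p = np.zeros(n)                     # pressure at cell centers
u = np.zeros(n + 1)                 # particle velocity on staggered faces

for step in range(1000):
    # Leapfrog in time: velocity from the pressure gradient, then
    # pressure from the velocity divergence.
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
    p -= (rho * c * c * dt / dx) * (u[1:] - u[:-1])
    p[n // 2] += np.exp(-((step * dt - 0.01) / 0.002) ** 2)  # Gaussian pulse source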
Citations: 3
Automatic Generation of Chinese Vector Fonts via Deep Layout Inferring
Pub Date: 2019-11-17 DOI: 10.1145/3355088.3365142
Yichen Gao, Z. Lian, Yingmin Tang, Jianguo Xiao
Designing a high-quality Chinese vector font library that can be directly used in real applications is very time-consuming, since such a library typically consists of a large number of glyphs. To address this problem, we propose a data-driven system in which only a small number (about 10%) of the glyphs need to be designed. Specifically, the system first automatically decomposes the input glyphs into vectorized components. Then, a layout prediction module based on a deep neural network is applied to learn the layout and structure information of the input characters. Finally, proper components are selected and assembled for each character based on the predicted layout, building a font library that can be directly used on computers and smart mobile devices. Experimental results demonstrate that our system synthesizes high-quality glyphs and significantly improves the production efficiency of Chinese vector fonts.
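Of the three stages, the assembly step is the most mechanical: scale each reusable component into the bounding box predicted by the layout network. A minimal sketch under that reading (the data layout and all names are assumptions):

def assemble_glyph(components, layout):
    """Place reusable vector components into the per-character boxes
    predicted by the layout network (boxes in normalized em-space).
    components: name -> list of (x, y) contour points in [0, 1]^2
    layout:     list of (component_name, (x0, y0, x1, y1)) predictions."""
    glyph = []
    for name, (x0, y0, x1, y1) in layout:
        w, h = x1 - x0, y1 - y0
        glyph.append([(x0 + px * w, y0 + py * h) for px, py in components[name]])
    return glyph

# e.g. a two-component character assembled from predicted left/right boxes:
parts = {"radical": [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)],
         "body":    [(0.0, 0.0), (1.0, 1.0)]}
outline = assemble_glyph(parts, [("radical", (0.0, 0.0, 0.35, 1.0)),
                                 ("body",    (0.4, 0.0, 1.0, 1.0))])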
Citations: 4
Faster RPNN: Rendering Clouds with Latent Space Light Probes
Pub Date: 2019-11-17 DOI: 10.1145/3355088.3365150
M. Panin, S. Nikolenko
We introduce latent space light probes for fast rendering of high-albedo anisotropic materials with multiple scattering. Our Faster RPNN model improves the performance of cloud rendering by precomputing parts of the neural architecture and separating out the parts that must be inferred at runtime. The model provides a 2-3x speedup over state-of-the-art Radiance-Predicting Neural Networks (RPNN), has negligible precomputation cost and a low memory footprint, and produces results with low bias that are visually indistinguishable from computationally intensive path tracing.
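The abstract states the split between baked and runtime parts but not its exact form; below is a hedged sketch of the general probe pattern, with inverse-distance latent blending standing in for whatever interpolation the paper actually uses. encoder and decoder are placeholder callables for the expensive and cheap network stages:

import numpy as np

def bake_probes(encoder, probe_positions, cloud):
    """Bake time: run the expensive encoder stage once per probe position."""
    return [(np.asarray(p), encoder(cloud, p)) for p in probe_positions]

def shade(decoder, probes, x, view_dir, sun_dir, eps=1e-6):
    """Runtime: blend cached latent vectors, then evaluate only the cheap
    decoder head instead of the full network per sample."""
    w = np.array([1.0 / (np.linalg.norm(x - p) + eps) for p, _ in probes])
    w /= w.sum()
    latent = sum(wi * z for wi, (_, z) in zip(w, probes))
    return decoder(latent, view_dir, sun_dir)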
Citations: 6
Architecture of Integrated Machine Learning in Low Latency Mobile VR Graphics Pipeline
Pub Date: 2019-11-17 DOI: 10.1145/3355088.3365154
Haomiao Jiang, Rohit Rao Padebettu, Kazuki Sakamoto, Behnam Bastani
In this paper, we discuss frameworks for executing machine learning algorithms in the mobile VR graphics pipeline to improve performance and rendered image quality in real time. We analyze and compare the benefits and costs of the various possibilities. We illustrate the strength of using a machine learning framework in the graphics pipeline with an application of efficient spatial-temporal super-resolution that amplifies GPU render power to achieve better image quality.
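A minimal sketch of the integration point such a framework targets: render the eye buffer at reduced resolution, run a learned upscaler, then composite. The stand-in functions below are hypothetical toys; in the paper this slot is filled by a neural network executing on the mobile GPU within the frame budget:

import numpy as np

def render_frame(render_low_res, super_resolve, compose):
    """Hypothetical frame loop showing where the ML pass sits in the
    pipeline: cheap render, learned upscale, then lens warp / composite."""
    color = render_low_res()          # low-resolution eye buffer
    upscaled = super_resolve(color)   # ML pass amplifying GPU render power
    return compose(upscaled)          # lens correction, overlays, etc.

# Toy stand-ins so the sketch runs end to end:
low = lambda: np.zeros((720, 720, 3), dtype=np.float32)
sr = lambda img: img.repeat(2, axis=0).repeat(2, axis=1)   # nearest 2x upscale
frame = render_frame(low, sr, lambda img: img)             # 1440x1440 output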
Citations: 2
How NASA Uses Render Time Procedurals for Scientific Data Visualization
Pub Date: 2019-11-17 DOI: 10.1145/3355088.3365169
Kel Elkins, Gregory W. Shirah
In data-driven visualizations, the size and accessibility of data files can greatly impact the computer graphics production pipeline. Loading large and complex data structures into 3D animation software such as Maya may result in system performance issues that limit interactivity. At NASA's Scientific Visualization Studio, we have implemented methods to procedurally read data files and generate graphics at render time. We accomplish this by creating per-frame calls in our animation software that are executed by the renderer. This procedural workflow accelerates visualization production and iteration.
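The brief names Maya but not a specific API; a renderer-agnostic sketch of the per-frame pattern might look as follows, with the file naming convention being a hypothetical one. In Maya, such a callback could be attached as a pre-render-frame script.

import numpy as np

def pre_frame_callback(frame):
    """Executed by the renderer before each frame: stream in only the
    timestep needed now, instead of loading the whole dataset into the
    scene up front."""
    points = np.load(f"timestep_{frame:04d}.npy")   # positions for this frame
    return points                                    # handed off as render geometry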
Citations: 0
Piku Piku Interpolation
Pub Date: 2019-11-17 DOI: 10.1145/3355088.3365156
R. Roberts, R. K. D. Anjos, K. Anjyo, J. P. Lewis
We propose a sampling algorithm that reassembles real-life movements to add detail to early-stage facial animation. We examine the results of applying our algorithm to FACS data extracted from video. Using our algorithm as an interpolation scheme, animators can reduce the time required to produce detailed animation.
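The abstract gives no algorithmic detail, so the following is only a guess at the general pattern: interpolate the sparse keys, then layer on a high-frequency residual resampled from captured motion. All names are hypothetical:

import numpy as np

def detailed_interp(key_times, key_values, residual_bank, seed=0):
    """Linearly interpolate sparse keyframes, then add a high-frequency
    residual resampled from captured data. residual_bank is a recorded
    curve minus its own smoothed version, and must be longer than the
    interpolated range."""
    rng = np.random.default_rng(seed)
    t = np.arange(key_times[0], key_times[-1] + 1)
    base = np.interp(t, key_times, key_values)        # plain interpolation
    start = rng.integers(0, len(residual_bank) - len(t))
    return base + residual_bank[start:start + len(t)]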
Citations: 0
Binary Space Partitioning Visibility Tree for Polygonal Light Rendering
Pub Date: 2019-11-17 DOI: 10.1145/3355088.3365153
Hiroki Okuno, Kei Iwasaki
In this paper, we present a method to render shadows for physically based materials under polygonal light sources. Direct illumination calculation from a polygonal light source involves the triple product integral of the lighting, the bidirectional reflectance distribution function (BRDF), and the visibility function over the polygonal domain, which is computationally intensive. To achieve real-time performance, prior work on polygonal light shading exploits analytic solutions of boundary integrals along the edges of the polygonal light, at the cost of lacking shadowing effects. We introduce a hierarchical representation of the precomputed visibility function to retain the merits of closed-form solutions for boundary integrals. Our method subdivides the polygonal light into a set of polygons visible from the point to be shaded. Experimental results show that our method can render complex shadows with a GGX microfacet BRDF from polygonal light sources at interactive frame rates.
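The splitting primitive behind such a visibility tree is a polygon/plane clip applied recursively with blocker planes; a minimal sketch of one clip step (inputs are numpy arrays; names are ours):

import numpy as np

def clip_polygon(poly, plane_point, plane_normal):
    """Keep the part of a convex polygon on the positive side of a plane
    (one Sutherland-Hodgman step) - the basic operation for splitting the
    light polygon into fully visible pieces."""
    out = []
    for i, a in enumerate(poly):
        b = poly[(i + 1) % len(poly)]
        da = float(np.dot(a - plane_point, plane_normal))
        db = float(np.dot(b - plane_point, plane_normal))
        if da >= 0.0:
            out.append(a)                            # vertex is kept
        if da * db < 0.0:                            # edge crosses the plane
            out.append(a + (da / (da - db)) * (b - a))
    return out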
Citations: 1
The Potential of Light Fields in Media Productions
Pub Date: 2019-11-17 DOI: 10.1145/3355088.3365158
Jonas Trottnow, S. Spielmann, T. Lange, Kelvin Chelli, Marek Solony, P. Smrz, P. Zemčík, W. Aenchbacher, M. Grogan, Martin Alain, A. Smolic, Trevor Canham, Olivier Vu-Thanh, Javier Vazquez-Corral, M. Bertalmío
One aspect of the EU-funded project SAUCE is to explore the possibilities and challenges of integrating light field capturing and processing into media productions. A special light field camera was built by Saarland University [Herfet et al. 2018] and was first tested under production conditions in the test production “Unfolding” as part of the SAUCE project. Filmakademie Baden-Württemberg developed the content framework, executed the post-production, and prepared a complete previsualization. Calibration and post-processing algorithms were developed by Trinity College Dublin and the Brno University of Technology. This document describes the challenges of building and shooting with the light field camera array, as well as its potential and challenges for post-production.
Citations: 4