
SIGGRAPH Asia 2019 Technical Briefs: Latest Publications

Latency of 30 ms Benefits First Person Targeting Tasks More Than Refresh Rate Above 60 Hz
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365170
J. Spjut, Ben Boudaoud, Kamran Binaee, Jonghyun Kim, Alexander Majercik, M. McGuire, D. Luebke, Joohwan Kim
In competitive sports, human performance makes the difference between who wins and loses. In some competitive video games (esports), response time is an essential factor of human performance. When the athlete’s equipment (computer, input and output device) responds with lower latency, it provides a measurable advantage. In this study, we isolate latency and refresh rate by artificially increasing latency when operating at high refresh rates. Eight skilled esports athletes then perform gaming-inspired first person targeting tasks under varying conditions of refresh rate and latency, completing the tasks as quickly as possible. We show that reduced latency has a clear benefit in task completion time while increased refresh rate has relatively minor effects on performance when the inherent latency reduction present at high refresh rates is removed. Additionally, for certain tracking tasks, there is a small, but marginally significant effect from high refresh rates alone.
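The study design rests on simple latency bookkeeping: a higher refresh rate inherently reduces display latency, so an artificial delay must be injected to keep total latency matched across refresh rates. The Python sketch below illustrates that bookkeeping under assumed numbers; the base latency, target latency, and refresh rates are placeholders, not the paper's measured conditions.

```python
# Latency bookkeeping for isolating latency from refresh rate.
# All numbers here are illustrative placeholders, not the paper's conditions.

def frame_time_ms(refresh_hz):
    """Display scan-out period in milliseconds."""
    return 1000.0 / refresh_hz

def added_delay_ms(target_latency_ms, base_latency_ms, refresh_hz):
    """Artificial delay needed so total latency matches the target
    regardless of refresh rate (clamped at zero)."""
    inherent = base_latency_ms + frame_time_ms(refresh_hz)
    return max(0.0, target_latency_ms - inherent)

# Example: hold total latency near an assumed 30 ms while varying refresh rate.
for hz in (60, 120, 240):
    extra = added_delay_ms(target_latency_ms=30.0, base_latency_ms=10.0, refresh_hz=hz)
    print(f"{hz:3d} Hz: frame time {frame_time_ms(hz):5.2f} ms, inject {extra:5.2f} ms")
```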
Citations: 31
Fast Terrain-Adaptive Motion Generation using Deep Neural Networks
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365157
Moonwon Yu, Byungjun Kwon, Jongmin Kim, Shinjin Kang, Hanyoung Jang
We propose a fast motion adaptation framework using deep neural networks. Traditionally, motion adaptation is performed via iterative numerical optimization. We adopt deep neural networks and replace the iterative process with feed-forward inference consisting of simple matrix multiplications. For efficient mapping from contact constraints to character motion, the proposed system is composed of two types of networks: trajectory and pose generators. The networks are trained using augmented motion capture data and are fine-tuned using an inverse kinematics loss. In experiments, our system successfully generates multi-contact motions for a hundred characters in real time, and the resulting motions retain the naturalness present in the motion capture data.
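To make the feed-forward idea concrete, the sketch below chains a hypothetical trajectory generator and pose generator as plain matrix multiplications; the layer sizes, inputs, and random weights are placeholders, and the IK-loss fine-tuning is not shown.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def mlp(x, layers):
    """Feed-forward inference: a chain of simple matrix multiplications."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = relu(x)
    return x

rng = np.random.default_rng(0)

def random_layers(sizes):
    """Placeholder weights; a trained system would load learned parameters."""
    return [(0.1 * rng.standard_normal((a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

# Hypothetical dimensions: contact constraints in, root trajectory and pose out.
traj_net = random_layers([32, 128, 64])        # constraints -> trajectory code
pose_net = random_layers([64 + 32, 256, 69])   # trajectory + constraints -> pose

constraints = rng.standard_normal(32)
trajectory = mlp(constraints, traj_net)
pose = mlp(np.concatenate([trajectory, constraints]), pose_net)
print(pose.shape)   # one character pose per forward pass, no iterative solve
```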
Citations: 2
Interactive editing of performance-based facial animation
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365147
Yeongho Seol, M. Cozens
While performance-based facial animation efficiently produces realistic animation, it still needs additional editing after automatic solving and retargeting. We review why additional editing is required and present a set of interactive editing solutions for VFX studios. The presented solutions allow artists to enhance the result of the automatic solve-retarget with a few tweaks. The methods are integrated into our performance-based facial animation framework and have been actively used in high-quality movie production.
Citations: 3
Structure-Aware Image Expansion with Global Attention
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365161
Dewen Guo, J. Feng, Bingfeng Zhou
We present a novel structure-aware strategy for image expansion, which aims to complete an image from a small patch. Unlike image inpainting, the majority of the pixels are absent here. Hence, global structure-aware prediction faces higher requirements to produce visually plausible results, and treating the expansion task as inpainting from the outside is ill-posed. Therefore, we propose a learning-based method that combines structure-aware and visual attention strategies to make better predictions. Our architecture consists of two stages. Since visual attention cannot be taken full advantage of when the global structure is absent, we first use the ImageNet-pre-trained VGG-19 to make a structure-aware prediction in the pre-training stage. Then, we apply a non-local attention layer to the coarsely completed results in the refining stage. Our network predicts global structures and semantic details well from small input image patches and generates full images with structural consistency. We apply our method to a human face dataset containing rich semantic and structural details. The results show its stability and effectiveness.
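The refining stage's global attention is described only as a non-local attention layer; the sketch below shows the standard non-local attention computation (a softmax over pairwise feature similarities, aggregated with a residual connection), which is an assumption about the form used rather than the authors' exact layer.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def non_local_attention(features):
    """Global attention over all spatial positions of a feature map.

    features: (H, W, C) coarsely completed features from the first stage.
    Every position is updated with a similarity-weighted sum over all
    positions, so distant structure can inform local completion.
    """
    H, W, C = features.shape
    x = features.reshape(H * W, C)            # flatten spatial positions
    attn = softmax(x @ x.T / np.sqrt(C))      # (HW, HW) pairwise similarities
    y = attn @ x                              # aggregate global context
    return features + y.reshape(H, W, C)      # residual connection

print(non_local_attention(np.random.rand(16, 16, 8)).shape)   # (16, 16, 8)
```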
Citations: 1
PaintersView: Automatic Suggestion of Optimal Viewpoints for 3D Texture Painting
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365159
Yuka Takahashi, Tsukasa Fukusato, T. Igarashi
Although 3D texture painting makes it easier to grasp the overall shape than drawing directly onto a UV map, a disadvantage is that unpainted (or distorted) regions appear in the result due to, for example, self-occluded parts. Thus, painting without leaving unpainted parts requires sequentially changing viewpoints, a process that is highly time-consuming. To address this problem, we propose automatic suggestion of optimal viewpoints for 3D texture painting. As the user paints a model, the system searches for optimal viewpoints for subsequent painting and presents them as multiple suggestions. The user switches to a suggested viewpoint by clicking on a suggestion. We conducted a user study and confirmed that the proposed workflow was effective for the 3D texture painting tasks users envisioned.
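One plausible reading of "searches for optimal viewpoints" is scoring candidate cameras by how much unpainted surface they expose; the sketch below is such a scoring pass with a toy visibility test, and both the test and the objective are assumptions rather than the paper's actual method.

```python
import numpy as np

def visible_texels(cam_dir, texel_normals):
    """Toy visibility test: a texel counts as visible when its normal roughly
    faces the camera. A real system would rasterize or ray cast the mesh."""
    facing = texel_normals @ (-cam_dir) > 0.3
    return set(np.flatnonzero(facing))

def suggest_viewpoints(cam_dirs, texel_normals, unpainted, k=3):
    """Score each candidate camera by how many unpainted texels it exposes
    and return the indices of the top-k candidates."""
    scores = [len(visible_texels(c, texel_normals) & unpainted) for c in cam_dirs]
    return sorted(range(len(cam_dirs)), key=lambda i: scores[i], reverse=True)[:k]

# Toy usage: random unit normals and six axis-aligned candidate view directions.
rng = np.random.default_rng(1)
normals = rng.standard_normal((500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
candidates = [np.array(v, dtype=float) for v in
              [(0, 0, 1), (0, 0, -1), (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]]
unpainted = set(range(0, 500, 3))              # texels not yet painted
print(suggest_viewpoints(candidates, normals, unpainted))
```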
Citations: 2
Enhancing Piecewise Planar Scene Modeling from a Single Image via Multi-View Regularization
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365152
Weijie Xi, Siyu Hu, X. Chen, Zhiwei Xiong
Recent studies on planar scene modeling from a single image employ multi-branch neural networks to simultaneously segment planes and recover 3D plane parameters. However, the generalizability and accuracy of these supervised methods heavily rely on the scale of available annotated data. In this paper, we propose multi-view regularization for network training to further enhance single-view reconstruction networks, without demanding extra annotated data. Our multi-view regularization emphasizes multi-view consistency in the training phase, making the feature embedding more robust against view change and lighting variation. Thus, the neural network trained with our regularization can be better generalized to a wide range of views and lightings. Our method achieves state-of-the-art reconstruction performance compared to previous piecewise planar reconstruction methods on the public ScanNet dataset.
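The brief does not spell out the regularization term; one generic reading is a consistency penalty between feature embeddings at corresponding pixels of two views of the same scene. The sketch below shows that form under assumptions: the correspondences are taken as given (e.g. from known poses and depth), and the squared-difference penalty is a placeholder for whatever loss the authors actually use.

```python
import numpy as np

def multiview_consistency_loss(feat_a, feat_b, matches):
    """Penalize embedding disagreement at corresponding pixels of two views.

    feat_a, feat_b: (H, W, C) feature maps of the same scene from two views.
    matches: pairs ((ya, xa), (yb, xb)) of corresponding pixel coordinates,
             assumed to come from known camera poses and depth.
    """
    diffs = [feat_a[ya, xa] - feat_b[yb, xb] for (ya, xa), (yb, xb) in matches]
    return float(np.mean([np.sum(d * d) for d in diffs]))

# Toy usage with random features and two hand-picked correspondences.
fa, fb = np.random.rand(8, 8, 4), np.random.rand(8, 8, 4)
print(multiview_consistency_loss(fa, fb, [((1, 2), (1, 3)), ((4, 4), (5, 4))]))
```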
Citations: 1
ARSpectator: Exploring Augmented Reality for Sport Events
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365162
S. Zollmann, T. Langlotz, Moritz Loos, Wei Hong Lo, Lewis Baker
Augmented Reality (AR) has gained a lot of interest recently and has been used for various applications. Most of these applications are, however, limited to small indoor environments. Despite the wide range of large-scale application areas that could greatly benefit from AR, few AR applications so far target such environments. In this work, we discuss how AR can be used to enhance the experience of on-site spectators at live sport events. We investigate the challenges that come with applying AR in such a large-scale environment and explore state-of-the-art technology and its suitability for an on-site AR spectator experience. We also present a concept design and explore the options for implementing AR applications in large-scale environments.
Citations: 17
Saliency Diagrams
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365155
Nicolas Nghiem, R. Roberts, J. P. Lewis, Jun-yong Noh
Keyframes are a core notion used by animators to understand and describe the motion. In this paper, we take inspiration from keyframe animation to compute a feature that we call the “Saliency diagram” of the animation. To create our saliency diagrams, we visualize how often each frame becomes a keyframe when using an existing selection technique. Animators can use the resulting Saliency diagram to analyze the motion.
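One concrete way to read "how often each frame becomes a keyframe" is to run a keyframe-selection routine under many settings (for instance, different keyframe budgets) and histogram the selections per frame. The sketch below does exactly that, with a simple greedy interpolation-error selector standing in for the existing selection technique the paper refers to; the selector and the budget sweep are assumptions.

```python
import numpy as np

def interp_error(motion, keys):
    """Error of reconstructing the motion by linear interpolation between keys."""
    keys = sorted(keys)
    err = 0.0
    for a, b in zip(keys[:-1], keys[1:]):
        for f in range(a, b + 1):
            t = (f - a) / max(b - a, 1)
            recon = (1 - t) * motion[a] + t * motion[b]
            err += np.sum((motion[f] - recon) ** 2)
    return err

def greedy_keyframes(motion, budget):
    """Stand-in selector: greedily add the frame that most reduces error."""
    keys = {0, len(motion) - 1}
    while len(keys) < budget:
        candidates = [f for f in range(len(motion)) if f not in keys]
        keys.add(min(candidates, key=lambda f: interp_error(motion, keys | {f})))
    return keys

def saliency_diagram(motion, budgets):
    """Count, per frame, how often it is selected across all runs."""
    counts = np.zeros(len(motion), dtype=int)
    for b in budgets:
        for f in greedy_keyframes(motion, b):
            counts[f] += 1
    return counts

motion = np.cumsum(np.random.randn(60, 3), axis=0)   # toy 60-frame clip
print(saliency_diagram(motion, budgets=range(3, 10)))
```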
Citations: 0
A Flexible Neural Renderer for Material Visualization
Pub Date : 2019-08-26 DOI: 10.1145/3355088.3365160
T. AakashK., P. Sakurikar, Saurabh Saini, P J Narayanan
Photo realism in computer generated imagery is crucially dependent on how well an artist is able to recreate real-world materials in the scene. The workflow for material modeling and editing typically involves manual tweaking of material parameters and uses a standard path tracing engine for visual feedback. A lot of time may be spent in iterative selection and rendering of materials at an appropriate quality. In this work, we propose a convolutional neural network that quickly generates high-quality ray traced material visualizations on a shaderball. Our novel architecture allows for control over environment lighting which assists in material selection and also provides the ability to render spatially-varying materials. Comparison with state-of-the-art denoising and neural rendering techniques suggests that our neural renderer performs faster and better. We provide an interactive visualization tool and an extensive dataset to foster further research in this area.
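The brief describes the renderer mainly at the interface level: a trained CNN that maps per-texel material parameters plus an environment-lighting control to a shaderball image. The sketch below mirrors that interface with placeholder shapes and a stand-in network; none of the names or dimensions come from the paper.

```python
import numpy as np

def render_preview(material_params, env_light, network):
    """Hypothetical inference call for a neural shaderball previewer.

    material_params: (H, W, P) per-texel parameters (spatially varying).
    env_light: (L,) environment-lighting control code.
    network: callable mapping the stacked inputs to an (H, W, 3) image.
    """
    H, W, _ = material_params.shape
    light_map = np.broadcast_to(env_light, (H, W, env_light.shape[0]))
    x = np.concatenate([material_params, light_map], axis=-1)
    return network(x)

# Toy usage with a stand-in "network" so the example runs end to end.
fake_net = lambda x: x[..., :3]
print(render_preview(np.random.rand(64, 64, 7), np.random.rand(4), fake_net).shape)
```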
Citations: 4
SIGGRAPH Asia 2019 Technical Briefs
DOI: 10.1145/3355088
{"title":"SIGGRAPH Asia 2019 Technical Briefs","authors":"","doi":"10.1145/3355088","DOIUrl":"https://doi.org/10.1145/3355088","url":null,"abstract":"","PeriodicalId":435930,"journal":{"name":"SIGGRAPH Asia 2019 Technical Briefs","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122498747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0