
Proceedings of the 20th ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games: Latest Publications

A framework to transform in-core GPU algorithms to out-of-core algorithms
T. Harada
Porting an existing application to the GPU requires substantial engineering effort. Moreover, because today's GPU memory is small compared to host memory, an application that must access a large data set needs to implement out-of-core logic on top of its GPU implementation, which is additional engineering work [Garanzha et al. 2011]. We present a framework that makes it easy to transform an in-core GPU implementation into an out-of-core one. In this work, we assume that out-of-core memory accesses are read-only. The proposed method is implemented in OpenCL, so we use OpenCL terminology throughout this document.
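The abstract does not reproduce the framework's code, but the underlying idea lends itself to a small illustration. The following is a hedged C++ sketch, not the authors' implementation: the class name and the trivial FIFO paging policy are hypothetical, and the "device" pool is simulated with a host-side array. A large read-only buffer is split into fixed-size pages, and each access faults its page into a bounded resident pool, which is roughly the logic such a framework would hide behind an ordinary in-core access.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <unordered_map>
#include <vector>

// Hypothetical sketch: a read-only software cache that virtualizes a large
// host buffer behind a small bounded pool, page by page.
class ReadOnlyPageCache {
public:
    static constexpr size_t kPageSize = 4096;  // elements per page
    ReadOnlyPageCache(const float* host, size_t count, size_t poolPages)
        : host_(host), count_(count),
          pool_(poolPages * kPageSize), poolPages_(poolPages) {}

    // Fetch one element; faults in the containing page on a miss.
    float fetch(size_t i) {
        size_t page = i / kPageSize;
        auto it = resident_.find(page);
        size_t slot = (it == resident_.end()) ? evictAndLoad(page) : it->second;
        return pool_[slot * kPageSize + i % kPageSize];
    }

private:
    size_t evictAndLoad(size_t page) {
        size_t slot = nextSlot_++ % poolPages_;  // trivial FIFO eviction
        for (auto it = resident_.begin(); it != resident_.end(); ++it)
            if (it->second == slot) { resident_.erase(it); break; }
        size_t base = page * kPageSize;
        size_t n = std::min(kPageSize, count_ - base);
        std::memcpy(&pool_[slot * kPageSize], host_ + base, n * sizeof(float));
        resident_[page] = slot;
        return slot;
    }

    const float* host_;
    size_t count_;
    std::vector<float> pool_;                    // stands in for device memory
    size_t poolPages_;
    std::unordered_map<size_t, size_t> resident_;  // page -> pool slot
    size_t nextSlot_ = 0;
};
```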
Citations: 5
Ductile fracture for clustered shape matching
Ben Jones, April Martin, J. Levine, Tamar Shinar, Adam W. Bargteil
In this paper, we incorporate ductile fracture into the clustered shape matching simulation framework for deformable bodies, thus filling a gap in the shape matching literature. Our plasticity and fracture models are inspired by the finite element literature on deformable bodies, but are adapted to the clustered shape matching framework. The resulting approach is fast, versatile, and simple to implement.
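To make the shape-matching framework concrete, here is a hedged 2D sketch of one clustered-shape-matching step with a crude fracture test. It is not the authors' implementation (their model is 3D and uses their own plasticity and fracture criteria); the closed-form 2D polar decomposition and the strain threshold are illustrative simplifications.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec2 { double x, y; };

struct Cluster {
    std::vector<Vec2> rest;  // rest positions x0_i
    std::vector<Vec2> cur;   // current positions x_i
    bool broken = false;
};

static Vec2 centroid(const std::vector<Vec2>& p) {
    Vec2 c{0, 0};
    for (const Vec2& q : p) { c.x += q.x; c.y += q.y; }
    c.x /= p.size(); c.y /= p.size();
    return c;
}

// One shape-matching step: pull particles toward the rigidly transformed
// rest shape; flag the cluster as fractured if the residual deviation from
// the best rigid fit exceeds a threshold (a stand-in for the paper's model).
void shapeMatchStep(Cluster& cl, double stiffness, double fractureStrain) {
    Vec2 c0 = centroid(cl.rest), c = centroid(cl.cur);
    // Moment matrix A = sum p q^T with entries [a b; cc d]
    // (p = current offset, q = rest offset).
    double a = 0, b = 0, cc = 0, d = 0;
    for (size_t i = 0; i < cl.cur.size(); ++i) {
        double px = cl.cur[i].x - c.x,   py = cl.cur[i].y - c.y;
        double qx = cl.rest[i].x - c0.x, qy = cl.rest[i].y - c0.y;
        a += px * qx; b += px * qy; cc += py * qx; d += py * qy;
    }
    // Closed-form 2D polar decomposition: R is the rotation closest to A.
    double r = std::sqrt((a + d) * (a + d) + (b - cc) * (b - cc));
    if (r < 1e-12) return;  // degenerate cluster
    double cosT = (a + d) / r, sinT = (cc - b) / r;
    // Goal positions g_i = R (x0_i - c0) + c; blend particles toward them.
    double strain = 0;
    for (size_t i = 0; i < cl.cur.size(); ++i) {
        double qx = cl.rest[i].x - c0.x, qy = cl.rest[i].y - c0.y;
        Vec2 g{cosT * qx - sinT * qy + c.x, sinT * qx + cosT * qy + c.y};
        double dx = g.x - cl.cur[i].x, dy = g.y - cl.cur[i].y;
        strain += dx * dx + dy * dy;
        cl.cur[i].x += stiffness * dx;
        cl.cur[i].y += stiffness * dy;
    }
    if (std::sqrt(strain / cl.cur.size()) > fractureStrain) cl.broken = true;
}
```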
Citations: 12
RGB-D IBR: rendering indoor scenes using sparse RGB-D images with local alignments
Yeong-Hu Jeong, Haejoon Kim, H. Seo, Frédéric Cordier, Seungyong Lee
This paper presents an image-based rendering (IBR) system based on RGB-D images. The input to our system consists of RGB-D images captured at sparse locations in the scene and can be expanded by adding new RGB-D images. This sparsity increases the usability of our system, as the user need not capture an RGB-D image stream in a single shot, which may require careful planning with a hand-held camera. Our system begins with a single RGB-D image, and further images are added incrementally one by one. For each newly added image, a batch process aligns it with the previously added images. The process does not include a global alignment step, such as bundle adjustment, and can be completed quickly by computing only local alignments of RGB-D images. Aligned images are represented as a graph, where each node is an input image and each edge carries relative pose information between nodes. A novel view is rendered by picking the nearest input as the reference image and then blending the neighboring images in real time based on depth information. Experimental results on indoor scenes captured with Microsoft Kinect demonstrate that our system can synthesize high-quality novel views from a sparse set of RGB-D images.
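A hedged sketch of the pose-graph bookkeeping the abstract describes follows. The structure, the position-only distance, and the inverse-distance weighting are illustrative assumptions, not the authors' code; a real renderer would also weight per pixel by depth agreement.

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

struct Pose { double x, y, z; /* plus rotation in a full system */ };

struct ViewNode {
    Pose pose;                      // camera pose of this RGB-D image
    std::vector<size_t> neighbors;  // edges = locally aligned image pairs
};

// Pick the reference image for a novel viewpoint: nearest node by position.
size_t pickReference(const std::vector<ViewNode>& graph, const Pose& novel) {
    size_t best = 0;
    double bestD = std::numeric_limits<double>::max();
    for (size_t i = 0; i < graph.size(); ++i) {
        double dx = graph[i].pose.x - novel.x;
        double dy = graph[i].pose.y - novel.y;
        double dz = graph[i].pose.z - novel.z;
        double d = dx * dx + dy * dy + dz * dz;
        if (d < bestD) { bestD = d; best = i; }
    }
    return best;
}

// Normalized inverse-distance blend weights for the reference image and its
// graph neighbors.
std::vector<std::pair<size_t, double>>
blendWeights(const std::vector<ViewNode>& graph, size_t ref, const Pose& novel) {
    std::vector<std::pair<size_t, double>> w{{ref, 0.0}};
    for (size_t n : graph[ref].neighbors) w.push_back({n, 0.0});
    double total = 0.0;
    for (auto& [idx, weight] : w) {
        double dx = graph[idx].pose.x - novel.x;
        double dy = graph[idx].pose.y - novel.y;
        double dz = graph[idx].pose.z - novel.z;
        weight = 1.0 / (1e-6 + std::sqrt(dx * dx + dy * dy + dz * dz));
        total += weight;
    }
    for (auto& [idx, weight] : w) weight /= total;
    return w;
}
```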
Citations: 0
Precision: precomputing environment semantics for contact-rich character animation
Mubbasir Kapadia, Xianghao Xu, Maurizio Nitti, Marcelo Kallmann, Stelian Coros, R. Sumner, M. Gross
The widespread availability of high-quality motion capture data and the maturity of solutions for animating virtual characters have paved the way for the next generation of interactive virtual worlds, which exhibit intricate interactions between characters and the environments they inhabit. However, current motion synthesis techniques have not been designed to scale with complex environments and contact-rich motions, requiring environment designers to manually embed motion semantics in the environment geometry in order to address online motion synthesis. This paper presents an automated approach for analyzing both motions and environments in order to represent the different ways in which an environment can afford a character to move. We extract the salient features that characterize the contact-rich motion repertoire of a character and detect valid transitions in the environment where each of these motions may be possible, along with additional semantics that indicate which surfaces of the environment the character may use for support during the motion. The precomputed motion semantics can be easily integrated into standard navigation and animation pipelines in order to greatly enhance the motion capabilities of virtual characters. The computational efficiency of our approach enables two additional applications. Environment designers can interactively design new environments and get instant feedback on how characters may potentially interact with them, which can be used for iterative modeling and refinement. End users can dynamically edit virtual worlds, and characters will automatically accommodate the changes in the environment in their movement strategies.
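One plausible shape for the precomputed annotations this abstract describes is sketched below. All names and fields here are hypothetical illustrations; the paper's actual representation is richer.

```cpp
#include <cstddef>
#include <vector>

struct AABB { float min[3], max[3]; };

// Hypothetical precomputed record: where a contact-rich clip can start and
// end, and which environment surfaces it uses for support along the way.
struct MotionTransition {
    int clipId;                       // which motion-capture clip applies
    AABB startVolume;                 // where the character may begin the clip
    AABB endVolume;                   // where the clip deposits the character
    std::vector<int> supportSurfaces; // surfaces used for support during motion
};

static bool contains(const AABB& b, const float p[3]) {
    for (int k = 0; k < 3; ++k)
        if (p[k] < b.min[k] || p[k] > b.max[k]) return false;
    return true;
}

// At runtime, navigation simply filters the precomputed table: which
// contact-rich motions can start from the character's current position?
std::vector<int> availableMotions(const std::vector<MotionTransition>& table,
                                  const float pos[3]) {
    std::vector<int> clips;
    for (const MotionTransition& t : table)
        if (contains(t.startVolume, pos)) clips.push_back(t.clipId);
    return clips;
}
```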
Citations: 21
Accurate analytic approximations for real-time specular area lighting
P. Lecocq, A. Dufay, G. Sourimant, Jean-Eudes Marvie
We introduce analytic approximations for accurate real-time rendering of specular surfaces lit by area light sources. Our solution leverages the Irradiance Tensors developed by Arvo for the rendering of Phong surfaces lit by a polygonal light source. Using a reformulation of the 1D boundary edge integral, we develop a general framework for approximating and evaluating the integral in constant time using simple peak-shape functions. To overcome the Phong restriction, we propose a low-cost edge-splitting strategy that accounts for the spherical warp introduced by the half-vector parametrization. Thanks to this novel extension, we accurately approximate common microfacet BRDFs, providing the first practical method that produces specular stretches closely matching ground-truth image references in real time. Finally, using the same approximation framework, we introduce support for spherical and disc area light sources, based on an original polygon-spinning method that supports non-uniform scaling operations and horizon clipping. Implemented on a GPU, our method achieves real-time performance without any assumptions about area-light shape or surface roughness, with quality close to the ground truth.
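Schematically, and with notation heavily simplified from Arvo's derivation (a hedged reconstruction, not the authors' exact formulas), Stokes' theorem turns the Phong-lobe integral over a polygonal light P into a sum of 1D integrals along its boundary edges:

```latex
\[
  I \;=\; \sum_{e \,\in\, \partial P} I_e ,
  \qquad
  I_e \;=\; \int_{0}^{\Theta_e} \cos^{n}\!\theta_e(t)\, \mathrm{d}t .
\]
```

The contribution summarized in the abstract is to replace the cos^n lobe along each edge with a simple peak-shape function whose antiderivative is known in closed form, so each edge term is evaluated in constant time regardless of the exponent n.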
Citations: 0
Real-time hair mesh simulation
Kui Wu, Cem Yuksel
We present a robust real-time hair simulation method using hair meshes. Leveraging existing simulation models for sheet-based cloth, we introduce a volumetric force model for incorporating hair interactions inside the hair mesh volume. We also introduce a position correction method that minimizes the local deformation of the hair mesh due to collision handling. We demonstrate the robustness of our hair simulation method under large time steps and fast motion, and we show that our method can recover the initial hair shape even when the hair mesh undergoes substantial deformation.
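As a rough illustration of a position-correction pass, here is a hedged PBD-style edge-length projection in C++. It is a stand-in for the paper's formulation, which differs in detail; the idea shown is simply to iteratively pull vertex pairs back toward their rest lengths after collision handling has displaced them.

```cpp
#include <cmath>
#include <vector>

struct V3 { double x, y, z; };

struct Edge { int a, b; double restLen; };

// Iteratively project each hair-mesh edge back toward its rest length,
// splitting the correction equally between its two endpoints.
void correctPositions(std::vector<V3>& p, const std::vector<Edge>& edges,
                      int iterations = 4) {
    for (int it = 0; it < iterations; ++it) {
        for (const Edge& e : edges) {
            V3& pa = p[e.a];
            V3& pb = p[e.b];
            double dx = pb.x - pa.x, dy = pb.y - pa.y, dz = pb.z - pa.z;
            double len = std::sqrt(dx * dx + dy * dy + dz * dz);
            if (len < 1e-12) continue;          // degenerate edge
            double s = 0.5 * (len - e.restLen) / len;
            pa.x += s * dx; pa.y += s * dy; pa.z += s * dz;
            pb.x -= s * dx; pb.y -= s * dy; pb.z -= s * dz;
        }
    }
}
```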
Citations: 19
Triangle reordering for reduced overdraw in animated scenes
Songfang Han, P. Sander
We introduce an automatic approach for optimizing the triangle rendering order of animated meshes, with the objective of reducing overdraw while maintaining good post-transform vertex cache efficiency. Our approach builds on prior methods designed for static meshes. We propose an algorithm that clusters the space of viewpoints and key frames. For each cluster, we generate a triangle order that exhibits satisfactory vertex cache efficiency and low overdraw. Results show that our approach significantly reduces overdraw throughout the entire animation sequence while requiring only a few index buffers. We expect this approach to be useful for games and other real-time rendering applications that involve complex shading of articulated characters.
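The runtime side of such a scheme is simple; a hedged C++ sketch follows. The cost metric combining view direction and animation frame is a hypothetical illustration, not the paper's: offline clustering is assumed to have produced one optimized index buffer per (viewpoint, keyframe) cluster, and at draw time we pick the nearest one.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct ClusterBuffer {
    float viewDir[3];               // representative view direction (unit)
    float keyFrame;                 // representative animation frame
    std::vector<uint32_t> indices;  // triangle order optimized offline
};

// Pick the index buffer whose cluster is nearest the current camera
// direction and animation frame. Assumes 'buffers' is non-empty and
// 'frameScale' balances the two distance terms.
const std::vector<uint32_t>&
selectIndexBuffer(const std::vector<ClusterBuffer>& buffers,
                  const float camDir[3], float frame, float frameScale) {
    const ClusterBuffer* best = &buffers.front();
    float bestCost = 1e30f;
    for (const ClusterBuffer& b : buffers) {
        float dot = b.viewDir[0] * camDir[0] + b.viewDir[1] * camDir[1] +
                    b.viewDir[2] * camDir[2];
        float cost = (1.0f - dot) + frameScale * std::fabs(b.keyFrame - frame);
        if (cost < bestCost) { bestCost = cost; best = &b; }
    }
    return best->indices;
}
```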
Citations: 6
A phenomenological scattering model for order-independent transparency
M. McGuire, Michael Mara
Translucent objects such as fog, smoke, glass, ice, and liquids are pervasive in cinematic environments because they frame scenes in depth and create visually compelling shots. Unfortunately, they are hard to simulate in real time and have thus previously been rendered poorly compared to opaque surfaces in games. This paper introduces the first model for a real-time rasterization algorithm that can simultaneously approximate the following transparency phenomena: wavelength-varying ("colored") transmission, translucent colored shadows, caustics, partial coverage, diffusion, and refraction. All render efficiently on modern GPUs using order-independent draw calls and low bandwidth. We include source code for the transparency and resolve shaders.
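This model builds on weighted, blended order-independent transparency (McGuire and Bavoil 2013). As background only, here is a minimal single-pixel sketch of that underlying compositing step in C++; it is my reconstruction of the classic resolve, not the paper's released shader code, and it omits the scattering extensions (transmission, diffusion, refraction).

```cpp
#include <algorithm>

struct RGBA { float r, g, b, a; };

// During the transparency pass, each fragment adds weight * premultiplied
// color and weight * alpha into 'accum', and the separate 'revealage'
// target accumulates the product of (1 - alpha) over all fragments.
RGBA resolvePixel(const RGBA& accum, float revealage, const RGBA& background) {
    float denom = std::max(accum.a, 1e-5f);  // avoid divide-by-zero
    float cover = 1.0f - revealage;          // how much transparency shows
    RGBA out;
    out.r = (accum.r / denom) * cover + background.r * revealage;
    out.g = (accum.g / denom) * cover + background.g * revealage;
    out.b = (accum.b / denom) * cover + background.b * revealage;
    out.a = 1.0f;
    return out;
}
```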
Citations: 8
Data-driven adaptive history for image editing
Hsiang-Ting Chen, Li-Yi Wei, Bjoern Hartmann, Maneesh Agrawala
Digital image editing is usually an iterative process; users repetitively perform short sequences of operations, as well as undo and redo them using history navigation tools. In our collected data, undo, redo, and navigation constitute about 9 percent of all commands and consume a significant amount of user time. Unfortunately, such activities also tend to be tedious and frustrating, especially for complex projects. We address this crucial issue with adaptive history, a UI mechanism that groups relevant operations together to reduce user workload. Such grouping can occur at various history granularities. We present the two that we found most useful. On a fine level, we group repeating command patterns together to facilitate smart undo. On a coarse level, we segment the command history into chunks for semantic navigation. The main advantages of our approach are that it is intuitive to use and easy to integrate into any existing tool with a text-based history list. Unlike prior methods that are predominantly rule-based, our approach is data-driven, and thus adapts better to common editing tasks, which exhibit enough diversity and complexity to defy predetermined rules or procedures. A user study showed that our system performs quantitatively better than two baselines, and participants also gave positive qualitative feedback on the system's features.
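A toy version of the fine-level grouping idea is sketched below in C++: find the shortest trailing pattern that repeats at the end of the history and treat the whole run as one undo unit. The paper's actual grouping is data-driven and learned from collected usage; this fixed pattern matcher is only an illustrative stand-in.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Return how many trailing commands to group into one "smart undo" unit:
// the length of the longest run formed by repeating a short pattern at the
// end of the history, or 1 if no repetition is found.
size_t trailingGroupLength(const std::vector<std::string>& history,
                           size_t maxPattern = 4) {
    size_t n = history.size();
    for (size_t len = 1; len <= maxPattern && len * 2 <= n; ++len) {
        size_t repeats = 1;
        while ((repeats + 1) * len <= n) {
            bool match = true;
            for (size_t k = 0; k < len; ++k) {
                if (history[n - len + k] != history[n - (repeats + 1) * len + k]) {
                    match = false;
                    break;
                }
            }
            if (!match) break;
            ++repeats;
        }
        if (repeats > 1) return repeats * len;  // undo the whole repeated run
    }
    return 1;  // no pattern found: undo a single command
}
```

For example, a history ending in [crop, brush, blur, brush, blur, brush, blur] yields a group of 6 commands (three repeats of the pattern brush, blur), so one undo steps back over the whole retouching run.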
Citations: 10
SSVDAGs: symmetry-aware sparse voxel DAGs
A. Villanueva, F. Marton, E. Gobbetti
Voxelized representations of complex 3D scenes are widely used nowadays to accelerate visibility queries in many GPU rendering techniques. Since GPU memory is limited, it is important that these data structures can be kept within a strict memory budget. Recently, directed acyclic graphs (DAGs) have been successfully introduced to compress sparse voxel octrees (SVOs), but they are limited to sharing identical regions of space. In this paper, we show that a more efficient lossless compression of geometry can be achieved, while keeping the same visibility-query performance, by merging subtrees that are identical up to a similarity transform, and by exploiting the skewed distribution of references to shared nodes to store child pointers using a variable bit-rate encoding. We also describe how, by selecting plane reflections along the main grid directions as symmetry transforms, we can construct highly compressed GPU-friendly structures using a fully out-of-core method. Our results demonstrate that state-of-the-art compression and real-time tracing performance can be achieved on high-resolution voxelized representations of real-world scenes with very different characteristics, including large CAD models, 3D scans, and typical gaming models, leading, for instance, to real-time in-core GPU visualization, with shading and shadows, of the full Boeing 777 at sub-millimetric precision.
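A hedged C++ sketch of the deduplication idea at its coarsest level follows: a 2x2x2 node is an 8-bit child-occupancy mask, and two nodes can share storage if one mask is a reflection of the other along the grid axes. Real SSVDAG nodes also canonicalize child pointers recursively and encode the chosen transform in the parent; this toy handles masks only.

```cpp
#include <cstdint>

// Apply axis reflections to a 2x2x2 occupancy mask. Child index bit layout:
// bit i set means the child at (x, y, z) = (i & 1, (i >> 1) & 1, (i >> 2) & 1)
// is occupied.
static uint8_t reflectMask(uint8_t mask, bool fx, bool fy, bool fz) {
    uint8_t out = 0;
    for (int i = 0; i < 8; ++i) {
        if (!(mask & (1 << i))) continue;
        int x = (i >> 0) & 1, y = (i >> 1) & 1, z = (i >> 2) & 1;
        if (fx) x ^= 1;  // mirror along x
        if (fy) y ^= 1;  // mirror along y
        if (fz) z ^= 1;  // mirror along z
        out |= uint8_t(1 << (x | (y << 1) | (z << 2)));
    }
    return out;
}

// Canonical representative: the smallest mask over all 8 axis-reflection
// combinations. Nodes with equal canonical masks can be merged; the winning
// transform index is reported so a parent pointer can tag the child with it.
uint8_t canonicalMask(uint8_t mask, int& transformOut) {
    uint8_t best = mask;
    transformOut = 0;
    for (int t = 1; t < 8; ++t) {
        uint8_t m = reflectMask(mask, t & 1, t & 2, t & 4);
        if (m < best) { best = m; transformOut = t; }
    }
    return best;
}
```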
Citations: 34